Planet Russell


ME: Words Have Meanings

As a follow-up to my post with Suggestions for Trump Supporters [1] I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try to reclaim the word for its original use or whether we should just accept that it’s a lost cause. But generally, based on context, it’s clear which meaning is intended. There is also some overlap between the definitions: some people who like to experiment with computers conduct experiments with computers they aren’t permitted to use, and some people who are career computer criminals started out experimenting with computers for fun.

But sometimes words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle; they should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn’t add any value to any discussion apart from demonstrating that the person making such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist”, which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone were to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist; they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they said “Russia has always been shit” then it would be a clear statement; people can agree or disagree with that, but everyone knows what is meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different to most of the world (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and Americans should reject the Republican party because of their Russian connection it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

Cryptogram: Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer

Of course the ESS ExpressVote voting computer will have lots of security vulnerabilities. It's a computer, and computers have lots of vulnerabilities. This one is particularly interesting because it's the result of a security mistake in the design process. Someone didn't think the security through, and the result is a voter-verifiable paper audit trail that doesn't provide the security it promises.

Here are the details:

Now there's an even worse option than "DRE with paper trail"; I call it the "press this button if it's OK for the machine to cheat" option. The country's biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are "combination" machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas. The voter brings a blank ballot to the machine, inserts it into a slot, chooses candidates. Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect. If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself). It still suffers from the problems I describe above: voters may not carefully review all the choices, especially in down-ballot races; counties need to buy a lot more voting machines, because voters occupy the machine for a long time (in contrast to op-scan ballots, where they occupy a cheap cardboard privacy screen).

But here's the amazingly bad feature: "The version that we have has an option for both ways," [Johnson County Election Commissioner Ronnie] Metsker said. "We instruct the voters to print their ballots so that they can review their paper ballots, but they're not required to do so. If they want to press the button 'cast ballot,' it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine." [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it's easy for a hacked machine to cheat undetectably! All the fraudulent vote-counting program has to do is wait until the voter chooses between "cast ballot without inspecting" and "inspect ballot before casting." If the latter, then don't cheat on this ballot. If the former, then change votes how it likes, and print those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.

A voter-verifiable paper audit trail does not require every voter to verify the paper ballot. But it does require that every voter be able to verify the paper ballot. I am continuously amazed by how bad electronic voting machines are. Yes, they're computers. But they also seem to be designed by people who don't understand computer (or any) security.

Worse Than Failure: CodeSOD: Flip to a Blank Page

You have a web application, written in Spring. Some pages live at endpoints where they’re accessible to the world. Other pages require authentication, and yet others require that users belong to specific roles. Fortunately for you, Spring has features and mechanisms to handle all of those details, down to making it extremely easy to return the appropriate HTTP error.

Unfortunately for you, one of the developers on your team is a Rockstar™ who is Officially Very Smart and absolutely refuses to use the tools your platform provides. When that Certified Super Genius leaves the organization, you inherit their code.

That’s what happened to Emmer. And that’s how they found this:

List<String> typeList = getTypeList (loginName);
if(CollectionUtils.size(typeList) > 0){
    return viewRepository.findBySubmsnTypeList(typeList, pr);
}
else{
      return viewRepository.findEmptyPage(pr);
}

This doesn’t look too bad, does it? It’s not great: why are roles called “types”, why are we representing them with strings, why are we checking whether a user is logged in by looking at which roles they have rather than whether they’re actually logged in… and why on Earth would you send the user an empty page if they’re not authenticated?

The question is: how do you generate an empty page? If “just return an empty view object” is what you thought you’d do, you’re obviously not a Rockstar™.

	@Query(value = "from viewTable where 1=0", nativeQuery = false)
	public Page<viewTable> findEmptyPage(Pageable pr);

If you want to return an empty page, you run a query against the database which is guaranteed to return absolutely no results. That guarantees that you’ll send a blank page back, because there’s no data to put on the page. Genius! This way, returning nothing requires a hop across the network and a call to the database, instead of just, y’know, returning nothing (or even better, returning an error page).
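For contrast, here is a minimal sketch of the boring alternative, assuming a Spring Data version that provides the Page.empty() factory and reusing the viewTable entity from the snippet above:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;

// Build an empty page in memory: no repository call, no network hop, no query.
public Page<viewTable> findEmptyPage(Pageable pr) {
	return Page.empty(pr);
}

No database required, and the pagination metadata still comes back consistent with the supplied Pageable.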

Suffice to say, when this master programmer gave their two weeks’ notice, Emmer and the rest of the team suggested that this programmer should spend their last two weeks on vacation.


Planet Debian: Lars Wirzenius: vmdb2 roadmap

I now have a rudimentary roadmap for reaching 1.0 of vmdb2, my Debian image building tool.

Visual roadmap

The visual roadmap is generated from the following YAML file:

vmdb2_1_0:
  label: |
    vmdb2 is production ready
  depends:
    - ci_builds_images
    - docs
    - x220_install

docs:
  label: |
    vmdb2 has a user
    manual of acceptable
    quality

x220_install:
  label: |
    x220 can install Debian
    onto a Thinkpad x220
    laptop

ci_builds_images:
  label: |
    CI builds and publishes
    images using vmdb2
  depends:
    - amd64_images
    - arm_images

amd64_images:
  label: |
    CI: amd64 images

arm_images:
  label: |
    CI: arm images of
    various kinds

Planet Debian: Norbert Preining: Han Kang: The Vegetarian

The Vegetarian by Han Kang (한강) is a rough, dark, and intriguing story about two families onto which a series of strange events inflicts irreparable damage. Set in modern-day Korea, it draws a grueling image of how the decision to become vegetarian kicks all members of the family into an unstoppable descent into a precipice of horror.

That evening there was a feast at our house. All the middle-aged men from the market alleyways came, everyone my father considered worth knowing. The saying goes that for a wound caused by a dog bite to heal you have to eat that same dog, and I did scoop up a mouthful for myself. No, in fact I ate an entire bowlful with rice. The smell of burnt flesh, which the perilla seeds couldn’t wholly mask, pricked my nose. I remember the two eyes that had watched me, while the dog was made to run on, while he vomited blood mixed with froth, and how later they had seemed to appear, flickering, on the surface of the soup. But I don’t care. I really didn’t care.
– The Vegetarian

The novel consists of three connected short stories about the two sisters Yeong-hye and In-hye. Both are seemingly happily married, Yeong-hye to a businessman, her sister In-hye to a video artist. In the first, name-giving story, “The Vegetarian”, Yeong-hye becomes vegetarian after a recurring nightmare starts to plague her. Despite her husband’s attempts to keep up a normal life, things go more and more wrong until a family intervention is called at her sister’s place, with their parents present. Her father, who served in Vietnam, demands that Yeong-hye eat meat, and after her refusal, with the help of her husband and younger brother, he forces meat into her mouth. This triggers a violent response: she breaks free, grabs a knife and cuts her wrist. She is brought to a hospital and is later committed as mentally unstable. The first story closes with her escaping from the hospital. She is finally found sitting bare-breasted in the park asking “Have I done something wrong?”, and a dead bird covered with bite marks is retrieved from her palm.

This was the body of a beautiful young woman, conventionally an object of desire, and yet it was a body from which all desire had been eliminated. But this was nothing so crass as carnal desire, not for her—rather, or so it seemed, what she had renounced was the very life that her body represented.
– Mongolian Mark

The second story, “Mongolian Mark”, switches focus to In-hye’s husband. He, too, is haunted by a dream, but his is of two people making love with their bodies painted with flowers. When he learns from In-hye that her sister Yeong-hye still has her Mongolian mark, despite such birthmarks usually disappearing, he grows more and more obsessed with enacting his dream with Yeong-hye in the female role. The reader learns that Yeong-hye has been divorced, and on a visit to bring her fruit the brother-in-law finds her naked, but unashamed of it, in her apartment. After initial hesitation he asks her to model for him, to which she agrees. After a first painting session with her alone, the brother-in-law arranges a second one in which a friend plays the male role. After a harmonious start the artist asks the two to engage in intercourse, which becomes too much for the friend, and he leaves. Yeong-hye says that during all this she felt the fear and pressure of the persistent nightmare disappearing. The brother-in-law then asks a friend to paint his own body with flowers according to his designs, visits Yeong-hye, and the two continue where the first video left off. After a deep and exhausted sleep they wake up to In-hye, who has entered the apartment and played back the recorded video. She calls the emergency services on grounds of mental illness of both, and after his brief attempt to throw himself off the balcony, both are taken into custody.

Life is such a strange thing, she thinks, once she has stopped laughing. Even after certain things have happened to them, no matter how awful the experience, people still go on eating and drinking, going to the toilet and washing themselves—living, in other words. And sometimes they even laugh out loud. And they probably have these same thoughts, too, and when they do it must make them cheerlessly recall all the sadness they’d briefly managed to forget.
– Flaming Trees

The third and last story, “Flaming Trees”, finally focuses on In-hye. She has split up with her husband, and she and their son remain the only members of the family to support Yeong-hye, who has been transferred to a hospital for the mentally ill. In-hye regularly reflects on her difficulties with the family and grows considerably depressed. Yeong-hye’s condition again grows more severe: she imagines becoming a tree, rejects all food, and escapes from the hospital, only to be found in the forest in the rain. On her way to the hospital, In-hye recalls their childhood and the harsh treatment Yeong-hye received from their father, which inflicted severe mental damage on both of them. One of the core memories is of the two of them getting lost as children; when they find their way back, Yeong-hye suggests running away from home. Returning home, In-hye feels happiness but sees the subdued and depressed Yeong-hye. With this memory, In-hye is present during an attempt to force-feed and sedate Yeong-hye. Observing the pain inflicted on her sister, In-hye bites the nurse restraining her. Finally In-hye brings her sister to a different hospital for her final stages. “The trees by the side of the road are blazing, green fire undulating like the rippling flanks of a massive animal, wild and savage.”


Although throughout the book one feels that all the horrors started with Yeong-hye’s decision to become vegetarian, the memory recalled by In-hye in the last part closes a circle. No single person can be blamed; innocence does not exist. The author herself stated:

I wanted to deal with my long-lasting questions about the possibility/impossibility of innocence in this world, which is mingled with such violence and beauty.

Planet Debian: Dirk Eddelbuettel: binb 0.0.1: binb is not Beamer

Following a teaser tweet two days ago, we are thrilled to announce that binb version 0.0.1 arrived on CRAN earlier this evening.

binb extends a little running joke^Htradition I created a while back and joins three other CRAN packages offering RMarkdown integration:

  • tint for tint is not Tufte : pdf or html papers with a fresher variant of the famed Tufte style;
  • pinp for pinp is not PNAS : two-column pdf vignettes in the PNAS style (which we use for several of our packages);
  • linl for linl is not Letter : pdf letters

All four offer easy RMarkdown integration, leaning heavily on the awesome super-power of pandoc as well as general R glue.

This package (finally) wraps something I had offered for Metropolis via a simpler GitHub repo – a repo I put together more-or-less spur-of-the-moment-style when asked for it during the useR! 2016 conference. It also adds the lovely IQSS Beamer theme by Ista Zahn which offers a rather sophisticated spin on the original Metropolis theme by Matthias Vogelgesang.

We put two simple teasers on the GitHub repo.

Metropolis

Consider the following minimal example, adapted from the original minimal example at the bottom of the Metropolis page:

---
title: A minimal example
author: Matthias Vogelgesang
date: \today
institute: Centre for Modern Beamer Themes
output: binb::metropolis
---

# First Section

## First Frame

Hello, world!

It creates a three-page pdf file which we converted into this animated gif (which loses font crispness, sadly):

IQSS

Similarly, for IQSS we use the following input adapting the example above but showing sections and subsections for the nice headings it generates:

---
title: A minimal example
author: Ista Zahn
date: \today
institute: IQSS
output: binb::iqss
---

# First Section

## First Sub-Section

### First Frame

Hello, world!

# Second Section

## Second Subsection

### Second Frame

Another planet!

This creates this pdf file which we converted into this animated gif (also losing font crispness):

The initial (short) NEWS entry follows:

Changes in binb version 0.0.1 (2018-09-19)

  • Initial CRAN release supporting Metropolis and IQSS

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian: Gunnar Wolf: Privacy and Anonymity Colloquium • Activity program announced!

It's only two weeks until the beginning of the privacy and anonymity colloquium we will be holding at the Engineering Faculty of my University. Of course, it's not by mere chance that this colloquium starts just after the Tor Meeting, which will happen for the first time in Latin America (and in our city!).

So, even though changes are still prone to happen, I am happy to announce the activity program for the colloquium!

I know some people will ask, so: we don't have the infrastructure to commit to having a video feed from it. We will, though, record the presentations on video, and I have a commitment to the university to produce a book from it within a year's time. So, at some point in the future, I will be able to give you a full copy of the topics we will discuss!

But, if you are in Mexico City, no excuses: You shall come to the colloquium!

Attachments: poster.pdf (881.35 KB), poster_small.jpg (81.22 KB)

Planet Debian: Johannes Schauer: mmdebstrap: unprivileged reproducible multi-mirror Debian chroot in 11 s

I wrote an alternative to debootstrap. I call it mmdebstrap which is short for multi-mirror debootstrap. Its interface is very similar to debootstrap, so you can just do:

$ sudo mmdebstrap unstable ./unstable-chroot

And you'll get a Debian unstable chroot just as debootstrap would create it. It also supports the --variant option with minbase and buildd values which install the same package sets as debootstrap would.
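For instance, a multi-mirror minbase build might look something like this (a hedged sketch based only on the options described above; the second mirror URL is just a placeholder, and depending on your setup you may still want sudo rather than one of the unprivileged modes listed below):

$ mmdebstrap --variant=minbase unstable ./unstable-chroot \
      http://deb.debian.org/debian http://mirror.example.org/debian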

A list of advantages in contrast to debootstrap:

  • more than one mirror possible (or really anything that is a legal apt sources.list entry)
  • security and updates mirror included for Debian stable chroots (a wontfix for debootstrap)
  • 2-3 times faster (for debootstrap variants)
  • chroot with apt in 11 seconds (if only installing Essential: yes and apt)
  • gzipped tarball with apt is 27M small
  • bit-by-bit reproducible output (if $SOURCE_DATE_EPOCH is set)
  • unprivileged operation using Linux user namespaces, fakechroot or proot (mode is chosen automatically)
  • can operate on filesystems mounted with nodev
  • foreign architecture chroots with qemu-user (without manually invoking --second-stage)

You can find the code here:

https://gitlab.mister-muffin.de/josch/mmdebstrap

Planet Debian: Mark Brown: 2018 Linux Audio Miniconference

As in previous years we’re trying to organize an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Sunday October 21st in Edinburgh, the day before ELC Europe starts there. Cirrus Logic have generously offered to host this in their Edinburgh office:

7B Nightingale Way
Quartermile
Edinburgh
EH3 9EG

As with previous years let’s pull together an agenda through a mailing list discussion on alsa-devel – if you’ve got any topics you’d like to discuss please join the discussion there.

There’s no cost for the miniconference but if you’re planning to attend please sign up using the document here.

Krebs on Security: Mirai Botnet Authors Avoid Jail Time

Citing “extraordinary cooperation” with the government, a court in Alaska on Tuesday sentenced three men to probation, community service and fines for their admitted roles in authoring and using “Mirai,” a potent malware strain used in countless attacks designed to knock Web sites offline — including an enormously powerful attack in 2016 that sidelined this Web site for nearly four days.

The men — Paras Jha, 22, of Fanwood, New Jersey, Josiah White, 21, of Washington, Pa., and Dalton Norman of Metairie, La. — were each sentenced to five years probation, 2,500 hours of community service, and ordered to pay $127,000 in restitution for the damage caused by their malware.

Mirai enslaves poorly secured “Internet of Things” (IoT) devices like security cameras, digital video recorders (DVRs) and routers for use in large-scale online attacks.

Not long after Mirai first surfaced online in August 2016, White and Jha were questioned by the FBI about their suspected role in developing the malware. At the time, the men were renting out slices of their botnet to other cybercriminals.

Weeks later, the defendants sought to distance themselves from their creation by releasing the Mirai source code online. That action quickly spawned dozens of copycat Mirai botnets, some of which were used in extremely powerful denial-of-service attacks that often caused widespread collateral damage beyond their intended targets.

A depiction of the outages caused by the Mirai attacks on Dyn, an Internet infrastructure company. Source: Downdetector.com.

The source code release also marked a period in which the three men began using their botnet for far more subtle and less noisy criminal moneymaking schemes, including click fraud — a form of online advertising fraud that costs advertisers billions of dollars each year.

In September 2016, KrebsOnSecurity was hit with a record-breaking denial-of-service attack from tens of thousands of Mirai-infected devices, forcing this site offline for several days. Using the pseudonym “Anna_Senpai,” Jha admitted to a friend at the time that the attack on this site was paid for by a customer who rented tens of thousands of Mirai-infected systems from the trio.

In January 2017, KrebsOnSecurity published the results of a four-month investigation into Mirai which named both Jha and White as the likely co-authors of the malware.  Eleven months later, the U.S. Justice Department announced guilty pleas by Jha, White and Norman.

Prior to Tuesday’s sentencing, the Justice Department issued a sentencing memorandum that recommended lenient punishments for the three men. FBI investigators argued the defendants deserved light sentences because they had provided the government “extraordinary cooperation” in identifying other cybercriminals engaged in related activity and helping to thwart massive cyberattacks on several companies.

Paras Jha, in an undated photo from his former LinkedIn profile.

The government said Jha was especially helpful, devoting hundreds of hours of work to helping investigators. According to the sentencing memo, Jha has since landed a part-time job at a cybersecurity firm, although the government declined to name his employer.

However, Jha is not quite out of the woods yet: He has also admitted to using Mirai to launch a series of punishing cyberattacks against Rutgers University, where he was enrolled as a computer science student at the time. Jha is slated to be sentenced next week in New Jersey for those crimes.

The Mirai case was prosecuted out of Alaska because the lead FBI agent in the investigation, 36-year-old Special Agent Elliott Peterson, is stationed there. Peterson was able to secure jurisdiction for the case after finding multiple DVRs in Alaska infected with Mirai. Last week, Peterson traveled to Washington, D.C. to join several colleagues in accepting the FBI’s Director Award — the bureau’s highest honor — for the Mirai investigation.

TED: Meet the Fall 2018 class of TED Residents


In the foreground, activist Glenn Cantave (far left) and artist Kemi Layeni (far right) introduce themselves; behind them, Savannah Rodgers (center) says hello to alumnus Bayeté Ross Smith during the meet-and-greet that kicked off this season’s Residency. Photo: Dian Lofton / TED

On September 12, TED welcomed its latest class to the TED Residency program, an in-house incubator for breakthrough ideas. These 21 Residents will spend 14 weeks in TED’s New York headquarters working and thinking together. The class includes exceptional people from all over the map — from Bulgaria to South Africa, Canada to Kansas.

New Residents include:

  • A rabbi helping to bring centuries-old wisdom to modern-day social media
  • An underwater photographer depicting the surprising variety of wildlife in urban waters
  • A former banker who inherited a plot of land and is now learning how to farm
  • A journalist seeking transparency in the US healthcare system
  • A rapper helping others find their voice

Heidi Boisvert, PhD, is an interdisciplinary artist and creative technologist building an open-source biometric lab and AI system to understand humans’ emotional reactions to media. Her goal: to figure out what makes media truly affecting and effective, so the knowledge can be used for good and to safeguard against manipulation.

TED Fellow and human rights activist Yana Buhrer Tavanier — based in Sofia, Bulgaria — is the co-founder of Fine Acts, which bridges human rights and art to instigate social change. In 2017, she launched Fine Acts Labs, bringing activists, artists and technologists together in one space to tackle one social issue.

Recognizing that history-book narratives are Eurocentric, Glenn Cantave’s organization Movers and Shakers wants to paint a fuller picture using augmented reality. His team is working with local educators to create an AR book on the true story of Christopher Columbus. He also plans to launch neighborhood walking tours that depict digital landmarks of black and brown people in the US. He calls the project a “Pokemon-Go for contextualized history.”

Maria Adele Carrai is a sinologist and political scientist finishing a book about sovereignty in China, and how its infrastructure initiatives are changing the country’s place in the world.


From left, investor Diane Henry chats with fellow residents Heidi Boisvert, an artist and technologist, and Azi Jamalian, a cognitive scientist and entrepreneur. Photo: Dian Lofton / TED

Luz Claudio, PhD, is an environmental health research scientist, author and educator who develops innovative tools for research and the mentoring of young scientists. She is also developing new ways to translate discoveries from the environmental health sciences into practical actions—like writing a children’s book!

Keith Ellenbogen is an acclaimed underwater photographer who focuses on environmental conservation. His work dives into New York’s surprisingly vibrant and diverse marine ecosystems.

Diane Henry is a tech seed investor and startup strategist dedicated to finding, funding and amplifying visionary founders who believe in building ethics into tech as it evolves.

Muhammed Y. Idris is putting his PhD in social data analytics to good use through his app, Atar — a mobile platform that gives refugees seeking asylum information about their rights and available services. For now, the app is limited to people in Montreal but in the last year he has partnered with the UNHCR to expand his work further.

Azadeh (Azi) Jamalian is an entrepreneur and cognitive science PhD building invention hubs and communities for kid inventors. Her mission is to create a world in which all children dare to act on their dreams.

Keith Kirkland is an industrial designer, CEO and cofounder of WearWorks, a company that makes haptic navigation products for the blind. Their product, Wayband, helped the first blind person ever navigate the 2017 NYC Marathon without sighted assistance.


TED Res alumni came back to give the new class some advice on how to take advantage of their time. A veteran of the first class, podcaster and author Brian McCullough (in orange), gives his thoughts. Photo: Dian Lofton / TED

TEDx organizer Kelo Kubu started her career in finance and later founded a design studio. She then inherited farmland as restitution from apartheid but didn’t know what to do with it. She’s launching an ag-tech accelerator focusing on urban female farmers so they can all learn to profit from their land.

Kemi Layeni is an artist focusing on the stories and experiences of people of African descent. She is working on a multimedia project about slavery “through a speculative, Afro-futuristic and Afro-surreal lens.”

Entrepreneur and graduate of The Second City, Mary Lemmer is applying improv comedy training to help people become better leaders.

Mordechai Lightstone is a Chasidic rabbi, the social media editor at Chabad.org and the founder of Tech Tribe. He is passionate about using new media to further Jewish identity and community building.

Jullia Suhyoung Lim designs technology to tackle challenges in education and medicine. She is currently designing an AR app to help autistic children identify abstract concepts in real life.

Samy el-Noury is an actor and LGBTQ+ advocate developing his first full-length play, about the life of a historical transgender activist. His goal is to recognize trans people in history and create more opportunities for working trans artists.


Technologist Muhammed Y. Idris meets alum Danielle Gustafson. Photo: Dian Lofton / TED

Jeanne Pinder demands radical transparency from the US healthcare system. Although price opacity is often drowned out by the noisier debate about policy and politics, the effects on real people are huge and growing, she says. Her award-winning startup, ClearHealthCosts, reports on and crowdsources true costs of medical procedures and makes the data public.

Originally from Colombia, Mariana Prieto designs scalable solutions to solve complex wildlife conservation challenges.

Savannah Rodgers is a documentary filmmaker from Kansas City. She wanted to make movies after seeing her favorite film, Chasing Amy (1997), for the first time at age 12. Now she’s making a documentary about the film and its enduring, controversial LGBTQ+ legacy.

Julie Scelfo is a journalist and social justice activist whose stories about society and human behavior reframe popular ideas and ask us to rethink basic assumptions. The author of The Women Who Made New York says she believes Donald Trump is onto something when he talks about “fake news” — although she would define it differently.

Raegan Sealy is a poet, singer, rapper and the founder of Sound Board NYC. An advocate of social change through the arts, Raegan and her work draw on her own experiences of overcoming domestic violence, sexual abuse and addiction.

Two members of this class, Kemi Layeni and Savannah Rodgers, won their seats in the Adobe Project 1324 TED Residency challenge, for young creatives (aged 18 to 24) with big ideas to share. The TED team chose the winners; Adobe is generously picking up the tab for their travel, room and board.

The fall Residency will culminate with a program of TED Talks in December. Would you or someone you know like to become a Resident? Applications for the Spring 2019 Residency (February 25–May 31) open October 1 and will close December 3, 2018. You can learn more at ted.com/residency.

Cory Doctorow: Hey, Swarthmore! I’m headed your way next week




I’m heading to the east coast next week, first for a lecture series in NYC for Columbia University (including a conversation with Radiolab’s Jad Abumrad about Big Tech, monopolies and democratic technology); and from there I’m headed to Pennsylvania for a talk about my novel Walkaway at Swarthmore, on Sept 28 from 7-9PM at the Lang Performing Arts Center Room (LPAC) 101 Cinema. All the events are free, though some require tickets, so be sure to check in advance. Hope to see you there!

Worse Than Failure: Westward Ho!


Roman K. once helped to maintain a company website that served a large customer base mainly within the United Kingdom. Each customer was itself a business offering a range of services. The website displayed these businesses on a map so that potential customers could find them. This was done by geocoding the businesses' addresses to get their longitude and latitude coordinates, then creating points on the map at those locations.

Simple enough—except that over time, some of the businesses began creeping west through the Atlantic Ocean, toward the east coast of North America.

Roman had no idea where to start with troubleshooting. It was only happening with a subset of businesses, and only intermittently. He was certain the initial geocoded coordinates were correct. Those longitude and latitude values were stored in a database table of customer data with strict permissions in place. Even if he wanted to change them himself, he couldn't. Whatever the problem was, it was powerful, and oddly selective: it only ever changed longitude values. Latitude values were never touched.

Were they being hacked by their competitors? Were their customers migrating west en masse? Were goblins messing with the database every night when no one was looking?

Roman dug through reams of code and log files, searching desperately for any whiff of "longitude." He questioned his fellow developers. He blamed his fellow developers. It was all for naught, for the problem was no bug or hack. The problem was a "feature" of the database access layer. Roman discovered that the user class had a simple destructor method that saved all the currently loaded data back to the database:

function __destruct() {
     if ($this->_changed) {
          $this->database->update('customer_table', $this->_user_data, array('customer_id' => $this->_user_data['customer_id']));
     }
}

The legwork was handled by a method called update(). And just what did update() do?

public function update($table, $record, $where = '') {
     // snip
     foreach ($record as $field => $value) {
          if (isset($value[0]) && is_numeric($number) && ($value[0] == '+' || $value[0] == '-')) {
               $set[] = "`$field` = `$field` {$value[0]} ".$number;
          }
     }
}

Each time a customer logged into their account via the website and changed their data, any value that began with a plus or minus sign would have some mysterious quantity (contained in the variable $number) either added to or subtracted from it. Say a customer's business happened to be located west of the prime meridian. Their longitude would therefore be stored as a negative value, like -3. The next time that customer logged in and changed anything, the update() method would subtract $number from -3, relocating the customer to prime oceanic property. Latitude was never affected because latitude coordinates above the equator are positive. These coordinate values were simply stored as-is, with no sign in front of them.

There was no documentation for the database access layer. The developer responsible for it was long gone. As such, Roman never did learn whether there were some legitimate business reason for this "feature" to exist. He added a flag to the update() method so that customers could disable the behavior upon request. Ever since, the affected companies have remained safely anchored upon UK soil.


Cryptogram: Pegasus Spyware Used in 45 Countries

Citizen Lab has published a new report about the Pegasus spyware. From a ZDNet article:

The malware, known as Pegasus (or Trident), was created by Israeli cyber-security firm NSO Group and has been around for at least three years -- when it was first detailed in a report over the summer of 2016.

The malware can operate on both Android and iOS devices, albeit it's been mostly spotted in campaigns targeting iPhone users primarily. On infected devices, Pegasus is a powerful spyware that can do many things, such as record conversations, steal private messages, exfiltrate photos, and much much more.

From the report:

We found suspected NSO Pegasus infections associated with 33 of the 36 Pegasus operators we identified in 45 countries: Algeria, Bahrain, Bangladesh, Brazil, Canada, Cote d'Ivoire, Egypt, France, Greece, India, Iraq, Israel, Jordan, Kazakhstan, Kenya, Kuwait, Kyrgyzstan, Latvia, Lebanon, Libya, Mexico, Morocco, the Netherlands, Oman, Pakistan, Palestine, Poland, Qatar, Rwanda, Saudi Arabia, Singapore, South Africa, Switzerland, Tajikistan, Thailand, Togo, Tunisia, Turkey, the UAE, Uganda, the United Kingdom, the United States, Uzbekistan, Yemen, and Zambia. As our findings are based on country-level geolocation of DNS servers, factors such as VPNs and satellite Internet teleport locations can introduce inaccuracies.

Six of those countries are known to deploy spyware against political opposition: Bahrain, Kazakhstan, Mexico, Morocco, Saudi Arabia, and the United Arab Emirates.

Also note:

On 17 September 2018, we then received a public statement from NSO Group. The statement mentions that "the list of countries in which NSO is alleged to operate is simply inaccurate. NSO does not operate in many of the countries listed." This statement is a misunderstanding of our investigation: the list in our report is of suspected locations of NSO infections, it is not a list of suspected NSO customers. As we describe in Section 3, we observed DNS cache hits from what appear to be 33 distinct operators, some of whom appeared to be conducting operations in multiple countries. Thus, our list of 45 countries necessarily includes countries that are not NSO Group customers. We describe additional limitations of our method in Section 4, including factors such as VPNs and satellite connections, which can cause targets to appear in other countries.

Motherboard article. Slashdot and Boing Boing posts.

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 220 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for September).
  • Antoine Beaupré did 23.75 hours.
  • Ben Hutchings did 5 hours (out of 15 hours allocated + 8 extra hours, thus keeping 8 extra hours for September).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not manage to work but returned all his hours to the pool (out of 23.75 hours allocated + 19.5 extra hours).
  • Holger Levsen did 10 hours (out of 8 hours allocated + 16 extra hours, thus keeping 14 extra hours for September).
  • Hugo Lefeuvre did nothing (out of 10 hours allocated, but he gave back those hours).
  • Markus Koschany did 23.75 hours.
  • Mike Gabriel did 6 hours (out of 8 hours allocated, thus keeping 2 extra hours for September).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 8 remaining hours, thus keeping 11.5 extra hours for September).
  • Roberto C. Sanchez did 6 hours (out of 18h allocated, thus keeping 12 extra hours for September).
  • Santiago Ruano Rincón did 8 hours (out of 20 hours allocated, thus keeping 12 extra hours for September).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased to 206 hours per month: we lost two sponsors and gained only one.

The security tracker currently lists 38 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.



Planet Debian: Daniel Pocock: What is the relationship between FSF and FSFE?

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try to document my own experiences of the issue; maybe some people will find it helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V.). In both capacities, I feel uncomfortable about the current situation due to the confusion it creates in the community and the risk that volunteers or donors may be misled.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000, and the direct questions about this issue, I feel it is becoming more important for both organizations to clarify the relationship.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

Harald Welte: Wireshark dissector for 3GPP CBSP - traces wanted!

I recently was reading 3GPP TS 48.049, the specification for CBSP (the Cell Broadcast Service Protocol), the protocol between the BSC (Base Station Controller) and the CBC (Cell Broadcast Centre). It is how the CBC, according to the spec, instructs the BSCs to broadcast the various cell broadcast messages to their respective geographic scopes.

While OsmoBTS and OsmoBSC do have support for SMSCB on the CBCH, there is no real interface in OsmoBSC yet by which any external application could instruct it to send cell broadcasts. The only existing interface is a VTY command, which is nice for testing and development, but hardly a scalable solution.

So I was reading up on the specs, discovered CBSP and thought one good way to get familiar with it is to write a wireshark dissector for it. You can find the result at https://code.wireshark.org/review/#/c/29745/

Now my main problem is that as usual there appear to be no open source implementations of this protocol, so I cannot generate any traces myself. More surprising is that it's not even possible to find any real-world CBSP traces out there. So I'm facing a chicken-and-egg problem. I can only test / verify my wireshark dissector if I find some traces.

So if you happen to have done any work on cell broadcast in 2G network and have a CBSP trace around (or can generate one): Please send it to me, thanks!

Alternatively, you can of course also use the patch linked above, build your own wireshark from scratch, test it and provide feedback. Thanks in either case!

Planet Debian: Jonathan McDowell: Using ARP via netlink to detect presence

If you remember my first post about home automation I mentioned a desire to use some sort of presence detection as part of deciding when to turn the heat on. Home Assistant has a wide selection of presence detection modules available, but the easy ones didn’t seem like the right solutions. I don’t want something that has to run on my phone to say where I am, but using the phone as the proxy for presence seemed reasonable. It connects to the wifi when at home, so watching for that involves no overhead on the phone and should be reliable (as long as I haven’t let my phone run down). I run OpenWRT on my main house router and there are a number of solutions which work by scraping the web interface. openwrt_hass_devicetracker is a bit better but it watches the hostapd logs and my wifi is actually handled by some UniFis.

So how to do it more efficiently? Learn how to watch for ARP requests via Netlink! That way I could have something sitting idle and only doing any work when it sees a new event, that could be small enough to run directly on the router. I could then tie it together with the Mosquitto client libraries and announce presence via MQTT, tying it into Home Assistant with the MQTT Device Tracker.

I’m going to go into a bit more detail about the Netlink side of things, because I found it hard to find simple documentation and ended up reading kernel source code to figure out what I wanted. If you’re not interested in that you can find my mqtt-arp (I suck at naming simple things) tool locally or on GitHub. It ends up as an 8k binary for my MIPS based OpenWRT box and just needs to be fed a list of MAC addresses to watch for and details of the MQTT server. When it sees a device it cares about make an ARP request, it reports the presence of that device as “home” (configurable), rate limited to at most once every 2 minutes. Once it hasn’t seen anything from the device for 10 minutes it declares the location to be unknown. I have found Samsung phones are a little prone to disconnecting from the wifi when not in use, so you might need to lengthen the timeout if all you have are Samsung devices.

Home Assistant configuration is easy:

device_tracker:
  - platform: mqtt
    devices:
      noodles: 'location/by-mac/0C:11:22:33:44:55'
      helen: 'location/by-mac/4C:11:22:33:44:55'

On to the Netlink stuff…

Firstly, you can watch the netlink messages we’re interested in using iproute2 - just run ip monitor. This works as an unprivileged user, which is nice. It all happens via an AF_NETLINK routing socket (rtnetlink(7)):

int sock;
sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

We then want to indicate we’re listening for neighbour events:

struct sockaddr_nl group_addr;
bzero(&group_addr, sizeof(group_addr));
group_addr.nl_family = AF_NETLINK;
group_addr.nl_pid = getpid();
group_addr.nl_groups = RTMGRP_NEIGH;
bind(sock, (struct sockaddr *) &group_addr, sizeof(group_addr));

At this point we’re good to go and can wait for an event message:

received = recv(sock, buf, sizeof(buf), 0);

This will be a struct nlmsghdr message and the nlmsg_type field will provide details of what type. In particular I look for RTM_NEWNEIGH, indicating a new neighbour has been seen. This is of type struct ndmsg and immediately follows the struct nlmsghdr in the received message. That has details of the address family type (IPv6 vs IPv4), the state and various flags (such as whether it’s NUD_REACHABLE indicating presence). The only slightly tricky bit comes in working out the MAC address, which is one of potentially several struct nlattr attributes which follow the struct ndmsg. In particular I’m interested in an nla_type of NDA_LLADDR, in which case the attribute data is the MAC address. The main_loop function in mqtt-arp.c shows this - it’s fairly simple stuff, and works nicely. It was just figuring out the relationship between it all and the exact messages I cared about that took me a little time to track down.
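To make that structure concrete, here is a minimal sketch of parsing one received buffer. This is my own illustration rather than the actual mqtt-arp code, and it uses the rtattr helper macros from linux/rtnetlink.h (which work fine for the nlattr layout described above); error handling is omitted:

#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/neighbour.h>
#include <stdio.h>

void handle_buffer(char *buf, int received)
{
    struct nlmsghdr *nlh;

    /* A single recv() can contain several netlink messages; walk them all. */
    for (nlh = (struct nlmsghdr *) buf; NLMSG_OK(nlh, received);
         nlh = NLMSG_NEXT(nlh, received)) {
        if (nlh->nlmsg_type != RTM_NEWNEIGH)
            continue;

        /* The ndmsg immediately follows the nlmsghdr. */
        struct ndmsg *ndm = NLMSG_DATA(nlh);
        if (!(ndm->ndm_state & NUD_REACHABLE))
            continue;

        /* Attributes follow the (aligned) ndmsg; look for the MAC address. */
        struct rtattr *rta = (struct rtattr *)
            ((char *) ndm + NLMSG_ALIGN(sizeof(struct ndmsg)));
        int rtalen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(struct ndmsg));

        for (; RTA_OK(rta, rtalen); rta = RTA_NEXT(rta, rtalen)) {
            if (rta->rta_type == NDA_LLADDR && RTA_PAYLOAD(rta) == 6) {
                unsigned char *mac = RTA_DATA(rta);
                printf("Saw %02x:%02x:%02x:%02x:%02x:%02x\n",
                       mac[0], mac[1], mac[2],
                       mac[3], mac[4], mac[5]);
            }
        }
    }
}

In mqtt-arp the equivalent of that printf is where the MAC gets compared against the watch list and a “home” message is published over MQTT.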

Planet Debian: Joey Hess: censored Amazon review of Sandisk Ultra 32GB Micro SDHC Card

★ counterfeits in amazon pipeline

The 32 gb card I bought here at Amazon turned out to be fake. Within days I was getting read errors, even though the card was still mostly empty.

The logo is noticably blurry compared with a 32 gb card purchased elsewhere. Also, the color of the grey half of the card is subtly wrong, and the lettering is subtly wrong.

Amazon apparently has counterfiet stock in their pipeline, google "amazon counterfiet" for more.

You will not find this review on Sandisk Ultra 32GB Micro SDHC UHS-I Card with Adapter - 98MB/s U1 A1 - SDSQUAR-032G-GN6MA because it was rejected. As far as I can tell my review violates none of Amazon's posted guidelines. But it's specific about how to tell this card is counterfeit, and it mentions a real and ongoing issue that Amazon clearly wants to cover up.

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #177

Here’s what happened in the Reproducible Builds effort between Sunday September 9 and Saturday September 15 2018:

Patches filed

diffoscope development

Chris Lamb made a large number of changes to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

These changes were then uploaded as diffoscope version 101.

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org by Holger Levsen this month, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Jonathan Dowland: Digital Minimalism and Deep Work

Russ Allbery of the Debian project writes reviews of books he has read on his blog. It was through Russ's review that I learned of "Deep Work" by Cal Newport, and duly requested it from my local library.

I've a long-held skepticism of self-help books, but several aspects of this one strike the right notes for me. The author is a Computer Scientist, so there's a sense of kinship there, but the writing also follows the standard academic practice of citing sources and applies a certain rigour to the new ideas that are presented. Despite this, there are a few sections of the book which I felt lacked supporting evidence, or where obvious questions about the relevant concept were not asked. One of the case studies in the book is of a part-time PhD student with a full-time job and a young child, which I can relate to. The author obviously follows his own advice: he runs a productivity blog at calnewport.com and has no other social media presences. One of the key productivity tips he espouses in the book (and elsewhere) is simply "quit social media".

Through Newport's blog I learned that the title of his next book is Digital Minimalism. This intrigued me, because since I started thinking about minimalism myself, I've wondered about the difference of approach needed between minimalism in the "real world" and the digital domains. It turns out the topic of Newport's next book is about something different: from what I can tell, focussing on controlling how one spends one's time online for maximum productivity.

That's an interesting topic which I have more to write about at some point. However, my line of thought for the title "digital minimalism" spawned from reading Marie Kondo, Fumio Sakai and others. Many of the tips they offer to their readers revolve around moving meaning away from physical clutter and into the digital domain: scan your important papers, photograph your keepsakes, and throw away the physical copies. It struck me that whilst this was useful advice for addressing the immediate problem of clutter in the physical world, it exacerbates the problem of digital clutter, especially if we don't have good systems for effectively managing digital archives. Broadly speaking, I don't think we do: at least, not ones that are readily accessible to the majority of people. I have a hunch that most have no form of data backup in place at all, switch between digital hosting services in a relatively ad-hoc manner (flickr, snapchat, instagram…) and treat losing data (such as when an old laptop breaks, or a tablet or phone is stolen) as a fact of life, rather than something that could be avoided if our tools (or habits, or both) were better.

Worse Than FailureCodeSOD: Switch On Suppression

Krista noticed our article explaining that switches were replacements for ifs. She sent in a variation on the same idea that she found in her codebase:

	@SuppressWarnings("incomplete-switch")
	@Transactional
	public void removeAssetFromPackage(Package pkg, Asset assetToRemove) {
		pkg.getAssets().remove(assetToRemove);
		// Delete from DB and asset store.
		removeAsset(pkg, assetToRemove);

		// If we're removing LIVE asset, also delete AsyncJobs.
		switch (assetToRemove.getType()) {
			case LIVE:
				asyncJobService.removeAsyncJobsForPresentation(pkg);
				break;
		}

		// Flush package cache.
		cacheInvalidationService.invalidatePresenationCache(pkg);
	}

Once again, we use a switch instead of an if. Perhaps this was premature flexibility: there are obviously other states the getType method could return. Maybe, someday in the future, they'll need to do other things inside that switch. Until then, it's just the sort of thing the compiler will throw warnings about, since there's no default case.

Oh, except, of course, they suppressed the warning up top.

A quick search in Krista’s codebase for @SuppressWarnings("incomplete-switch") finds dozens of usages of this pattern.


Planet DebianRuss Allbery: Review: The Collapsing Empire

Review: The Collapsing Empire, by John Scalzi

Series: Interdependency #1
Publisher: Tor
Copyright: March 2017
ISBN: 0-7653-8889-8
Format: Kindle
Pages: 333

Cardenia Wu-Patrick was never supposed to become emperox. She had a quiet life with her mother, a professor of ancient languages who had a brief fling with the emperox but otherwise stayed well clear of the court. Her older half-brother was the imperial heir and seemed to enjoy the position and the politics. But then Rennered got himself killed while racing and Cardenia ended up heir whether she wanted it or not, with her father on his deathbed and unwanted pressure on her to take over Rennered's role in a planned marriage of state with the powerful Nohamapetan guild family.

Cardenia has far larger problems than those, but she won't find out about them until becoming emperox.

The Interdependency is an interstellar human empire balanced on top of a complex combination of hereditary empire, feudal guild system, state religion complete with founding prophet, and the Flow. The Flow is this universe's equivalent of the old SF trope of a wormhole network: a strange extra-dimensional space with well-defined entry and exit points and a disregard for the speed of light. The Interdependency relies on it even more than one might expect. As part of the same complex and extremely long-term plan of engineered political stability that created the guild, empire, and church balance of power, the Interdependency created an economic web in which each system is critically dependent on imports from other systems. This plus the natural choke points of the Flow greatly reduces the chances of war.

It also means that Cardenia has inherited an empire that is more fragile than it may appear. Secret research happening at the most far-flung system in the Interdependency is about to tell her just how fragile.

John Clute and Malcolm Edwards provided one of the most famous backhanded compliments in SF criticism in The Encyclopedia of Science Fiction when they described Isaac Asimov as the "default voice" of science fiction: a consistent but undistinguished style that became the baseline that other writers built on or reacted against. The field is now far too large for there to be one default voice in that same way, but John Scalzi's writing reminds me of that comment. He is very good at writing a specific sort of book: a light science fiction story that draws as much on Star Trek as it does on Heinlein, comfortably sits on the framework of standard SF tropes built by other people, adds a bit of humor and a lot of banter, and otherwise moves reliably and competently through a plot. It's not hard to recognize Scalzi's writing, so in that sense he has less of a default voice than Asimov had, but if I had to pick out an average science fiction novel his writing would come immediately to mind. At a time when the field is large enough to splinter into numerous sub-genres that challenge readers in different ways and push into new ideas, Scalzi continues writing straight down the middle of the genre, providing the same sort of comfortable familiarity as the latest summer blockbuster.

This is not high praise, and I am sometimes mystified at the amount of attention Scalzi gets (both positive and negative). I think his largest flaw (and certainly the largest flaw in this book) is that he has very little dynamic range, particularly in his characters. His books have a tendency to collapse into barely-differentiated versions of the same person bantering with each other, all of them sounding very much like Scalzi's own voice on his blog. The Collapsing Empire has emperox Scalzi grappling with news from scientist Scalzi carried by dutiful Scalzi with the help of profane impetuous Scalzi, all maneuvering against devious Scalzi. The characters are easy to keep track of by the roles they play in the plot, and the plot itself is agreeably twisty, but if you're looking for a book to hook into your soul and run you through the gamut of human emotions, this is not it.

That is not necessarily a bad thing. I like that voice; I read Scalzi's blog regularly. He's reliable, and I wonder if that's the secret to his success. I picked up this book because I wanted to read a decent science fiction novel and not take a big risk. It delivered exactly what I asked for. I enjoyed the plot, laughed at some of the characters, felt for Cardenia, enjoyed the way some villainous threats fell flat because of characters who had a firm grasp of what was actually important and acted on it, and am intrigued enough by what will happen next that I'm going to read the sequel. Scalzi aimed to entertain, succeeded, and got another happy customer. (Although I must note that I would have been happier if my favorite character in the book, by far, did not make a premature exit.)

I am mystified at how The Collapsing Empire won a Locus Award for best science fiction novel, though. This is just not an award sort of book, at least in my opinion. It's book four in an urban fantasy series, or the sixth book of Louis L'Amour's Sackett westerns. If you like this sort of thing, you'll like this version of it, and much of the appeal is that it's not risky and requires little investment of effort. I think an award winner should be the sort of book that lingers, that you find yourself thinking about at odd intervals, that expands your view of what's possible to do or feel or understand.

But that complaint is more about awards voters than about Scalzi, who competently executed on exactly what was promised on the tin. I liked the setup and I loved the structure of Cardenia's inheritance of empire, so I do kind of wish I could read the book that, say, Ann Leckie would have written with those elements, but I was entertained in exactly the way that I wanted to be entertained. There's real skill and magic in that.

Followed by The Consuming Fire. This book ends on a cliffhanger, as apparently does the next one, so if that sort of thing bothers you, you may want to wait until they're all available.

Rating: 7 out of 10

Cory DoctorowPodcast: Today, Europe Lost The Internet. Now, We Fight Back.


Here’s my reading (MP3) of Today, Europe Lost The Internet. Now, We Fight Back, written for EFF Deeplinks on the morning of the EU’s catastrophic decision to vote in the new Copyright Directive with all its worst clauses intact.

MP3

,

Planet DebianCarl Chenet: You Think the Visual Studio Code binary you use is a Free Software? Think again.

Did you download your binary of Visual Studio Code directly from the official website? If so, you're not using Free Software, and only Microsoft knows what was added to that binary. And you should assume the worst.

It says « Open Source » and offers to download non-open-source binary packages. Very misleading.

The Microsoft Trick

I'm not a lawyer, and I could be wrong or not accurate enough in my analysis (sorry!), but I'll nonetheless try to give my understanding of the situation, because the current licensing of Visual Studio Code tries to fool most users.

Microsoft uses a simple but clever trick here, allowed by the license of the source code of Visual Studio Code: the MIT license, a permissive Free Software license.

Indeed, the MIT license is really straightforward: do whatever you want with this software, keep the original copyright, and I'm not responsible for what could happen with this software. Ok. Except that, in the case of Visual Studio Code, it only covers the source code, not the binary.

Unlike most GPL-based licenses, under which both the source code and the binary built from it are covered by the terms of the license, the MIT license allows Microsoft to make the source code of the software available while doing whatever they want with the binary. And let's be crystal clear: 99.99% of VSC users will never ever directly use the source code.

What a non-free license by Microsoft is

And of course Microsoft purposely does not use the MIT license for the binary of Visual Studio Code. Instead they use a fully-armed, freedom-restricting license, the Microsoft Software License.

Let's have a look at some pieces of it. You can find the full license here: https://code.visualstudio.com/license

This license applies to the Visual Studio Code product. The source code is available under the MIT license agreement.

First sentence of the license. The difference between the license of the source code and the « product », meaning the binary you’re going to use, is clearly stated.

Data Collection. The software may collect information about you and your use of the software, and send that to Microsoft.

Yeah right, no kidding. Big Surprise from Microsoft.

UPDATES. The software may periodically check for updates, and download and install them for you. You may obtain updates only from Microsoft or authorized sources. Microsoft may need to update your system to provide you with updates. You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices.

I’ll break your installation without further notice and I don’t care what you were doing with it before, because, you know.

SCOPE OF LICENSE (…) you may not:

  • work around any technical limitations in the software;

Also known, for years now, as « hacking ».

  • reverse engineer, decompile or disassemble the software, or otherwise attempt to derive the source code for the software, except and to the extent required by third party licensing terms governing use of certain open source components that may be included in the software;

Because there is no way anybody should be able to find out what we are doing with the binary running on your computer.

  • share, publish, rent or lease the software, or provide the software as a stand-alone offering for others to use.

I may be wrong (again, I'm not a lawyer), but it seems to me they forbid you from redistributing this binary, except under the conditions mentioned in the INSTALLATION AND USE RIGHTS section (mostly for the needs of your company and/or for giving demos of your products using VSC).

The following sections, EXPORT RESTRICTIONS and CONSUMER RIGHTS; REGIONAL VARIATIONS, add more and more restrictions on using and sharing the binary.

DISCLAIMER OF WARRANTY. The software is licensed “as-is.”

At last, a term which could be mistaken for a term of a Free Software license. But in this case it is of course there to limit any obligation Microsoft could have towards you.

So, if the clever trick of licensing the source code and the binary differently had not already convinced you: the Microsoft Software License is definitely not a Free Software license.

What You Could Do

There are ways to use VSC under good conditions. After all, the source code of VSC is released as Free Software. So why not build it yourself? Some initiatives have also appeared, like this repository. That could be a good start.

As for GNU/Linux distributions, packaging VSC (see here for the discussion in Debian) would be a great way to keep people from being taken in by the Microsoft trick and using a « product » that breaks almost every term of what makes software Free.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.io, a Job board dedicated to Free and Open Source Jobs in the US.


Krebs on SecurityGovPayNow.com Leaks 14M+ Records

Government Payment Service Inc. — a company used by thousands of U.S. state and local governments to accept online payments for everything from traffic citations and licensing fees to bail payments and court-ordered fines — has leaked more than 14 million customer records dating back at least six years, including names, addresses, phone numbers and the last four digits of the payer’s credit card.

Indianapolis-based GovPayNet, doing business online as GovPayNow.com, serves approximately 2,300 government agencies in 35 states. GovPayNow.com displays an online receipt when citizens use it to settle state and local government fees and fines via the site. Until this past weekend it was possible to view millions of customer records simply by altering digits in the Web address displayed by each receipt.

On Friday, Sept. 14, KrebsOnSecurity alerted GovPayNet that its site was exposing at least 14 million customer receipts dating back to 2012. Two days later, the company said it had addressed “a potential issue.”

“GovPayNet has addressed a potential issue with our online system that allows users to access copies of their receipts, but did not adequately restrict access only to authorized recipients,” the company said in a statement provided to KrebsOnSecurity.

The statement continues:

“The company has no indication that any improperly accessed information was used to harm any customer, and receipts do not contain information that can be used to initiate a financial transaction. Additionally, most information in the receipts is a matter of public record that may be accessed through other means. Nonetheless, out of an abundance of caution and to maximize security for users, GovPayNet has updated this system to ensure that only authorized users will be able to view their individual receipts. We will continue to evaluate security and access to all systems and customer records.”

In January 2018, GovPayNet was acquired by Securus Technologies, a Carrollton, Texas- based company that provides telecommunications services to prisons and helps law enforcement personnel keep tabs on mobile devices used by former inmates.

Although its name may suggest otherwise, Securus does not have a great track record in securing data. In May 2018, the New York Times broke the news that Securus’ service for tracking the cell phones of convicted felons was being abused by law enforcement agencies to track the real-time location of mobile devices used by people who had only been suspected of committing a crime. The story observed that authorities could use the service to track the real-time location of nearly any mobile phone in North America.

Just weeks later, Motherboard reported that hackers had broken into Securus’ systems and stolen the online credentials for multiple law enforcement officials who used the company’s systems to track the location of suspects via their mobile phone number.

A story here on May 22 illustrated how Securus’ site appeared to allow anyone to reset the password of an authorized Securus user simply by guessing the answer to one of three pre-selected “security questions,” including “what is your pet name,” “what is your favorite color,” and “what town were you born in”. Much like GovPayNet, the Securus Web site seemed to have been erected sometime in the aughts and left to age ungracefully for years.

Choose wisely and you, too, could gain the ability to look up anyone’s precise mobile location.

Data exposures like these are some of the most common but easily preventable forms of information leaks online. In this case, it was trivial to enumerate how many records were exposed because each record was sequential.

E-commerce sites can mitigate such leaks by using something other than easily-guessed or sequential record numbers, and/or encrypting unique portions of the URL displayed to customers upon payment.
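As a purely illustrative sketch (not code from the article or from GovPayNet), the first half of that advice amounts to handing out receipt identifiers that cannot be enumerated, for example random 128-bit tokens instead of sequential record numbers:

#include <stdio.h>

/* Illustrative only: issue receipts under unguessable random identifiers,
   read from /dev/urandom, instead of sequential record numbers. */
static int make_receipt_token(char *out, size_t out_len)
{
    unsigned char raw[16];                 /* 128 bits of randomness */
    FILE *f = fopen("/dev/urandom", "rb");

    if (f == NULL)
        return -1;
    if (fread(raw, 1, sizeof(raw), f) != sizeof(raw)) {
        fclose(f);
        return -1;
    }
    fclose(f);

    if (out_len < 2 * sizeof(raw) + 1)     /* 32 hex chars plus NUL */
        return -1;
    for (size_t i = 0; i < sizeof(raw); i++)
        sprintf(out + 2 * i, "%02x", raw[i]);
    return 0;
}

int main(void)
{
    char token[33];
    if (make_receipt_token(token, sizeof(token)) == 0)
        printf("receipt token: %s\n", token);
    return 0;
}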

Although fixing these information disclosure vulnerabilities is quite simple, it’s remarkable how many organizations that should know better don’t invest the resources needed to find and fix them. In August, KrebsOnSecurity disclosed a similar flaw at work across hundreds of small bank Web sites run by Fiserv, a major provider of technology services to financial institutions.

In July, identity theft protection service LifeLock fixed an information disclosure flaw that needlessly exposed the email address of millions of subscribers. And in April 2018, PaneraBread.com remedied a weakness that exposed millions of customer names, email and physical addresses, birthdays and partial credit card numbers.

Got a tip about a security vulnerability similar to those detailed above, or perhaps something more serious? Please drop me a note at krebsonsecurity @ gmail.com.

CryptogramClick Here to Kill Everybody Reviews and Press Mentions

It's impossible to know all the details, but my latest book seems to be selling well. Initial reviews have been really positive: Boing Boing, Financial Times, Harris Online, Kirkus Reviews, Nature, Politico, and Virus Bulletin.

I've also done a bunch of interviews -- either written or radio/podcast -- including the Washington Post, a Reddit AMA, "The 1A " on NPR, Security Ledger, MIT Technology Review, CBC Radio, and WNYC Radio.

There have been others -- like the Lawfare, Cyberlaw, and Hidden Forces podcasts -- but they haven't been published yet. I also did a book talk at Google that should appear on YouTube soon.

If you've bought and read the book, thank you. Please consider leaving a review on Amazon.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV October 2018 Workshop

Oct 20 2018 12:30
Oct 20 2018 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond

Topic To Be Announced

There will also be the usual casual hands-on workshop: Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 20, 2018 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV October 2018 Main Meeting

Oct 2 2018 18:30
Oct 2 2018 20:30
Location: Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, October 2, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • To Be Announced

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

October 2, 2018 - 18:30

Planet DebianJonathan Dowland: which spare laptop?

I'm in a perpetual state of downsizing and ridding my life (and my family's life) of things we don't need: sometimes old computers. My main (nearly my sole) machine is my work-provided Thinkpad T470s: a fantastic laptop that works so well I haven't had anything to write about it. However, I decided that it was worth keeping just one spare, for emergencies or other odd situations. I have two candidate machines in my possession.

In the blue corner

left: X61S; right: R600

Toshiba Portégé R600. I've actually owned this now for 7 years, buying it originally to replace my beloved x40 which I loaned to my partner. At the time my main work machine was still a desktop. I received a new work laptop soon after buying this so it ended up gathering dust in a cupboard.

It's an extremely light laptop, even by today's standards. It compares favourably with the Apple Macbook Air 11" in that respect. A comfortable keyboard, but no trackpoint and a bog-standard trackpad. 1280x800 16:10 display, albeit TN panel technology with very limited viewing angles. Analog VGA video out on the laptop, but digital DVI-D out is possible via a separate dock, which was cheap and easy to acquire and very stowable. An integrated optical media drive which could be useful. Max 3G RAM (1G soldered, 2G in DIMM slot).

The CPU is apparently a generation newer but lower voltage and thus slower than its rival, which is…

In the red corner

x61s

Thinkpad X61s. The proportions match the Thinkpad X40, so it has a high nostalgia factor. Great keyboard, I love trackpoints, robust build. It has the edge on CPU over the Toshiba. A theoretical maximum of 8G (2x4) RAM, but practically nearer 4G (2x2), as the 4G sticks are too expensive. This is probably the "heart" choice.

The main drawback of the X61s is the display options: a 1024x768 TN panel, and no digital video out (VGA only on the laptop, and VGA only on the optional dock). It's possible to retro-fit a better panel, but it's not easy and the parts are now very hard to find. It's also a surprisingly heavy machine: heavier than I remember the X40 being, but it was long enough ago that my expectations have changed.

The winner

Surprising myself perhaps more than anyone else, I've ended up opting for the Toshiba. The weight was the clincher. The CPU performance difference was too close to matter, and 3G RAM is sufficient for my spare laptop needs. Once I'd installed a spare SSD as the main storage device, day-to-day performance is very good. The resolution difference didn't turn out to be that important: it's still low enough that side-by-side text editor and browser feels crowded, so I end up using the same window management techniques as I would on the X61s.

What do I use it for? I've taken it on a couple of trips or holidays which I wouldn't want to risk my work machine for. I wrote nearly all of liquorice on it in downtime on a holiday to Turkey whilst my daughter was having her afternoon nap. I'm touching up this blog post on it now!

I suppose I should think about passing on the X61s to something/someone else.

CryptogramNSA Attacks Against Virtual Private Networks

A 2006 document from the Snowden archives outlines successful NSA operations against "a number of "high potential" virtual private networks, including those of media organization Al Jazeera, the Iraqi military and internet service organizations, and a number of airline reservation systems."

It's hard to believe that many of the Snowden documents are now more than a decade old.

Worse Than FailureCodeSOD: The Secure Cloud API

Melinda's organization has purchased a cloud-based storage system. Like any such system, it has a lovely API which lets you manage quotas and login tokens. It also had a lovely CLI, which was helpful for administrators to modify the cloud environment. Melinda's team built a PHP front-end that could not only manage files, but also allowed administrators to manage those quotas.

Melinda was managing those quotas, and when she clicked the link to view the quotas, she noticed the URL contained ?token=RO-cmV1c2luZyBrZXlzIGlzIFRSV1RG. When she went to modify the quota, the URL parameter became ?token=RW-cmV1c2luZyBrZXlzIGlzIFRSV1RG. That looked like a security key for their cloud API, transmitted in the open. The RW and RO looked like they had something to do with readwrite and readonly, but that wasn't the security model their storage provider used. When Melinda had another co-worker log in, they saw the same tokens. What was going on?

Melinda took a look at the authorization code.

function authorised($token, $verb) {
    // check if token can do verb
    // TO DO - turn this in to a database lookup
    $rights = array(
        'RW-cmV1c2luZyBrZXlzIGlzIFRSV1RG' => array('getquota' => 0, 'setquota' => 1),
        'RO-cmV1c2luZyBrZXlzIGlzIFRSV1RG' => array('getquota' => 1, 'setquota' => 0)
    );
    return ((isset($rights[$token]) && isset($rights[$token][$verb]) && ($rights[$token][$verb] == 1)));
}

The developer behind this wrote their own security model, instead of using the one their storage provider offered. The tokens here were hard-coded secret keys for the API. Essentially this meant that no matter who logged in to manage quotas on the application side, the storage system saw them all as a single user: a single user with pretty much unlimited permissions that had root access on every VM in their cloud environment.

"Oh, boy, they released their secret key to essentially a root account, that's bad," you say. It gets worse. The code doesn't, but the logic does.

You see, the culprit here didn't want to learn the API. So they did everything via shell commands. They had one machine set up in the cloud environment with a bunch of machine-based permissions that allowed it to control anything in the cloud. When someone wanted to change quotas, the PHP code would shell out and use SSH to log into that cloud machine as root, and then run the cloud vendor's proprietary CLI tool from within their own environment.


Planet DebianSteve Kemp: PAM HaveIBeenPwned module

So the PAM module which I pondered about in my previous post now exists:

I did mention "sponsorship" in my post, which led to a couple of emails, and the end result of that was that a couple of folk donated to charity in my/its name. Good enough.

Perhaps in the future I'll explore patreon/similar, but I don't feel very in-demand so I'll avoid it for the moment.

Anyway I guess it should be Debian-packaged for neatness, but I'll resist for the moment.

Planet DebianWouter Verhelst: Linus apologising

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

TEDGalaxies hidden in plain sight, a new role at Netflix and other TED news

The TED community is busy with new projects and ideas: below, some highlights.

A new galaxy cluster hidden in plain sight. Researchers at MIT, including TED speaker Henry Lin, have recently discovered a cluster of hundreds of galaxies obscured by an intensely active supermassive black hole at its center. That extra-bright black hole, named PKS1353-341, is 46 billion times brighter than our sun; in their newest paper, the team concluded that a feeding frenzy (big chunks of matter falling into the hole and feeding it) is the likely cause of the black hole’s extraordinary light, which blocked the cluster from view. This insight has led to the development of CHiPS, or Clusters Hiding in Plain Sight, an initiative that will re-analyze older data and images, in the hopes of identifying other galaxy clusters. (Watch Lin’s TED Talk.)

Mothers of Invention: the women solving climate change. Alongside comedian Maeve Higgins, Mary Robinson has launched a new feminist podcast spotlighting women who are leading the charge in the climate change battle. The series, Mothers of Invention, has featured Judi Wakhungu and Alice Kaudia, Kenyan policymakers who instituted Kenya’s plastic bag ban; Tara Rodriguez, a Puerto Rican restaurateur who led efforts to develop sustainable farming measures on the island following Hurricane Maria; and TED speaker Tara Houska, an indigenous rights lawyer who works toward mass divestment from fossil fuel funds. Robinson, who helped negotiate the Paris Climate Agreement in 2015, has long advocated for environment policy that protects vulnerable communities; in an interview with iNews, she said, “Climate change isn’t gender neutral: it affects women worse. So of course it makes sense that they would be the ones coming up with solutions.” (Watch Robinson’s TED Talk.)

Diversity specialist Vernā Myers joins Netflix. Following two decades of leading the Vernā Myers Company, Vernā Myers will soon join Netflix as Vice President of Inclusion Strategy. In the newly created role, Myers will strategize how Netflix can best integrate “cultural diversity, inclusion and equity” into their global expansion plans. In a press release from Netflix, Myers said, “I am so excited and look forward to collaborating all across Netflix to establish bold innovative frameworks and practices that will attract, fully develop, and sustain high performing diverse teams.” (Watch Myers’ TED Talk.)

Monica Lewinsky talks Emmy nomination. In a podcast interview with Vanity Fair, Monica Lewinsky discusses her anti-bullying work and recent Emmy nomination for her PSA “In Real Life.” The campaign, which debuted last October, features actors recreating real cyberbullying comments on the streets of New York to unknowing bystanders, and shows strangers stepping in to defend the victims. The film, which was produced in collaboration with ad agency BBDO New York and Dini von Mueffling Communications, asks the question: If it’s not okay in person, why is it okay online? “There’s a lot of pain out there from this,” Lewinsky said to Vanity Fair. “We carry that with us for a long time. I hope it helps heal people.” (Watch Lewinsky’s TED Talk.)

A celebration of poetry and art in Bhutan. Poet and educator Sarah Kay captivated audiences last week at the Mountain Echoes literary festival in Thimphu, Bhutan, with an enthralling performance and workshop session. The annual festival, which registered 17,000 visitors this year, gathered artists and literary luminaries including Kay, the Queen Mother Dorji Wangmo and theatre actress Sanjana Kapoor to facilitate ”cultural dialogue, share stories, and create memories.” In addition to her performance, Kay led a workshop session called “Considering Breakthrough: Connecting with Spoken Word Poetry.” In The Times of India, Kay, who leads the global education initiative Project VOICE, says that for her, poetry is like “puzzle-solving.” (Watch Kay’s TED Talk.)

TEDNew insights on climate change action, a milestone for Maysoon Zayid and more TED news

As usual, the TED community is making headlines. Below, some highlights.

Does local action make a difference when fighting climate change? Environmental scientist Angel Hsu teamed up with experts at several climate research institutes on a fascinating new report about the potential effects of local action in reducing greenhouse gas emissions globally. Hsu synthesized data from thousands of cities, regions and companies at Data-Driven Yale, the Singapore-based research group she founded and leads. The study found that committed action by local entities could help bring the world closer to the goals of 2015’s Paris Climate Agreement. Researchers also found that local action by American entities could reduce emissions by at least half of America’s initial Paris Agreement pledge, even without federal support. On the study, Hsu said, “The potential of these commitments to help the world avoid dangerous climate change is clear – the key is now to ensure that these commitments are really implemented.” (Watch Hsu’s TED Talk.)

A groundbreaking comedy show. Actor, comedian and disability activist Maysoon Zayid will write and star in a new show inspired by her life for ABC. The show, titled Can, Can, will follow a Muslim woman with cerebral palsy as she navigates the intricacies of her love life, career and her opinionated family. Much of Zayid’s comedy explores and expands the intersections of disability and Muslim-American identity. Zayid will be joined by writer Joanna Quraishi to help produce and write the single-camera series. (Watch Zayid’s TED Talk.)

Meet 2018's Humanist of the Year. For his advocacy work on responsible and progressive economic ethics, Nick Hanauer will be honored as Humanist of the Year by the Humanist Hub, an organization based at MIT and Harvard. In a statement, Hanauer said, "It is an honor both to receive this award, and to join the Humanist Hub in helping to change the way we think and talk about the economy. It turns out that most people get capitalism wrong. Capitalism works best when it works for everybody, not just for zillionaires like me." The Humanist Hub, a nonreligious philosophy group, annually celebrates a public individual they believe embodies the ideals of humanism, a philosophy of living ethically to serve the greater good of humanity. (Watch Hanauer's TED Talk.)

Are you saving enough for retirement? Behavioral economist Dan Ariely doesn't think so. In a new study conducted with Aline Holzwarth at the Center for Advanced Hindsight at Duke University, Ariely found that we can expect to spend up to 130% of our preretirement income once we retire. Ariely and Holzwarth urge us to abandon the conventional idea that 70% of our income will be enough for retirement. Instead, they suggest we approach saving for retirement with a personalized methodology that takes into account the seven most prominent spending categories: eating out, digital services, recharge (relaxing and self-care), travel, entertainment and shopping, and basic needs. Moving past a generic one-size-fits-all measurement, they advocate planning your retirement spending only after spending time understanding your individual needs. (Watch Ariely's TED Talk.)

In Italy, a bridge offers hope after tragedy. Following a devastating bridge collapse that killed 43 people in Genoa, Italian architect Renzo Piano has offered to donate a new bridge design to help his beloved hometown recover from the traumatic loss. Preliminary designs present a bridge that is distinctly ship-like, alluding to Genoa’s maritime history; it includes 43 illuminated posts resembling sails to memorialize each of the victims. Meanwhile, Piano has worked closely with England’s Royal Academy of the Arts to design and curate an expansive retrospective of his work called “The Art of Making Buildings,” opening September 15. On the exhibition, Piano said, “[M]aking buildings is a civic gesture and social responsibility. I believe passionately that architecture is about making a place for people to come together and share values.” (Watch Piano’s TED Talk.)

,

Planet DebianBenjamin Mako Hill: Lookalikes

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

Planet Linux AustraliaGary Pendergast: The Mission: Democratise Publishing

It's exciting to see the Drupal Gutenberg project getting under way; it makes me proud of the work we've done ensuring the flexibility of the underlying Gutenberg architecture. One of the primary philosophies of Gutenberg's technical architecture is platform agnosticism, and we can see the practical effects of this practice coming to fruition across a variety of projects.

Yoast are creating new features for the block editor, as well as porting existing features, which they’re able to reuse in the classic editor.

Outside of WordPress Core, the Automattic teams who work on Calypso have been busy adding Gutenberg support, in order to make the block editor interface available on WordPress.com. Gutenberg and Calypso are large JavaScript applications, built with strong opinions on design direction and technical architecture, and having significant component overlap. That these two projects can function together at all is something of an obscure engineering feat that’s both difficult and overwhelming to appreciate.

If we reached the limit of Gutenberg’s platform agnosticism here, it would still be a successful project.

But that’s not where the ultimate goals of the Gutenberg project stand. From early experiments in running the block editor as a standalone application, to being able to compile it into a native mobile component, and now seeing it running on Drupal, Gutenberg’s technical goals have always included a radical level of platform agnosticism.

Better Together

Inside the WordPress world, significant effort and focus has been on ensuring backwards compatibility with existing WordPress sites, plugins, and practices. Given that WordPress is such a hugely popular platform, it’s exceedingly important to ensure this is done right. With Gutenberg expanding outside of the WordPress world, however, we’re seeing different focuses and priorities arise.

The Gutenberg Cloud service is a fascinating extension being built as part of the Drupal Gutenberg project, for example. It provides a method for new blocks to be shared and discovered; the sample hero block sets a clear tone of providing practical components that can be rapidly put together into a full site. While we've certainly seen similar services appear for the various site builder plugins, this is the first one (that I'm aware of, at least) built specifically for Gutenberg.

By making the Gutenberg experience available for everyone, regardless of their technical proficiency, experience, or even preferred platform, we pave the way for a better future for all.

Democratising Publishing

You might be able to guess where this is going. 😉

WordPress’ mission is to “democratise publishing”. It isn’t to “be the most popular CMS”, or to “run on old versions of PHP”, though it’s easy to think that might be the case on the surface. That these statements are true is simply a side effect of the broader principle: All people, regardless of who they are or where they come from, should be able to publish their content as part of a free and open web.

The WordPress mission is not to “democratise publishing with WordPress”.

WordPress has many advantages that make it so popular, but hoarding those to ourselves doesn’t help the open web, it just creates more silos. The open web is the only platform on which publishing can be democratised, so it makes sense for Gutenberg to work anywhere on the open web, not just inside WordPress. Drupal isn’t a competitor here, we’re all working towards the same goal, the different paths we’ve taken have made the open web stronger as a whole.

Much as the block editor has been the first practical implementation of the Gutenberg architecture, WordPress is simply the first practical integration of the block editor into a CMS. The Gutenberg project will expand into site customisation and theming next, and while there’s no requirement that Drupal make use of these, I’d be very interested to see what they came up with if they did. Bringing together our many years of experience in tackling these complex problems can only make the end result better.

I know I’m looking forward to all of us working together for the betterment of the open web.

Planet DebianLouis-Philippe Véronneau: GIMP 2.10

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:

The new GIMP logo

Theme

The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows tab.

Planet DebianClint Adams: Two days afterward

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16
Tags: mintings

,

Rondam RamblingsPardon me while I take a small victory lap

Back in March I made a prediction: Today I feel vindicated. Reading the 76 pages of charges against Manafort is like reading an international sequel to “The Godfather.” Money laundering, illegal lobbying, tax evasion, perjury, conspiracy to get others to commit perjury, movement of tens of millions of dollars through offshore bank accounts in Cyprus and St. Vincent and the Grenadines.

Planet DebianJonathan Dowland: Backing the wrong horse?

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took for managing packages of Ruby software: Ruby Gems. They interact really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.

Recently I dusted off some 12-year-old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder, had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or the really awkward support for Unicode in Python 2 (this reminds me of Perl for all the wrong reasons).

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

Planet DebianSteve Kemp: Recommendations for software?

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwned?
    • If not would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly the excellent Have I Been Pwned site provides an API which allows you to test if a password has been previously included in a leak. This is great, and I've integrated their API in a couple of my own applications, but I was thinking on the bus home tonight it might be worth tying into PAM.

Sure, in the interests of security, people should use key-based authentication for SSH, but… most people don't. Even so, if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.

Sam VargheseTwenty-five years after Oslo, there is nothing to show for it

Thursday, September 13, marked 25 years since Israel took the (then) radical step of recognising the Palestine Liberation Organisation in a Norway-brokered deal that many thought would ultimately lead to a two-state solution in the Middle East and bring an end to one of the most bitter feuds between nations.

Alas, it was not to be. Twenty-five years on, what remains of land that could have been a Palestinian homeland is bantustans, and things seem to be going from bad to worse. With the US recognising Jerusalem as Israel’s capital, it is now inconceivable that Tel Aviv will ever countenance giving up part of the city to be the capital of a future Palestinian state.

It brings back memories for me, as it was the biggest news event that I have managed in nearly 40 years as a journalist in three countries. In 1993, I was deputy chief sub-editor at the Khaleej Times in Dubai, and that September I was producing the daily editions as the chief sub-editor, my good mate T.K. Achuthan, was on leave.

He returned to work on Monday, September 13. When he came in to work, he took one look at the copy sitting there for the edition and told me that it would be better if I handled the edition as I had been following the whole thing. That’s how I came to produce what ultimately was the newsiest front page of my journalistic life.

There was a public sector strike in India the next day and the day chief-sub, N.J. Joseph, had kept the story for the front page. When I sat down to take stock, I told him that it could go on an inside page; he was quite annoyed about that, accusing me of playing down events in India.

So I waited for him to go home and then sent the strike news to the sub-editor who was laying out the late news page — the news night shift basically made those two pages, apart from the city pages — and asked him to make it the lead.

We had a rather flaky gent as editor, one Bikram Vohra, who had little news sense. He would call every night to find out what was going on the front as many editors do; when he called that night, about 7.30pm, he said that there was something about the Middle East on TV (something? this was the biggest story for decades and the editor of the biggest English daily in the Gulf and Middle East was calling it “something”!) and asked me if I would be using anything on the front page.

I was quite taken aback at such a silly question; when I realised he was serious, I told him that we would use a story on the front. I did not elaborate.

Achuthan is a newsman to the core and it is a measure of his professionalism that he opted to play second fiddle to me that night. We finished on time as we normally did when the pair of us worked together – the proprietors of the Khaleej Times were very particular about timings as the paper had to be sent to nearby Persian Gulf states by air. The entire front page was about the Israel-PLO deal, with a massive banner headline.

Both Achuthan and I had a quiet sense of satisfaction after we put the edition to bed. The front page looked very good and there were strong stories right down the page.

The next day, Vohra did not talk to me. I reckon he was annoyed that he had betrayed his ignorance about the Middle East to me.

The day chief sub, Joseph, came to me the next evening when the shifts changed and said that it was the right decision to put the Indian strike story inside. I did not crow about it; I have often made wrong decisions like that myself. One cannot see a front page in one’s head until one is right in the thick of it. One just trusts one’s instincts and goes with what seems right.

It’s sad that 25 years on, the Oslo accords are just seen as another wasted opportunity on the bloody path that is called the Middle East peace process.

,

Planet Linux AustraliaDavid Rowe: Porting a LDPC Decoder to a STM32 Microcontroller

A few months ago, FreeDV 700D was released. In that post, I asked for volunteers to help port 700D to the STM32 microcontroller used for the SM1000. Don Reid, W7DMR stepped up – and has been doing a fantastic job porting modules of C code from the x86 to the STM32.

Here is a guest post from Don, explaining how he has managed to get a powerful LDPC decoder running on the STM32.

LDPC for the STM32

The 700D mode and its LDPC function were developed and used on desktop (x86) platforms. The LDPC decoder is implemented in the mpdecode_core.c source file.

We'd like to run the decoder on the SM1000 platform which has an STM32F4 processor. This means dealing with the following issues:

  • The code used doubles in several places, while the stm32 has only single precision floating point hardware.
  • It was thought that the memory used might be too much for a system with just 192k bytes of RAM.
  • There are 2 LDPC codes currently supported, HRA_112_112 used in 700D and, H2064_516_sparse used for Balloon Telemetry. While only the 700D configuration needed to work on the STM32 platform, any changes made to the mainstream code needed to work with the H2064_516_sparse code.

Testing

Before making changes it was important to have a well defined test process to validate new versions. This allowed each change to be validated as it was made. Without this the final debugging would likely have been very difficult.

The ldpc_enc utility can generate standard test frames and the ldpc_dec utility can receive the frames and measure bit errors, so errors can be detected directly and BER computed. ldpc_enc can also output soft decision symbols to emulate what the modem would receive and pass into the LDPC decoder. A new utility, ldpc_noise, was written to add AWGN to the sample values between the above utilities. Here is a sample run:

$ ./ldpc_enc /dev/zero - --sd --code HRA_112_112 --testframes 100 | ./ldpc_noise - - 1 | ./ldpc_dec - /dev/null --code HRA_112_112 --sd --testframes
single sided NodB = 1.000000, No = 1.258925
code: HRA_112_112
code: HRA_112_112
Nframes: 100
CodeLength: 224 offset: 0
measured double sided (real) noise power: 0.640595
total iters 3934
Raw Tbits..: 22400 Terr: 2405 BER: 0.107
Coded Tbits: 11200 Terr: 134 BER: 0.012

ldpc_noise is passed a “No” (N-zero) level of 1dB, Eb=0, so Eb/No = -1, and we get a 10% raw BER, and 1% after LDPC decoding. This is a typical operating point for 700D.

A shell script (ldpc_check) combines several runs of these utilities, checks the results, and provides a final pass/fail indication.

All changes were made to new copies of the source files (named *_test*) so that current users of codec2-dev were not disrupted, and so that the behaviour could be compared to the “released” version.

Unused Functions

The code contained several functions which are not used anywhere in the FreeDV/Codec2 system. Removing these made it easier to see the code that was used and allowed the removal of some variables and record elements to reduce the memory used.

First Compiles

The first attempt at compiling for the stm32 platform showed that the code required more memory than was available on the processor. The STM32F405 used in the SM1000 system has 128k bytes of main RAM.

The largest single item was the DecodedBits array, which was used to save the results for each iteration, using 32 bit integers, one per decoded bit.

    int *DecodedBits = calloc( max_iter*CodeLength, sizeof( int ) );

This used almost 90k bytes!

The decode function used for FreeDV (SumProducts) used only the last decoded set. So the code was changed to save only one pass of values, using 8 bit integers. This reduced the ~90k bytes to just 224 bytes!
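As a before-and-after sketch of that change (CodeLength is 224 for HRA_112_112, as in the post; the max_iter value of 100 used here is an assumption for illustration, not taken from the code):

#include <stdint.h>
#include <stdlib.h>

#define MAX_ITER    100   /* assumed iteration limit, for illustration only */
#define CODE_LENGTH 224   /* HRA_112_112, from the post                     */

int main(void)
{
    /* Before: one 32-bit int per decoded bit, per iteration,
       roughly 100 * 224 * 4 bytes = ~90k bytes.               */
    int *decoded_all = calloc(MAX_ITER * CODE_LENGTH, sizeof(int));

    /* After: only the most recent pass, one byte per bit = 224 bytes. */
    uint8_t *decoded_last = calloc(CODE_LENGTH, sizeof(uint8_t));

    free(decoded_all);
    free(decoded_last);
    return 0;
}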

The FreeDV 700D mode requires one LDPC decode every 160ms. At this point the code compiled and ran but was too slow – using around 25ms per iteration, or 300 – 2500ms per frame!

C/V Nodes

The two main data structures of the LDPC decoder are c_nodes and v_nodes. Each is an array where each node contains additional arrays. In the original code these structures used over 17k bytes for the HRA_112_112 code.

Some of the elements of the c and v nodes (index, socket) are indexes into these arrays. Changing these from 32 bit to 16 bit integers and changing the sign element into an 8 bit char saved about 6k bytes.

The next problem was the run time. Each 700D frame must be fully processed in 160 ms and the decoder was taking several times this long. The CPU load was traced to the phi0() function, which was calling two maths library functions. After optimising the phi0 function (see below) the largest use of time was the index computations of the nested loops which accessed these c and v node structures.

With each node having separate arrays for index, socket, sign, and message, these indexes had to be computed separately. By changing the node structures to hold an array of sub-nodes instead, this index computation time was significantly reduced. An additional benefit was about a 4x reduction in the number of memory blocks allocated. Each allocation block includes additional memory used by malloc() and free(), so reducing the number of blocks reduces memory use and possible heap fragmentation.
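
In outline the change looks something like this (a sketch only; the field names and types are illustrative, not the exact codec2-dev definitions):

    #include <stdint.h>

    struct sub_node {            /* one sub-node per edge of a c or v node   */
        uint16_t index;          /* 16 bit index instead of a 32 bit int     */
        uint16_t socket;
        float    message;
        uint8_t  sign;           /* 8 bit char instead of a 32 bit int       */
    };

    struct node {
        int degree;
        struct sub_node *subs;   /* one contiguous block, so the inner loops */
                                 /* need one index computation per edge      */
    };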

Additional time was saved by only calculating the degree elements of the c and v nodes at start-up rather than for every frame. That data is kept in memory that is statically allocated when the decoder is initialized. This costs some memory but saves time.

This still left the code calling malloc several hundred times for each frame and then freeing that memory later. This sort of memory allocation activity has been known to cause troubles in some embedded systems and is usually avoided. However the LDPC decoder needed too much memory to allow it to be statically allocated at startup and not shared with other parts of the code.

Instead of allocating an array of sub-nodes for each c or v node, a single array of bytes is passed in from the parent. The initialization function which calculates the degree elements of the nodes also counts up the memory space needed and reports this to its caller. When the decoder is called for a frame, the node’s pointers are set to use the space of this array.

Other arrays that the decoder needs were added to this to further reduce the number of separate allocation blocks.

This leaves the decision of how to allocate and share this memory up to a higher level of the code. The plan is to continue to use malloc() and free() at that higher level initially. Further testing can be done to look for memory leakage and optimise overall memory usage on the STM32.
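
The calling pattern then looks roughly like this (the function and variable names here are placeholders, not the actual codec2 API):

    /* Init: calculate the degree data and count the bytes the decoder needs. */
    int nbytes = ldpc_init(&ldpc, &code_config);          /* placeholder names */

    /* The higher level decides how to provide that memory, e.g. one malloc(). */
    uint8_t *mem = (uint8_t *)malloc(nbytes);

    /* Per frame: node pointers are set to slices of mem, so the decoder       */
    /* itself makes no malloc()/free() calls while running.                    */
    ldpc_decode(&ldpc, mem, input_symbols, decoded_bits);  /* placeholder names */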

PHI0

There is a non linear function named “phi0” which is called inside several levels of nested loops within the decoder. The basic operation is:

   phi0(x) = ln( (e^x + 1) / (e^x - 1) )

The original code used double precision exp() and log(), even though the input, output, and intermediate values are all floats. This was probably an oversight. Changing to the single precision versions expf() and logf() provided some improvement, but not enough to meet our CPU load goal.
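
In its direct form the function is just a pair of single precision library calls (a sketch; the real code also has to guard the behaviour near x = 0 and for large x):

    #include <math.h>

    static float phi0_direct(float x)
    {
        return logf( (expf(x) + 1.0f) / (expf(x) - 1.0f) );
    }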

The original code used a piecewise approximation for some input values. This was extended to cover the full range of inputs. The code was also structured differently to make it faster. The original code had a sequence of if () else if () else if () … This can take a long time when there are many steps in the approximation. Instead, two ranges of input values are covered with linear steps implemented as table lookups.

The third range of inputs is non linear and is handled by a binary tree of comparisons to reduce the number of levels. All of this code is implemented in a separate file to allow the original or optimised version of phi0 to be used.

The ranges of inputs are:

             x >= 10      result always 0
      10   > x >=  5      steps of 1/2
       5   > x >= 1/16    steps of 1/16
    1/16   > x >= 1/4096  use 1/32, 1/64, 1/128, .., 1/4096
    1/4096 > x            result always 10

The range of values that will appear as inputs to phi0() can be represented as a fixed point value stored in a 32 bit integer. By converting to this format at the beginning of the function, the code for all of the comparisons and lookups is reduced, and uses only shifts and integer operations. The step levels use powers of 2, which lets the table index computations use shifts and makes the fraction constants of the comparisons simple ones that the ARM instruction set can create efficiently.
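
As a rough illustration of the fixed point and table lookup idea (a sketch only: the Q format, table sizes, and the handling of the non linear region below 1/16 are assumptions, not the actual codec2 code):

    #include <stdint.h>

    #define PHI0_Q 12                       /* x held as a Q12 fixed point value    */

    static float phi0_tab_hi[10];           /* 10 > x >= 5,    steps of 1/2         */
    static float phi0_tab_lo[80];           /* 5  > x >= 1/16, steps of 1/16        */
                                            /* both filled at init time (not shown) */

    static float phi0_lookup(float xf)
    {
        int32_t x = (int32_t)(xf * (1 << PHI0_Q));       /* float -> fixed point    */

        if (x >= (10 << PHI0_Q))
            return 0.0f;                                 /* x >= 10: always 0       */
        if (x >= (5 << PHI0_Q))                          /* 10 > x >= 5             */
            return phi0_tab_hi[(x - (5 << PHI0_Q)) >> (PHI0_Q - 1)];
        if (x >= (1 << (PHI0_Q - 4)))                    /* 5 > x >= 1/16           */
            return phi0_tab_lo[(x - (1 << (PHI0_Q - 4))) >> (PHI0_Q - 4)];

        /* 1/16 > x: the non linear region would be handled here by a small     */
        /* tree of integer comparisons; below 1/4096 the result is clamped.     */
        return 10.0f;
    }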

Misc

Two of the configuration values are scale factors that get multiplied inside the nested loops. These values are 1.0 in both of the current configurations, so that floating point multiply was removed.

Results

The optimised LDPC decoder produces the same output BER as the original.

The optimised decoder uses 12k of heap at init time and needs another 12k of heap at run time. The original decoder only used heap at run time, which was returned after each call. We have traded off memory that is held for the life of the decoder against the many small heap allocations, which also reduces execution time. It is probably possible to reduce this static space further, perhaps at the cost of longer run times.

The maximum time to decode a frame using 100 iterations is 60.3 ms and the median time is 8.8 ms, far below our budget of 160ms!

Future Possibilities

The remaining floating point computations in the decoder are addition and subtraction, so the values could be represented as fixed point values to eliminate the floating point operations.

Some values which are computed from the configuration (degree, index, socket) are constants and could be generated at compile time using a utility called by cmake. However this might actually slow down the decoder, as the index computations might become more expensive.

The index and socket elements of C and V nodes could be pointers instead of indexes into arrays.

Experiments would be required to ensure these changes actually speed up the decoder.

Bio

Don got his first amateur license in high school but was soon distracted with getting an engineering degree (BSEE, Univ. of Washington), then family and life. He started his IC design career with the CPU for the HP-41C calculator. Then came ICs for printers and cameras, work on IC design tools, and some firmware for embedded systems. Exposure to ARES public service led to a new amateur license – W7DMR – and active involvement with ARES. He recently retired after 42 years and wanted to find an open project that combined radio, embedded systems and DSP.

Don lives in Corvallis, Oregon, USA, a small city with the state technical university and several high tech companies.

Open Source Projects and Volunteers

Hi it’s David back again ….

Open source projects like FreeDV and Codec 2 rely on volunteers to make them happen. The typical pattern is people get excited, start some work, then drift away after a few weeks. Gold is the volunteer that consistently works week in, week out until their particular project is done. The number of hours/week doesn’t matter – it’s the consistency that is really helpful to the projects. I have a few contributors/testers/users of FreeDV in this category and I appreciate you all deeply – thank you.

If you would like to help out, please contact me. You’ll learn a lot and get to work towards an open source future for HF radio.

If you can’t help out technically, but would like to support this work, please consider Patreon or PayPal.

Reading Further

LDPC using Octave and the CML library. Our LDPC decoder comes from Coded Modulation Library (CML), which was originally used to support Matlab/Octave simulations.

Horus 37 – High Speed SSTV Images. The CML LDPC decoder was converted to a regular C library, and used for sending images from High Altitude Balloons.

Steve Ports an OFDM modem from Octave to C. Steve is another volunteer who put in a fine effort on the C coding of the OFDM modem. He recently modified the modem to handle high bit rates for voice and HF data applications.

Rick Barnich KA8BMA did a fantastic job of designing the SM1000 hardware. Leading edge, HF digital voice hardware, designed by volunteers.

Cryptogram: Friday Squid Blogging: Dissecting a Giant Squid

Lessons learned.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TED: Curing cancer one nanoparticle at a time, and more news from TED speakers

As usual, the TED community is hard at work — here are some highlights:

A new drug-delivering nanoparticle. Paula Hammond, the head of the Department of Chemical Engineering at MIT, is part of a research team that has developed a new nanoparticle designed to treat a kind of brain tumor called glioblastoma multiforme. The nanoparticles deliver drugs to the brain that work in two ways — to destroy the DNA of tumor cells, and to impede the reparation of those cells. The researchers were able to shrink tumors and stop them from growing back in mice — and there’s hope this technology can be used for human applications in the future. (Watch Hammond’s TED Talk).

Reflections on grief, loss and love. Amy Krouse Rosenthal penned a poignant, humorous and heart-rending love letter to her husband — published in The New York Times ten days before her death — that resonated deeply with readers across the world. In the year since, Jason Rosenthal established a foundation in her name to fund ovarian cancer research and childhood literacy initiatives. Following the anniversary of Amy’s death, Rosenthal responded to her letter in a moving reflection on mourning and the gifts of generosity she left in her wake. “We did our best to live in the moment until we had no more moments left,” he wrote for The New York Times. “Amy continues to open doors for me, to affect my choices, to send me off into the world to make the most of it. Recently I gave a TED Talk on the end of life and my grieving process that I hope will help others.” (Watch Rosenthal’s TED Talk.)

Why we need to change our perceptions of teenagers. Neurologist Sarah-Jayne Blakemore urges us to reconsider the way we understand and treat teenagers, especially in school settings. (She wrote a book about the secret life of the teenage brain in March.) According to the latest research, teenagers shed 17% of their grey matter in the prefrontal cortex between childhood and adulthood, which, as Blakemore says, explains that traditional “bad” behaviors like sleeping in late and moodiness are a result of cognitive changes, not laziness or abrasiveness. (Watch Blakemore’s TED Talk.)

Half empty or half full? Research by Dan Gilbert indicates that our decisions may be more faulty than we think — and that we may be predisposed to seeing problems even when they aren’t there. In a recent paper Gilbert co-authored, researchers found that our judgment doesn’t follow fixed rules, but rather, our decisions are more relative. In one experiment, participants were asked to look at dots along a color spectrum from blue to purple, and note which dots were blue; at first, the dots were shown in equal measure, but when blue dots were shown less frequently, participants began marking dots they previously considered purple as blue (this video does a good job explaining). In another experiment, participants were more likely to mark ethical papers as unethical, and nonthreatening faces as threatening, when the previously-set negative stimulus was shown less frequently. This behavior — dubbed “prevalence-induced concept change” — has broad implications; the paper suggests it may explain why social problems never seem to go away, regardless of how much work we do to fix them. (Watch Gilbert’s TED Talk).

Terrifying insights from the world of parasites. Ed Yong likes to write about the creepy and uncanny of the natural world. In his latest piece for The Atlantic, Yong offered a deeper view into the bizarre habits and powers of parasitic worms. Based on research by Nicolle Demandt and Benedikt Saus from the University of Munster, Yong described how some tapeworms capitalize on the way fish shoals guide and react to each other’s behaviors and movements. Studying stickleback fish, Demandt and Saus realized parasite-informed decisions of infected sticklebacks can influence the behavior of uninfected fish, too. This means that if enough infected fish are led to dangerous situations by the controlling powers of the tapeworms, uninfected fish will be impacted by those decisions — without ever being infected themselves. (Read more of Yong’s work and watch his TED Talk.)

A new documentary on corruption within West African football. Ghanaian investigative journalist Anas Aremeyaw Anas joined forces with BBC Africa to produce an illuminating and hard-hitting documentary exposing fraud and corruption in West Africa’s football industry. In an investigation spanning two years, almost 100 officials were recorded accepting cash “gifts” from a slew of undercover reporters from Anas’ team posing as business people and investors. The documentary has already sent shock-waves throughout Ghana — including FIFA bans and resignations from football officials across the country. (Watch the full documentary and Anas’ TED Talk.)

 

Cryptogram: Five-Eyes Intelligence Services Choose Surveillance Over Security

The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" where they claim their needs for surveillance outweigh everyone's needs for security and privacy.

...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.

The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake: we need to decide, as nations and as society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are too critical to our society and -- now that they can affect the world in a direct physical manner -- to our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.

This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.

Yes, increasing our security will make it harder for us to eavesdrop, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.

Cory Doctorow has a good reaction.

Slashdot post.

Cryptogram: Quantum Computing and Cryptography

Quantum computing is a new way of computing -- one that could allow humankind to perform computations that are simply impossible using today's computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing "quantum-resistant" public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn't. But even assuming aliens have developed the technology to its full potential, quantum computing doesn't spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we're working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we'll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory -- albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it's easier to smash a plate than to glue all the pieces back together, it's much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind -- one-way functions and trap-door one-way functions -- underlie all of cryptography.

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can't be broken before the heat death of the universe.

With symmetric cryptography -- the kind used to encrypt messages, files, and drives -- that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I'm hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it's 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
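
To spell out where that figure comes from (a quick back of the envelope check: each extra key bit doubles the brute force cost, so 128 extra bits multiply it by 2 to the power 128):

    \[ 2^{256-128} = 2^{128} = 340{,}282{,}366{,}920{,}938{,}463{,}463{,}374{,}607{,}431{,}768{,}211{,}456 \approx 3.4 \times 10^{38} \]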

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you'll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer "breaking" an algorithm, what happened is that they've found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we're trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn't too bad. Grover's algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.

For public-key cryptography, the results are more dire. Shor's algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That's not enough of a sustainable edge.

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don't currently exist, and no one knows when -- or even if -- we'll be able to build one. We also don't know what sorts of practical difficulties will arise when we try to implement Grover's or Shor's algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an unsurmountable problem.) On the other hand, we don't know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques -- but they're going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.

In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning standards take time. Luckily, we have time. Practical quantum computing seems to always remain "ten years in the future," which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover's algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It's possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution -- at scale -- with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it's possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I'm not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we'll still have security.

It's a weird future. Maybe the whole idea of number theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back to where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on -- does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There's one more future scenario to consider, one that doesn't require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that'll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads -- and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There's an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

Planet Debian: Wouter Verhelst: Autobuilding Debian packages on salsa with Gitlab CI

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and -test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest
.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/
.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null
build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid
test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the previous stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or whatever the salsa maintainers have configured (I'm not sure what that is)
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola version

UPDATE (2018-09-16): dropped the autoreconf call, isn't needed (it was there because it didn't work from the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

Planet Debian: Lars Wirzenius: New website for vmdb2

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

Worse Than Failure: Error'd: Stay Away From California

"Deep down, I knew this was one of the most honest labels I've ever seen," wrote Bob E.

 

"Some people struggle to get and keep a high credit score. I, on the other hand, appear to have gotten a high score," writes Paweł.

 

Steve M. wrote, "I'm trying to register our travel insurance with ROL Cruises, but our travel policy has been rejected because it's under age."

 

"This happens every time my mom tries to place a call in her car," writes Dylan S., "Strangely enough, the call still goes through."

 

"While I appreciate the one year warranty, I don't think I will be using these tea cakes to replace my HP battery," J.R. wrote.

 

Neil D. writes, "So, am I supposed to enter -1 and then I can buy one?"

 


Sociological Images: Stories, Storms, and Simulations

This week Hurricane Florence is making landfall in the southeastern United States. Sociologists know that the impact of natural disasters isn’t equally distributed and often follows other patterns of inequality. Some people cannot flee, and those who do often don’t go very far from their homes in the evacuation area, but moving back after a storm hits is often a multi-step process while people wait out repairs.

We often hear that climate change is making these problems worse, but it can be hard for people to grasp the size of the threat. When we study social change, it is useful to think about alternatives to the world that is—to view a different future and ask what social forces can make that future possible. Simulation studies are especially helpful for this, because they can give us a glimpse of how things may turn out under different conditions and make that thinking a little easier.

This is why I was struck by a map created by researchers in the Climate Extremes Modeling Group at Stony Brook University. In their report, the first of its kind, Kevin Reed, Alyssa Stansfield, Michael Wehner, and Colin Zarzycki mapped the forecast for Hurricane Florence and placed it side-by-side with a second forecast that adjusted air temperature, specific humidity, and sea surface temperature to conditions without the effects of human induced climate change. It’s still a hurricane, but the difference in the size and severity is striking:

Reports like this are an important reminder that the effects of climate change are here, not off in the future. It is also interesting to think about how reports like these could change the way we talk about all kinds of social issues. Sociologists know that narratives are powerful tools that can change minds, and projects like this show us where simulation can make for more powerful storytelling for advocacy and social change.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cory Doctorow: To do in LA this Saturday: I’m speaking at the Pasadena Loves YA festival!

Angelenos! Bring your teens to the Pasadena Loves YA festival this Saturday; I’m chairing a panel on graphic novels with Mairghread Scott and Tillie Walden; other panels and events go on all day, from 11-4PM, at the Central Branch of Pasadena Public Library, 285 E Walnut St, Pasadena CA 91101. Admission is free!

Planet Debian: Norbert Preining: Gaming: Rise of the Tomb Raider

Over the last weekend I finally finished Rise of the Tomb Raider. As I wrote exactly 4 months ago when I started the game, I am a complete newbie to this kind of game, and was blown away by the visual quality and great gameplay.

I was really surprised how huge an area I had to explore over the four month. Many of them with really excellent nature scenery, some of them with a depressingly dark and solemn atmosphere.

Another thing I learned is that the Challenge Tombs – some kind of puzzle challenges – haven’t been that important in previous games. I enjoyed these puzzles much more than the fighting sequences (also because I am really bad at combat and have to die soo many times before I succeed!).

Lots of sliding down on ropes, jumping, running, diving, often into the unknown.

In the last part of the game when Lara enters into the city underneath the glacier one is reminded of the scene when Frodo tries to enter into Mordor, seeing all the dark riders and troops.

The final approach starts, there is still a long way (and many strange creatures to fight), but at least the final destination is in sight!

I finished the game with 100%, because I went back to all the areas and used Stella’s Tomb Raider Site walkthrough to help me find all the items. I think it is practically impossible within a lifetime to find all the items alone without help. This is especially funny because in one of the trailers one of the developers mentions that they reckon on 15h of gameplay, and 30h if you want to finish at 100%. It took me 58h (!) to finish with 100% … and that with a walkthrough!!!

Anyway, tomorrow the Shadow of the Tomb Raider will be released, and I could also start the first game of the series, Tomb Raider, but I got a bit worn out by all the combat activity and decided to concentrate on a hard-core puzzle game, The Witness, which features loads and loads of puzzles, taught from simple to complex, and combined, to create a very interesting game. Now I only need the time …


Planet Debian: Holger Levsen: 20180913-reproducible-builds-paris-meeting

Reproducible Builds 2018 Paris meeting

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris, and we will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join! And this is the space we'll bring to life:

And whilst the exact content of the meeting will be shaped by the participants on the day, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

Planet Debian: Daniel Pocock: What is the difference between moderation and censorship?

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Cryptogram: Security Risks of Government Hacking

Some of us -- myself included -- have proposed lawful government hacking as an alternative to backdoors. A new report from the Center for Internet and Society looks at the security risks of allowing government hacking. They include:

  • Disincentive for vulnerability disclosure
  • Cultivation of a market for surveillance tools
  • Attackers co-opt hacking tools over which governments have lost control
  • Attackers learn of vulnerabilities through government use of malware
  • Government incentives to push for less-secure software and standards
  • Government malware affects innocent users.

These risks are real, but I think they're much less than mandating backdoors for everyone. From the report's conclusion:

Government hacking is often lauded as a solution to the "going dark" problem. It is too dangerous to mandate encryption backdoors, but targeted hacking of endpoints could ensure investigators access to same or similar necessary data with less risk. Vulnerabilities will never affect everyone, contingent as they are on software, network configuration, and patch management. Backdoors, however, mean everybody is vulnerable and a security failure fails catastrophically. In addition, backdoors are often secret, while eventually, vulnerabilities will typically be disclosed and patched.

The key to minimizing the risks is to ensure that law enforcement (or whoever) report all vulnerabilities discovered through the normal process, and use them for lawful hacking during the period between reporting and patching. Yes, that's a big ask, but the alternatives are worse.

This is the canonical lawful hacking paper.

Planet Debian: Ben Hutchings: Debian LTS work, August 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

Worse Than Failure: Crazy Like a Fox(Pro)

“Database portability” is one of the key things that modern data access frameworks try and ensure for your application. If you’re using an RDBMS, the same data access layer can hopefully work across any RDBMS. Of course, since every RDBMS has its own slightly different idiom of SQL, and since you might depend on stored procedures, triggers, or views, you’re often tied to a specific database vendor, and sometimes a version.


And really, for your enterprise applications, how often do you really change out your underlying database layer?

Well, for Eion Robb, it’s a pretty common occurrence. Their software, even their SaaS offering of it, allows their customers a great deal of flexibility in choosing a database. As a result, their PHP-based data access layer tries to abstract out the ugly details, they restrict themselves to a subset of SQL, and have a lot of late nights fighting through the surprising bugs.

The databases they support are the big ones- Oracle, SQL Server, MySQL, and FoxPro. Oh, there are others that Eion’s team supports, but it’s FoxPro that’s the big one. Visual FoxPro’s last version was released in 2004, and the last service pack it received was in 2007. Not many vendors support FoxPro, and that’s one of Eion’s company’s selling points to their customers.

The system worked, mostly. Until one day, when it absolutely didn’t. Their hosted SaaS offering crashed hard. So hard that the webserver spinlocked and nothing got logged. Eion had another late night, trying to trace through and figure out: which customer was causing the crash, and what were they doing?

Many hours of debugging and crying later, Eion tracked down the problem to some code which tracked sales or exchanges of product- transactions which might not have a price when they occur.

$query .= odbc_iif("SUM(price) = 0", 0, "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ")") . " AS price_avg ";

odbc_iif was one of their abstractions- an iif function, aka a ternary. In this case, if the SUM(price) isn’t zero, then divide the SUM(priceact) by the number of non-zero prices in the price column. The outer check guards against the case where there isn’t a single non-zero price entry. Then they can average out the actual price across all those non-zero price entries, ignoring all the “free” exchanges.

This line wasn’t failing all the time, which added to Eion’s frustration. It failed when two very specific things were true. The first factor was the database- it only failed in FoxPro. The second factor was the data- it only failed when the first product in the resultset had no entries with a price greater than zero.

Why? Well, we have to think about where FoxPro comes from. FoxPro’s design goal was to be a data-driven programming environment for non-programmers. Like a lot of those environments, it tries its best not to yell at you about types. In fact, if you’re feeding data into a table, you don’t even have to specify the type of the column- it will pick the “correct” type by looking at the first row.

So, look at the iif again. If the SUM(price) = 0 we output 0 in our resultset. Guess what FoxPro decides the datatype must be? A single digit number. If the second row has an average price of, say, 9.99, that’s not a single digit number, and FoxPro explodes and takes down everything else with it.

Eion needed to fix this in a way that didn’t break their “database agnostic” code, and thus would continue to work in FoxPro and all the other databases, with at least predictable errors (that don’t crash the whole system). In the moment, suffering through the emergency, Eion changed the code to this:

$query .= "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ")") . " AS price_avg ";

Without the zero check, any products which had no sales would trigger a divide-by-zero error. This was a catchable, trappable error, even in FoxPro. Eion made the change in production, got the system back up and their customers happy, and then actually put the change in source control with a very apologetic commit message.


Planet Debian: Norbert Preining: TeX Live contrib updates

It is now more than a year since I took over tlcontrib from Taco and started providing it at the TeX Live contrib repository. It now serves the old TeX Live 2017 as well as the current TeX Live 2018, and since last year the number of packages has increased from 52 to 70.

Recent changes include pTeX support packages for non-free fonts and more packages from the AcroTeX bundle. In particular since the last post the following packages have been added: aeb-mobile, aeb-tilebg, aebenvelope, cjk-gs-integrate-macos, comicsans, datepicker-pro, digicap-pro, dps, eq-save, fetchbibpes, japanese-otf-nonfree, japanese-otf-uptex-nonfree, mkstmpdad, opacity-pro, ptex-fontmaps-macos, qrcstamps.

Here I want to thank Jürgen Gilg for reminding me consistently of updates I have missed, big thanks!

To recall what TLcontrib is for: It collects packages that are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. The full list of packages can be seen here.

Please see the main page for Quickstart, History, and details about how to contribute packages.

Last but not least, while this is a service to make access to non-free packages easier for users of the TeX Live Manager, our aim is to have as many packages as possible made completely free and included into TeX Live proper!

Enjoy.

Planet Debian: Dirk Eddelbuettel: digest 0.6.17

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.

This release brings another robustification, thanks to Radford Neal, who noticed a segfault in 32 bit mode on Sparc running Solaris. Yay for esoteric setups. But thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. But two files remain with UBSAN issues; help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Linux Australia: David Rowe: Tony K2MO Tests FreeDV

Tony, K2MO, has recently published some fine videos of FreeDV 1600, 700C, and 700D passing through simulated HF channels. The results are quite interesting.

This video shows the 700C mode having the ability to decode with 50% of its carriers removed:

This 700C modem sends two copies of the tx signal at high and low frequencies, a form of diversity to help overcome selective fading. These are then combined at the receiver.

Tony’s next video shows three FreeDV modes passing through a selective fading HF channel simulation:

This particular channel has slow fading, a notch gradually creeps across the spectrum.

Tony originally started testing to determine which FreeDV mode worked best on NVIS paths. He used path parameters based on VOACAP prediction models which show the relative time delay and signal power for each propagation mode, i.e., F1, F2:

Note the long delay paths (5ms). The CCIR NVIS path model also suggests a path delay of 7ms. That much delay puts the F-layer at 1000 km (well out into space), which is a bit of a puzzle.
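
As a rough sanity check of that height figure (assuming the 7 ms is the up-and-down delay of a single near vertical hop travelling at the speed of light):

    \[ h \approx \frac{c\,\tau}{2} = \frac{3\times10^{5}\,\mathrm{km/s} \times 7\,\mathrm{ms}}{2} \approx 1050\,\mathrm{km} \]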

This video shows the results of the VOCAP NVIS path:

In this case 700C does better than 700D. The 700C modem (COHPSK) is a parallel tone design, which is more robust to long multipath delays. The OFDM modem used for 700D is configured for multipath delays of up to 2ms, but tends to fall over after that as the “O” for Orthogonal assumption breaks down. It can be configured for longer delays, at a small cost in low SNR performance.

The OFDM modem gives much tighter packing for carriers, which allows us to include enough bits for powerful FEC, and have a very narrow RF bandwidth compared to 700C. FreeDV 700D has the ability to perform interleaving (Tools-Options “FreeDV 700 Options”), which is a form of time diversity. This feature is not widely used at present, but simulations suggest it is worth up to 4dB.

It would be interesting to combine frequency diversity, LDPC, and OFDM in a wider bandwidth signal. If anyone is interested in doing a little C coding to try this let me know.

I’ve actually seen long delay on NVIS paths in the “real world”. Here is a 40M 700D contact between myself and Mark, VK5QI, who is about 40km away from me. Note at times there are notches on the waterfall 200Hz apart, indicating a round trip path delay of 1500km:

Reading Further

Modems for HF Digital Voice Part 1, explaining the frequency diversity used in 700C
Testing FreeDV 700C, shows how to use some built in test features like noise insertion and interfering carriers.
FreeDV 700D
FreeDV User Guide, including new 700D features like interleaving

Krebs on Security: U.S. Mobile Giants Want to be Your Online Identity

The four major U.S. wireless carriers today detailed a new initiative that may soon let Web sites eschew passwords and instead authenticate visitors by leveraging data elements unique to each customer’s phone and mobile subscriber account, such as location, customer reputation, and physical attributes of the device. Here’s a look at what’s coming, and the potential security and privacy trade-offs of trusting the carriers to handle online authentication on your behalf.

Tentatively dubbed “Project Verify” and still in the private beta testing phase, the new authentication initiative is being pitched as a way to give consumers both a more streamlined method of proving one’s identity when creating a new account at a given Web site and a replacement for passwords and one-time codes when logging in to existing accounts at participating sites.

Here’s a promotional and explanatory video about Project Verify produced by the Mobile Authentication Task Force, whose members include AT&T, Sprint, T-Mobile and Verizon:

The mobile companies say Project Verify can improve online authentication because they alone have access to several unique signals and capabilities that can be used to validate each customer and their mobile device(s). This includes knowing the approximate real-time location of the customer; how long they have been a customer and used the device in question; and information about components inside the customer’s phone that are only accessible to the carriers themselves, such as cryptographic signatures tied to the device’s SIM card.

The Task Force currently is working on building its Project Verify app into the software that gets pre-loaded onto mobile devices sold by the four major carriers. The basic idea is that third-party Web sites could let the app (and, by extension, the user’s mobile provider) handle the process of authenticating the user’s identity, at which point the app would interactively log the user in without the need of a username and password.

In another example, participating sites could use Project Verify to supplement or replace existing authentication processes, such as two-factor methods that currently rely on sending the user a one-time passcode via SMS/text messages, which can be intercepted by cybercrooks.

The carriers also are pitching their offering as a way for consumers to pre-populate data fields on a Web site — such as name, address, credit card number and other information typically entered when someone wants to sign up for a new user account at a Web site or make purchases online.

Johannes Jaskolski, general manager for Mobile Authentication Task Force and assistant vice president of identity security at AT&T, said the group is betting that Project Verify will be attractive to online retailers partly because it can help them capture more sign-ups and sales from users who might otherwise balk at having to manually provide lots of data via a mobile device.

“We can be a primary authenticator where, just by authenticating to our app, you can then use that service,” Jaskolski said. “That can be on your mobile, but it could also be on another device. With subscriber consent, we can populate that information and make it much more effortless to sign up for or sign into services online. In other markets, we have found this type of approach reduced [customer] fall-out rates, so it can make third-party businesses more successful in capturing that.”

Jaskolski said customers who take advantage of Project Verify will be able to choose what types of data get shared between their wireless provider and a Web site on a per-site basis, or opt to share certain data elements across the board with sites that leverage the app for authentication and e-commerce.

“Many companies already rely on the mobile device today in their customer authentication flows, but what we’re saying is there’s going to be a better way to do this in a method that is intended from the start to serve authentication use cases,” Jaskolski said. “This is what everyone has been seeking from us already in co-opting other mobile features that were simply never designed for authentication.”

‘A DISMAL TRACK RECORD’

A key question about adoption of this fledgling initiative will be how much trust consumers place with the wireless companies, which have struggled mightily over the past several years to validate that their own customers are who they say they are.

All four major mobile providers currently are struggling to protect customers against scams designed to seize control over a target’s mobile phone number. In an increasingly common scenario, attackers impersonate the customer over the phone or in mobile retail stores in a bid to get the target’s number transferred to a device they control. When successful, these attacks — known as SIM swaps and mobile number port-out scams —  allow thieves to intercept one-time authentication codes sent to a customer’s mobile device via text message or automated phone-call.

Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley, said this new solution could make mobile phones and their associated numbers even more of an attractive target for cyber thieves.

Weaver said after he became a victim of a SIM swapping attack a few years back, he was blown away when he learned how simple it was for thieves to impersonate him to his mobile provider.

“SIM swapping is very much in the news now, but it’s been a big problem for at least the last half-decade,” he said. “In my case, someone went into a Verizon store, took over the account, and added themselves as an authorized user under their name — not even under my name — and told the store he needed a replacement phone because his broke. It took me three days to regain control of the account in a way that the person wasn’t able to take it back away from me.”

Weaver said Project Verify could become an extremely useful way for Web sites to onboard new users. But he said he’s skeptical of the idea that the solution would be much of an improvement for multi-factor authentication on third-party Web sites.

“The carriers have a dismal track record of authenticating the user,” he said. “If the carriers were trustworthy, I think this would be unequivocally a good idea. The problem is I don’t trust the carriers.”

It probably doesn’t help that all of the carriers participating in this effort were recently caught selling the real-time location data of their customers’ mobile devices to a host of third-party companies that utterly failed to secure online access to that sensitive data.

On May 10, The New York Times broke the news that a cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to local police forces across the United States.

A few weeks after the NYT scoop, KrebsOnSecurity broke the story that LocationSmart — a wireless data aggregator — hosted a public demo page on its Web site that would let anyone look up the real-time location data on virtually any U.S. mobile subscriber.

In response, all of the major mobile companies said they had terminated location data sharing agreements with LocationSmart and several other companies that were buying the information. The carriers each insisted that they only shared this data with customer consent, although it soon emerged that the mobile giants were instead counting on these data aggregators to obtain customer consent before sharing this location data with third parties, a sort of transitive trust relationship that appears to have been completely flawed from the get-go.

AT&T’s Jaskolski said the mobile giants are planning to use their new solution to further protect customers against SIM swaps.

“We are planning to use this as an additional preventative control,” Jaskolski said. “For example, just because you swap in a new SIM, that doesn’t mean the mobile authentication profile we’ve created is ported as well. In this case, porting your sim won’t necessarily port your mobile authentication profile.”

Jaskolski emphasized that Project Verify would not seek to centralize subscriber data into some new giant cross-carrier database.

“We’re not going to be aggregating and centralizing this subscriber data, which will remain with each carrier separately,” he said. “And this is very much a pro-competition solution, because it will be portable by design and is not designed to keep a subscriber stuck to one specific carrier. More importantly, the user will be in control of whatever gets shared with third parties.”

My take? The carriers can make whatever claims they wish about the security and trustworthiness of this new offering, but it’s difficult to gauge the sincerity and accuracy of those claims until the program is broadly available for beta testing and use — which is currently slated for sometime in 2019.

I am not likely to ever take the carriers up on this offer. In fact, I’ve been working hard of late to disconnect my digital life from these mobile providers. And I’m not about to volunteer more information than necessary beyond the bare minimum needed to have wireless service.

As with most things related to cybersecurity and identity online, much will depend on the default settings the carriers decide to stitch into their apps, and more importantly the default settings of third-party Web site apps designed to interact with Project Verify.

Jaskolski said the coalition is hoping to kick off the program next year in collaboration with some major online e-commerce platforms that have expressed interest in the initiative, although he declined to talk specifics on that front. He added that the mobile providers are currently working through exactly what those defaults might look like, but also acknowledged that some of those platforms have expressed an interest in forcing users to opt-out of sharing specific subscriber data elements.

“Users will be able to see exactly what attributes will be shared, and they can say yes or no to those,” he said. “In some cases, the [third-party site] can say here are some things I absolutely need, and here are some things we’d like to have. Those are some of the things we’re working through now.”

LongNowWhole Earth Catalog 50th Anniversary Celebration Takes Place October 13

Note: If you are interested in volunteering for this special event, please fill out a volunteer form.

50 years ago, Long Now co-founder Stewart Brand launched the Whole Earth Catalog — one of the most consequential publications of the 01960s American counterculture. The Whole Earth Catalog and its progeny (CoEvolution Quarterly, Whole Earth Review, and the WELL) inspired generations to realize their personal agency in shaping the world they wished to see, and helped usher in the modern environmental movement, the rise of the cyberculture and the web, and so much more.

A selection of pages from the Whole Earth Catalog.

The Whole Earth Catalog’s legacy will be celebrated on the occasion of its 50th anniversary on Saturday, October 13. Those who helped create the Whole Earth Catalog will reunite, and those who were influenced by it will share their stories.

Left: Some of the original staff members who worked on the Whole Earth Catalog, 01970. Middle: Stewart Brand at the Whole Earth Demise Party (01971). Right: Members of the WELL.

The evening program, which will take place at Fort Mason’s Cowell Theater, will feature an extraordinary group of guest speakers interacting on Whole Earth-related topics — in terms of then, now, and the future.

You can purchase tickets to the public program here. A livestream of the event will be available on the Whole Earth 50th page for those who cannot attend in person.

Planet DebianBenjamin Mako Hill: Disappointment on the new commute

Imagine my disappointment when I discovered that signs on Stanford’s campus pointing to their “Enchanted Broccoli Forest” and “Narnia”—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

Sociological ImagesSchools’ Selective Screening

Originally Posted at TSP Discoveries

In a scene familiar to today’s teachers, several students in the classroom are glued to their screens: one is posting to social media, one is playing a computer game, and another is hacking their way past the school’s firewall with skills they perfected from years on the Internet. Are these students wasting class time or honing the skills that will make them a future tech millionaire?

Photo Credit: Ben+Sam, Flickr CC

Recent research in American Journal of Sociology from Matthew Rafalow finds that teachers answer that question differently based on the social class and race makeup of the school. Schools that serve primarily white, more privileged students see “digital play” such as video games, social media, and website or video production as building digital competencies that are central to success, while schools that serve larger Latino or Asian populations view digital play as irrelevant or a distraction from learning.

Based on observations of three technology-rich Bay Area middle schools, Rafalow examined whether the skills students develop through digital play are considered cultural capital — skills, habits, and dispositions that can be traded for success in school and work. Although digital play can lead to skills like finding information online, communicating with others, and producing digital media, classed and raced stereotypes about educational needs and future work prospects affect whether teachers recognize those skills in their students. In other words, Rafalow examined whether teachers reward, ignore, or punish students for digital play in the classroom.

Rafalow found three distinct approaches across the schools. At the first school — a public middle school that largely serves middle-class Asian students — teachers viewed digital play as threatening to their traditional educational practices because it distracted students from “real” learning. Further, teachers believed students comfortable with digital skills could hack standardized tests that had been given electronically.

Photo Credit: US Department of Education, Flickr CC

At the second school — a public middle school that largely serves working-class Latino students — teachers discounted any skills that students brought into the classroom through their years of digital play. Instead, teachers thought introducing their students to website design and programming was a more important part of preparing them for 21st century working-class jobs.

In contrast, at the third school — a private, largely white middle school — teachers praised skills students developed through digital play as crucial to job success and built a curriculum that further encouraged expression and experimentation online.

The ways teachers in this study approached digital play provide a clear example of how raced and classed expectations for students’ futures determine the range of appropriate classroom behavior.

Jean Marie Maier is a graduate student in sociology at the University of Minnesota. She completed the Cultural Studies of Sport in Education MA program at the University of California, Berkeley, and looks forward to continuing research on the intersections of education, gender, and sport. Jean Marie has also worked as a Fulbright English Teaching Assistant in Gumi, South Korea and as a research intern at the American Association of University Women. She holds a BA in Political Science from Davidson College.

(View original at https://thesocietypages.org/socimages)

CryptogramSecurity Vulnerability in Smart Electric Outlets

A security vulnerability in Belkin's Wemo Insight "smartplugs" allows hackers to not only take over the plug, but use it as a jumping-off point to attack everything else on the network.

From the Register:

The bug underscores the primary risk posed by IoT devices and connected appliances. Because they are commonly built by bolting on network connectivity to existing appliances, many IoT devices have little in the way of built-in network security.

Even when security measures are added to the devices, the third-party hardware used to make the appliances "smart" can itself contain security flaws or bad configurations that leave the device vulnerable.

"IoT devices are frequently overlooked from a security perspective; this may be because many are used for seemingly innocuous purposes such as simple home automation," the McAfee researchers wrote.

"However, these devices run operating systems and require just as much protection as desktop computers."

I'll bet you anything that the plug cannot be patched, and that the vulnerability will remain until people throw them away.

Boing Boing post. McAfee's original security bulletin.

Worse Than FailureCodeSOD: Padding Your Time

Today will be a simple one, and it’s arguably low-hanging fruit, because once again, it’s date handling code. But it’s not handling dates where it falls down. It falls down on something much more advanced: conditionals. Supplied by “_ek1n”.

if ($min == 0) {
    if ($hours == 12) {
        $hours = 12;
        $min   = '00';
    } else {
        $hours = $hours;
        $min   = '00';
    }
}

My favorite part is the type-conversion/padding: turning 0 into '00', especially the fact that the same padding doesn’t happen if $min is 1, or 3, or anything else less than 10. Or, it probably does happen, and it probably happens in a general fashion which would also pad out the 0. Or maybe it doesn’t, and TRWTF is that their padding method can’t zero-pad a zero.

The most likely situation, though, is that this is just a brain fart.
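For what it’s worth, a general zero-pad that also handles the 0 case is a one-liner in most languages. A minimal sketch in Python (the original snippet is PHP, so this is only an illustration of the idea, not a drop-in fix):

# zero-pad a minutes (or hours) value to two digits, including 0 -> '00'
def pad2(value):
    return str(int(value)).zfill(2)

print(pad2(0), pad2(3), pad2(12))   # 00 03 12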

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianArturo Borrero González: Distributing static routes with DHCP

Networking

This week I had to deal with a setup in which I needed to distribute additional static network routes using DHCP.

The setup is easy but there are some caveats to take into account. Also, DHCP clients might not behave as one would expect.

The starting situation was a working DHCP client/server deployment. Some standard virtual machines would request their network setup over the network. Nothing new. The DHCP server is dnsmasq, and the daemon is running under Openstack control, but this has nothing to do with the DHCP problem itself.

By default, it seems dnsmasq sends clients the Routers (code 3) option, which usually contains the gateway for clients in the subnet to use. My situation required distributing one additional static route for another subnet. My idea was for DHCP clients to end up with this simple routing table:

user@dhcpclient:~$ ip r
default via 10.0.0.1 dev eth0 
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.100 
172.16.0.0/21 via 10.0.0.253 dev eth0 <--- extra static route

To distribute this extra static route, you only need to edit the dnsmasq config file and add a line like this:

dhcp-option=option:classless-static-route,172.16.0.0/21,10.0.0.253

For my initial tests of this config I was simply requesting a lease refresh from the DHCP client. This got my new static route installed, but in the case of a reboot, the DHCP client would not get the default route. The different behaviour is documented in dhclient-script(8).

To try something similar to a reboot situation, I had to use this command:

user@dhcpclient:~$ sudo ifup --force eth0
Internet Systems Consortium DHCP Client 4.3.1
Copyright 2004-2014 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 10.0.0.1
RTNETLINK answers: File exists
bound to 10.0.0.100 -- renewal in 20284 seconds.

Anyway, this was really surprising at first, and led me to debug DHCP packets using dhcpdump:

  TIME: 2018-09-11 18:06:03.496
    IP: 10.0.0.1 (xx:xx:xx:xx:xx:xx) > 10.0.0.100 (xx:xx:xx:xx:xx:xx)
    OP: 2 (BOOTPREPLY)
 HTYPE: 1 (Ethernet)
  HLEN: 6
  HOPS: 0
   XID: xxxxxxxx
  SECS: 8
 FLAGS: 0
CIADDR: 0.0.0.0
YIADDR: 10.0.0.100
SIADDR: xx.xx.xx.x
GIADDR: 0.0.0.0
CHADDR: xx:xx:xx:xx:xx:xx:00:00:00:00:00:00:00:00:00:00
OPTION:  53 (  1) DHCP message type         2 (DHCPOFFER)
OPTION:  54 (  4) Server identifier         10.0.0.1
OPTION:  51 (  4) IP address leasetime      43200 (12h)
OPTION:  58 (  4) T1                        21600 (6h)
OPTION:  59 (  4) T2                        37800 (10h30m)
OPTION:   1 (  4) Subnet mask               255.255.255.0
OPTION:  28 (  4) Broadcast address         10.0.0.255
OPTION:  15 ( 13) Domainname                xxxxxxxx
OPTION:  12 ( 21) Host name                 xxxxxxxx
OPTION:   3 (  4) Routers                   10.0.0.1
OPTION: 121 (  8) Classless Static Route    xxxxxxxxxxxxxx .....D..                 
[...]
---------------------------------------------------------------------------

(you can use this handy command on both the server and the client side)

So, the DHCP server was sending both the Routers (code 3) and the Classless Static Route (code 121) options to the clients. Why, then, would the client fail to install both routes?

I obtained some help from folks on IRC and they pointed me towards RFC3442:

DHCP Client Behavior
[...]
   If the DHCP server returns both a Classless Static Routes option and
   a Router option, the DHCP client MUST ignore the Router option.

So, clients are supposed to ignore the Routers (code 3) option if they get an additional static route. This is very counter-intuitive, but can easily be worked around by just distributing the default gateway route as another classless static route:

dhcp-option=option:classless-static-route,0.0.0.0/0,10.0.0.1,172.16.0.0/21,10.0.0.253
#                                         ^^ default route   ^^ extra static route 
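For the curious, the hex blob that dhcpdump shows for option 121 above is just the RFC 3442 encoding: for each route, one byte of prefix length, then only the significant octets of the destination, then the four-byte router address. A small illustrative Python sketch of that encoding for the two routes in the workaround (my own, purely to show the wire format; dnsmasq does all of this for you):

# RFC 3442 classless static route option (121) payload
import ipaddress

def encode_classless_routes(routes):
    data = bytearray()
    for destination, router in routes:
        net = ipaddress.ip_network(destination)
        significant = (net.prefixlen + 7) // 8       # destination octets actually needed
        data.append(net.prefixlen)                   # prefix length
        data += net.network_address.packed[:significant]
        data += ipaddress.ip_address(router).packed  # next hop
    return bytes(data)

payload = encode_classless_routes([
    ("0.0.0.0/0", "10.0.0.1"),        # default route
    ("172.16.0.0/21", "10.0.0.253"),  # extra static route
])
print(payload.hex())                  # 000a00000115ac10000a0000fd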

Obviously, this was the first time in my career dealing with this setup and situation. My conclusion is that even old-enough protocols like DHCP can sometimes behave in a counter-intuitive way. Reading RFCs is not always fun, but it can help you understand what’s going on.

You can read the original issue in Wikimedia Foundation’s Phabricator ticket T202636, including all the back-and-forth work I did. Yes, it is open to the public ;-)

Planet DebianIustin Pop: o-tour 2018 (Halbmarathon)

My first race redo at the same distance/ascent meters. Let’s see how it went… 45.2km, 1’773m altitude gain (officially: 45km, 1’800m). This was the Halbmarathon distance, compared to the full Marathon one, which is 86km/3’000m.

Pre-race

I registered for this race right after my previous one, and despite it having much more altitude meters, I was looking forward to it.

That is, until the week of the race. The entire week was just off. Work life, personal life, everything seemed out of sync. Including a half-sleepless night on Wednesday, which ruined my sleep schedule for the rest of the week and also my plans for the light maintenance rides before the event. And which also made me feel half-sick due to lack of sleep.

I prepared for my ride on Saturday (bike check, tyre pressure check, load bike on car), and I went to bed—late, again difficult to fall asleep—not being sure I’ll actually go to the race. I had a difficult night sleep, but actually I managed to wake up on the alarm. OK, chances looking somewhat better for getting to the race. Total sleep: 5 hours. Ouch!

So I get in the car—about 15 minutes later than planned—and start, only to find a road closure on the most direct route to the highway, and police people directing the traffic—at around 07:10—on the “new” route, resulting in yet another detour, and me getting stressed enough about which way to go, and not paying attention to my exact speed on a downhill, that I got flashed by a speed camera. Sigh…

The rest of the drive was uneventful, I reach Alpnach, I park, I get to the start/finish location, get my number, and finally get to the start line with two minutes (!!) to spare. The most “just-in-time” I ever was at a race, as I’m usually way early. By this time I was even in a later starting block since mine was already setup and would have been difficult to reach.

Oh, and because I was so late, and because this is a smaller race (number of participants, setup, etc.), I didn’t find a place to fill my water bottle. And this, for the one time I didn’t fill it in advance. Fun!

The race

So given all this, I set low expectations for the race, and decided to consider it a simple Sunday ride. Will take it easy on the initial 12.5km, 1’150m climb, and then will see how it goes. There was a food station two thirds in the climb, so I said I’ll hopefully not get too dehydrated until then.

The climb starts relaxed—I was among the last people starting—and 15 minutes in, my friend the lower back says “ah, you’re climbing again, remember I’m here too”, which was way too early. So, I said to myself, nothing to lose, let’s just switch to standing every time my back gets tired, and stand until my legs get tired, then switch again.

The climb here was on pavement, so standing was pretty easy. And, to my surprise, this worked quite well: while standing I also went much faster (by much, I mean probably ~2-3km/h) than sitting so I was advancing in the long stretch of people going up the mountain, and my back was very relieved every time I switched.

So, up and down and up and down in the saddle, and up and up and up on the mountain, until I get to the food station. Water! I quickly drink an entire bottle (750ml!!), refill, and go on.

After the food station, the route changed to gravel, and this made pedalling while standing more difficult, due to less grip and slipping if you’re not careful. I tried the sit/stand/sit routine, but it was getting more difficult, so I went on, slower, until at one point I had to stop. I was by now in the sun, hot, and tired. And annoyed at the low volume out of the water bottle, so I opened it, and drank just like from a glass, and emptied it quickly - yet again! I felt much better, and restarted pedalling, eager to get to the top.

The last part of the climb is quite steep and more or less on a trail, so here I was pushing the bike, but since I didn’t have any goals did not feel guilty about it. Up and up, and finally I reach the top (altitude: 1’633m, elevation gained: 1’148m out of ~1’800m), and I can breathe easier knowing that the most difficult part is over.

From here, it was finally a good race. The o-tour route is much more beautiful than I remembered, but also more technically difficult, to the point of being quite annoying: it runs for long stretches on very uneven artificial paths, as if someone had built a paved road but aimed for the most uneven surface possible, all rocks at an angle, instead of an even surface. For hikers this is excellent, especially in wet conditions, but for trying to move a bike forward, or even more, forward uphill, it is annoying. There were stretches of ~5% grade where I was pushing the bike, due to how annoying biking on that surface was.

The route also has nice single track sections, some easily navigable, some not, at least for me, and some that I had to carry the bike. Or even carry the bike on my shoulder while I was climbing over roots. A very nice thing, and sadly uncommon in this series of races.

One other fun aspect of the race was the mud. Especially in the forests, there was enough water left on the tracks that one got splashed quite often, and outside (where the soil doesn’t have the support of the roots), less water but quite deep mud. Deep enough that at one point, I misjudged how deep the roughly 3-meter-long mud-like section was, and I had enough speed that my front wheel got stuck in the mud, and slowly (and I imagine gracefully as well :P, of course) I went over the handlebars into the softest mud I ever landed in. Landed, as in: halfway up my elbows (!), hands full of mud, gloves muddy as hell, legs down to the ankle in mud so shoes also muddy, and me finding the situation the funniest moment of the race. The guy behind me asked if everything was alright, and I almost couldn’t answer due to laughing out loud.

Back to serious stuff now. The rest of the “meters of climbing left”, about 600+ meters, were supposed to be distributed in about 4 sections, all about the same profile except the first one which was supposed to be a bit longer and flatter. At least, that’s what the official map was showing, in a somewhat stylised way. And that’s what I based my effort dosage on.

Of course, real life is not stylised, and there were 2 small climbs (as expected), and then a long and slow climb (definitely unexpected). I managed to stay on the bike, but the unexpected long climb—two kilometres—exhausted my reserves, despite being a relatively small grade (~5%, gained ~100m). I really was not planning for it, and I paid for that. Then a bit of downhill, then another short climb, another downhill—on road, 60km/h!—and then another medium-sized climb: 1km long, gaining 60m. Then a slow and long descent, a 700m/50m climb, a descent, and another climb, short but more difficult: 900m/80m (~9%). By this time, I was spent, and was really looking forward to the final descent, which I remembered was half pavement, half very nice single-track. And indeed it was superb, after all that climbing. Yay!

And then, reaching basically the end of the race (a few kilometres left), I remembered something else: this race has a climb at the end! This is where the missing climbing meters were hiding!

So, after eight kilometres of fun, 1.5km of easy climbing to gain 80m of ascent. Really trivial, a regular commute almost, but for me at this stage, it was painful and the most annoying thing ever…

And then, reaching the final two kilometres of light descent on paved road, and finishing the race. Yay!

Overall, given the way the week went, this race was much easier than I hoped, and quite enjoyable. Why? No idea. I’ll just take the bonus points and not complain ☺

Real champions

About two minutes after I finished, I heard the speaker saying that the second-placed woman in the long distance was nearing, and that it was Esther Süss! I’ve never seen her in person as far as I know, nor any of the other leaders in these races, since usually the finishing times are far apart. In this case, I apparently finished between the first and second places in the women’s race (there was a 3m05s difference between them). This also explained what all those photographers with telephotos at the finish line were waiting for, and why they didn’t take my picture :)))))) In any case, I was very happy to see her in person, since I’m very impressed that at 44 years old, she’s still competing and most of the time winning against other women, 10 or even 20 years younger than her. Gives a bit of hope for older people like me. Of course minus being on the thinner side (unlike me), and actually liking long climbs (unlike me), and winning (definitely unlike me). Not even bringing up the world championships gold medals, OK?

Race analysis

Hydration, hydration…

As I mentioned above, I drank a lot at the beginning of the race. I continued to drink, and by the 2 hour mark I was 3 full bottles in; at 2:40 I finished the fourth bottle.

Four bottles is 3 litres of liquid, which is way more than my usual consumption since I stopped carrying my hydration pack. In the Eiger bike challenge, done in much hotter temperatures and for longer, I think I drank about the same or only slightly more (not sure exactly). Then the temperatures: 19° average and 33° max over 6½ hours there, versus 16.2° average and 20° max over ~4 hours this time. And this time, with 4L in 4 hours, I didn’t need to run to the bathroom as I finished (at all).

The only conclusion I can make is that I sweat much more than I think, and that I must more actively drink water. I don’t want to go back to hydration pack in a race (definitely yes for more relaxed rides), so I need to use all the food stops to drink and refill.

General fitness vs. leg muscles

I know my core is weak, but it’s getting hilarious that 15 minutes into the climbing, I start getting signals. This does not happen on the flat or indoors for at least 2-2½ hours, so the conclusion is that I need to get fitter (core) and also do more real outdoor climbing training—just long (slower) climbs.

The sit-stand-sit routine was very useful, but it did result in even my hands getting tired from having to move and stabilise the bike. So again, need to get fitter overall and do more cross-training.

That is, as if I didn’t know it already ☹

Numbers

This is now beyond the subjective part, so let’s see what the numbers look like:

  • 2016:
    • time: overall 3h49m34.4s, start-Langis 2h44m31s, Langis-finish: 1h05m02s.
    • age category: overall 70/77, start-Langis ranking: 70, Langis-finish: 72.
    • overall gender ranking: overall 251/282, start-Langis: 250, Langis-finish: 255.
  • 2018:
    • time: 3h53m43.4s, start-Langis: 2h50m11s, Langis-finish: 1h03m31s.
    • age category 70/84, start-Langis: 71, Langis-finish: 70.
    • overall gender ranking: overall 191/220, start-Langis: 191, Langis-finish: 189.

The first conclusion is that I’ve done infinitesimally better in the overall rankings: 252/282=0.893, 191/220=0.868, so better but only trivially so, especially given the large decline in participants on the short distance (the long one had the same). I cannot compare age category, because ☺

The second interesting titbit is that in 2016, I was relatively faster on the climb plus first part of the high-altitude route, and relatively slower on the second half plus descent, both in the age category and the overall category. In 2018, this reversed, and I gained places on the descent. Time comparison, ~6 minutes slower in the first half, 1m30s faster on the second one.

But I find my numbers so close that I’m surprised I neither significantly improved nor slowed down in two years. Yes, I’m not consistently training, but still… I kind of expect some larger difference, one way or another. Strava also says that I beat my 2016 numbers on 7 segments, but only got second place to that on 14 others, so again a wash.

So, weight gain aside, it seems nothing much has changed. I need to improve my consistency in training 10× probably to see a real difference. On the other hand, maybe this result is quite good, given my much less consistent training than in 2016 — ¯\_(ツ)_/¯.

Equipment-wise, I had a different bike now (full suspension vs. hardtail), and—compared to the previous race two weeks ago, at least—I had the tyre pressure quite well dialled in for this event. So I was able to go fast, and indeed overtake a couple of people on the flat/light descents, and more importantly, was not overtaken by other people on the long descent. My brakes were much better as well, so I was a bit more confident, but the front brake started squeaking again when it got hot, so I need to improve this even more. But again, not even the changed equipment made much of a difference ☺

I’ll finish here with an image of my “heroic efforts”:

Not very proud of this…

I’m very surprised that they put a photographer at the top of a climb, except maybe to motivate people to pedal up the next year… I’ll try to remember this ☺

Planet DebianNorbert Preining: TensorFlow on Debian/sid (including Keras via R)

I have been struggling with getting TensorFlow running on Debian/sid for quite some time. The main problem is that the CUDA libraries installed by Debian are CUDA 9.1 based, while the precompiled pip-installable TensorFlow packages require CUDA 9.0, which resulted in an unusable installation. But I finally got around to it and found all the pieces.

Step 1: Install CUDA 9.0

The best way I found was going to the CUDA download page, selecting Linux, then x86_64, then Ubuntu, then 17.04, and finally deb (network). In the text that appears, click on the download button to obtain (currently) cuda-repo-ubuntu1704_9.0.176-1_amd64.deb.

After installing this package as root with

dpkg -i cuda-repo-ubuntu1704_9.0.176-1_amd64.deb

the nvidia repository signing key needs to be added

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/7fa2af80.pub

and finally install the CUDA 9.0 libraries (not all of cuda-9-0 because this would create problems with the normally installed nvidia libraries):

apt-get update
apt-get install cuda-libraries-9-0

This will install lots of libs into /usr/local/cuda-9.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-9-0.conf.

Step 2: Install CUDA 9.0 CuDNN

One difficult to satisfy dependency are the CuDNN libraries. In our case we need the version 7 library for CUDA 9.0. To download these files one needs to have a NVIDIA developer account, which is quick and painless. After that go to the CuDNN page where one needs to select Download for CUDA 9.0 and then cuDNN v7.2.1 Runtime Library for Ubuntu 16.04 (Deb).

This will download a file libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb.

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf; sess = tf.Session() ; print(tf.__version__)"
2018-09-11 16:30:27.075339: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-09-11 16:30:27.143265: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-09-11 16:30:27.143671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: 
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.4175
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.85GiB
2018-09-11 16:30:27.143702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-09-11 16:30:27.316389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-11 16:30:27.316432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0 
2018-09-11 16:30:27.316439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N 
2018-09-11 16:30:27.316595: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3578 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
1.10.1         
$

Addendum: Keras and R

With the above settled, the installation of Keras can be done via

apt-get install python3-keras

and this should pick up the TensorFlow backend automatically.
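To double-check that Keras really uses the TensorFlow backend and can see the GPU, a one-liner in the same style as Step 4 should do (my own sanity check; keras.backend.backend() and tf.test.is_gpu_available() are the calls I would expect to exist with these versions — it should print tensorflow and True, along with the usual CUDA device log lines):

$ python3 -c "import keras, tensorflow as tf; print(keras.backend.backend(), tf.test.is_gpu_available())"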

For R there is a Keras library that can be installed with

install.packages('keras')

on the R command line (as root).

After that running a simple MNIST code example should use your GPU from R (taken from Deep Learning with R from Manning Publications):

library(keras)
mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y
network <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")
network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)
train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255
train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)
network %>% fit(train_images, train_labels, epochs = 5, batch_size = 128)
metrics <- network %>% evaluate(test_images, test_labels)
metrics

,

Krebs on SecurityPatch Tuesday, September 2018 Edition

Adobe and Microsoft today each released patches to fix serious security holes in their software. Adobe pushed out a new version of its beleaguered Flash Player browser plugin. Redmond issued updates to address at least 61 distinct vulnerabilities in Microsoft Windows and related programs, including several flaws that were publicly detailed prior to today and one “zero-day” bug in Windows that is already being actively exploited by attackers.

As per usual, the bulk of the fixes from Microsoft tackle security weaknesses in the company’s Web browsers, Internet Explorer and Edge. Patches also are available for Windows, Office, Sharepoint, and the .NET Framework, among other components.

Of the 61 bugs fixed in this patch batch, 17 earned Microsoft’s “critical” rating, meaning malware or miscreants could use them to break into Windows computers with little or no help from users.

The zero-day flaw, CVE-2018-8440, affects Microsoft operating systems from Windows 7 through Windows 10 and allows a program launched by a restricted Windows user to gain more powerful administrative access on the system. It was first publicized August 27 in a (now deleted) Twitter post that linked users to proof-of-concept code hosted on Github. Since then, security experts have spotted versions of the code being used in active attacks.

According to security firm Ivanti, prior to today bad guys got advance notice about three vulnerabilities in Windows targeted by these patches. The first, CVE-2018-8457, is a critical memory corruption issue that could be exploited through a malicious Web site or Office file. CVE-2018-8475 is a critical bug in most supported versions of Windows that can be used for nasty purposes by getting a user to view a specially crafted image file. The third previously disclosed flaw, CVE-2018-8409, is a somewhat less severe “denial-of-service” vulnerability.

Standard advice about Windows patches: Not infrequently, Redmond ships updates that end up causing stability issues for some users, and it doesn’t hurt to wait a day or two before seeing if any major problems are reported with new updates before installing them. Windows 10 likes to install patches and reboot your computer on its own schedule, and Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

It’s a good idea to get in the habit of backing up your computer before applying monthly updates from Microsoft. Windows has some built-in tools that can help recover from bad patches, but restoring the system to a backup image taken just before installing updates is often much less hassle and an added peace of mind while you’re sitting there praying for the machine to reboot successfully after patching.

The sole non-Microsoft update pushed by Redmond today fixes a single vulnerability in Adobe Flash Player, CVE-2018-15967. Curiously, Adobe lists the severity of this information disclosure bug as “important,” while Microsoft considers it a more dangerous “critical” flaw.

Regardless, if you have Adobe Flash Player installed, it’s time to either update your browser and/or operating system, or else disable this problematic and insecure plugin. Windows Update should install the Flash Patch for IE/Edge users; the newest version of Google Chrome, which bundles Flash but prompts users to run Flash elements on a Web page by default, also includes the fix (although a complete Chrome shutdown and restart may be necessary before the fix is in).

Loyal readers here know full well where I stand on Flash: This is a dangerous, oft-exploited program that needs to be relegated to the dustbin of Internet history (for its part, Adobe has said it plans to retire Flash Player in 2020). Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Administrators have the ability to change Flash Player’s behavior when running Internet Explorer on Windows 7 by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

As always, please feel free to leave a note in the comments below if you experience any issues installing these fixes. Happy patching!

Planet DebianJonathan McDowell: PSA: the.earth.li ceasing Debian mirror service

This is a public service announcement that the.earth.li (the machine that hosts this blog) will cease service as a Debian mirror on 1st February 2019 at the latest.

It has already been removed from the official list of Debian mirrors. Please update your sources.list to point to an alternative sooner rather than later.

The removal has been driven by a number of factors:

  • This mirror was originally setup when I was running Black Cat Networks, and a local mirror was generally useful to us. It’s 11+ years since Black Cat was sold, and 7+ since it moved away from that network.
  • the.earth.li currently lives with Bytemark, who already have an official secondary mirror. It does not add any useful resilience to the mirror network.
  • For a long time I’ve been unable to mirror all release architectures due to disk space limitations; I think such mirrors are of limited usefulness unless located in locations with dubious connectivity to alternative full mirrors.
  • Bytemark have been acquired by IOMart and I’m uncertain as to whether my machine will remain there long term - the acquisition announcement focuses on their cloud service rather than mentioning physical server provision. Disk space requirements are one of my major costs and the Debian mirror makes up ⅔ of my current disk usage. Dropping it will make moving host easier for me, should it prove necessary.

I can’t find an exact record of when I started running a mirror, but it was certainly before April 2005. 13 years doesn’t seem like a bad length of time to have been providing the service. Personally I’ve moved to deb.debian.org, but if the network location of the mirror is the reason you chose it then mirror.bytemark.co.uk should be a good option.

CryptogramUsing Hacked IoT Devices to Disrupt the Power Grid

This is really interesting research: "BlackIoT: IoT Botnet of High Wattage Devices Can Disrupt the Power Grid":

Abstract: We demonstrate that an Internet of Things (IoT) botnet of high wattage devices -- such as air conditioners and heaters -- gives a unique ability to adversaries to launch large-scale coordinated attacks on the power grid. In particular, we reveal a new class of potential attacks on power grids called the Manipulation of demand via IoT (MadIoT) attacks that can leverage such a botnet in order to manipulate the power demand in the grid. We study five variations of the MadIoT attacks and evaluate their effectiveness via state-of-the-art simulators on real-world power grid models. These simulation results demonstrate that the MadIoT attacks can result in local power outages and in the worst cases, large-scale blackouts. Moreover, we show that these attacks can rather be used to increase the operating cost of the grid to benefit a few utilities in the electricity market. This work sheds light upon the interdependency between the vulnerability of the IoT and that of the other networks such as the power grid whose security requires attention from both the systems security and power engineering communities.

I have been collecting examples of surprising vulnerabilities that result when we connect things to each other. This is a good example of that.

Wired article.

Planet DebianRussell Coker: Thinkpad X1 Carbon Gen 6

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when battery is full – maybe due to my history of buying refurbished laptops I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying; there are 2 devices for mouse-type input, so I need to configure Xorg to only read from the Trackpoint.

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resume from suspend requires holding down power button. I’m not sure if it’s hardware or software issue. But suspend on lid close works correctly and also suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down, that said I got a thermal warning during the Debian install process which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

Worse Than FailureCodeSOD: Wear a Dunder Cap

In the Python community, one buzzword you’ll find thrown around is whether or not an approach is “pythonic”. It’s a flexible term, and something you can just throw out in code reviews, even if you’ve never written a line of Python in your life: “Is that Pythonic?”

The general rubric for what truly is “pythonic” is generally code that is simple and code that operates explicitly. There shouldn’t be any “magic”. But Python doesn’t force you to write “pythonic” code, and it provides loads of tools like decorators and metaclasses that let you get as complex and implicit as you like.

One bit of magic is the “dunder” methods, which are Python’s vague approach to operator overloading. If you want your class to support the [] operator, implement __getitem__. If you want your class to support the + operator, implement __add__. If you want your class to support access to any arbitrary property, implement __getattr__.

Yes, __getattr__ allows you to execute code on any property access, so a simple statement like this…

if self.connected:
  print(self.coffee)

…could involve twenty database calls and invoke a web service connected to a live coffee machine and literally make you a cup of coffee. At no point does the invoker ever see that they’ve called a method, it just happens by “magic”. And, for extra fun, there’s also a global method getattr, lets you wrap property access with a default, e.g., getattr(some_object, "property", False) will return the value of some_object.property, if it exists, or False if it doesn’t.

Whew, that’s a lot of Python internals, but that brings us to today’s anonymous submission.

Someone wrote some classes which might contain other classes, and wanted them to delegate property access to those, which means there are a number of classes containing this method:

def __getattr__(self, item):
    return self.nested.item

So, for example, you could call some_object.log_level, and this overridden __getattr__ would walk through the whole chain of objects to find where the log_level exists.

That’s fine, if the chain doesn’t contain any loops, where the same object appears in the chain multiple times. If, on the other hand, it does, you get a RecursionError when the loop finally exceeds the system recursion limit.

Our submitter found this a bit of a problem. When the access to log_level failed, it might take a long time before hitting the RecursionError- which, by the way, isn’t triggered by a stack overflow. Python tries to throw a RecursionError before you overflow the stack.

You can, but very much shouldn’t, control that value. But our submitter didn’t want to wait for the thousands of calls it might take to get the RecursionError, so they wrapped their accesses to log_level in code that would tweak the recursion limit:

    # Protect against origin having overwritten __getattr__
    old_recursion_limit = sys.getrecursionlimit()
    n = 2
    while True:
        try:
            sys.setrecursionlimit(n)
        except RecursionError as e:
            n += 1
        break
    try:
        log_level = getattr(origin, "log_level", None)
    except RecursionError as e:
        object.__setattr__(origin, "log_level", None)
        log_level = getattr(origin, "log_level", None)
    sys.setrecursionlimit(old_recursion_limit)

So, first, we save the current recursion limit. Then, we try setting the recursion limit to two. If the current recursion depth is greater than two, then trying to change the recursion limit throws a RecursionError. So catch that, and then try again with a recursion limit one higher.

Once we’ve set the recursion limit to just above the current recursion depth, we then try to fetch the log level. If we fail, we just default to None, and finally we restore the original recursion limit.

This is an impressive amount of effort put into constructing a footgun. From the use of __getattr__ in the first place, to the misplaced effort of short-circuiting the recursion instead of preventing cycles, and finally to the abuse of setrecursionlimit and error handling to build… this. Absolutely nothing here should have happened. None of it.
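For contrast, here is a rough sketch of the “prevent cycles in the first place” approach (my own hypothetical code, not the submitter’s eventual fix): the delegation walks the chain iteratively, remembers which objects it has already visited, and ends in a plain AttributeError instead of unbounded recursion:

class Delegating:
    def __init__(self, nested=None):
        self.nested = nested

    def __getattr__(self, item):
        # Walk the chain of nested objects iteratively, remembering which
        # objects we have already seen so a cycle simply ends the search.
        seen = set()
        obj = self.__dict__.get("nested")
        while obj is not None and id(obj) not in seen:
            seen.add(id(obj))
            if item in getattr(obj, "__dict__", {}):
                return obj.__dict__[item]
            obj = getattr(obj, "__dict__", {}).get("nested")
        raise AttributeError(item)

inner = Delegating()
inner.log_level = "DEBUG"
outer = Delegating(nested=inner)
inner.nested = outer  # a cycle, on purpose

print(outer.log_level)                   # "DEBUG", found on inner
print(getattr(outer, "missing", None))   # None, and no RecursionError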

Our submitter has confessed their sins. As penance, they’ll say twenty Hail Guidos, and fix this code. Remember, you can’t be absolved of your sins if you just keep on sinning.


Planet Linux AustraliaJeremy Visser: Floppy drive music

Some time in 2013 I set up a rig to play music with a set of floppy drives. At linux.conf.au 2015 in Auckland I gave a brief lightning talk about this, and here is a set of photos and some demo music to accompany it. The hardware consists of six 3.5″ floppy drives connected to a LeoStick (Arduino) via a custom vero board that connects the direction and step pins (18 and 20, respectively) and permanently grounds select pin A (14).

Planet Linux AustraliaJeremy Visser: One week with the Nexus 5

My ageing Motorola Milestone finally received a kick to the bucket last week when my shiny new Nexus 5 phone arrived. Though fantastic by 2009 standards, the Milestone could only officially run Android 2.2, and 2.3 with the help of an unofficial CyanogenMod port. Having been end-of-lifed for some time now, and barely being able to render a complex web page without running out of memory, it was time for me to move on.

Planet Linux AustraliaJeremy Visser: Configuring Windows for stable IPv6 addressing

By default, Windows will use randomised IPv6 addresses, rather than using stable EUI-64 addresses derived from the MAC address. This is great for privacy, but not so great for servers that need a stable address. If you run an Exchange mail server, or need to be able to access the server remotely, you will want a stable IPv6 address assigned. You may think this is possible simply by editing the NIC and setting a manual IPv6 address.

Planet Linux AustraliaJeremy Visser: iPads as in-flight entertainment

I’m writing this whilst sitting on a Qantas flight from Perth to Sydney, heading home after attending the fantastic linux.conf.au 2014. The plane is a Boeing 767, and unlike most flights I have been on in the last decade, this one has no in-flight entertainment system built into the backs of seats. Instead, every passenger is issued with an Apple iPad (located in the back seat pocket), fitted with what appears to be a fairly robust leather jacket emblazoned with the words “SECURITY DEVICE ATTACHED” (presumably to discourage theft).

Planet Linux AustraliaJeremy Visser: SPA525G with ASA 9.1.x

At work, we have a staff member who has a Cisco SPA525G phone at his home that has built-in AnyConnect VPN support. Over the weekend, I updated our Cisco ASA firewall (which sits in front of our UC500 phone system) from version 8.4.7 to 9.1.3 and the phone broke with the odd error “Failed to obtain WebVPN cookie”. Turns out the fix was very simple. Just update the firmware on the SPA525G to the latest version.

Planet Linux AustraliaJeremy Visser: Restore ASA 5500 configuration from blank slate

The Cisco ASA 5500 series (e.g. 5505, 5510) firewalls have a fairly nice GUI interface called ASDM. It can sometimes be a pain, but it could be a lot worse than it is. One of the nice things ASDM does is let you save a .zip file backup of your entire ASA configuration. It includes your startup-configuration, VPN secrets, AnyConnect image bundles, and all those other little niceties. But when you set up an ASA from scratch to restore from said .

Planet Linux AustraliaJeremy Visser: You reap what you sow

So the ABC has broken iview for all non–Chrome Linux users. How so? Because the ABC moved iview to use a streaming format supported only by the latest versions of Adobe Flash (e.g. version 11.7, which is available on Windows and OS X), but Adobe have ceased Linux support for Flash as of version 11.2 (for reasons I don’t yet understand, some users report that the older Flash 10.3 continues to work with iview).

Planet Linux AustraliaJeremy Visser: We do not tolerate bugs; they are of the devil

I was just reading an article entitled “Nine traits of the veteran network admin”, and this point really struck a chord with me: “Veteran network admin trait No. 7: We do not tolerate bugs; they are of the devil.” On occasion, conventional troubleshooting or building new networks runs into an unexplainable blocking issue. After poring over configurations, sketching out connections, routes, and forwarding tables, and running debugs, one is brought no closer to solving the problem.

Planet Linux AustraliaJeremy Visser: ABC iview and the ‘Australia tax’

Unless you have been living in a cave, it is probable that you heard about a federal parliamentary inquiry into IT pricing (somewhat aptly entitled “At what cost? — IT pricing and the Australia tax”) which reported that, amongst other things, online geo-blocking can as much as double pricing for IT products in what is blatant price discrimination. Not only do Australians pay, on average, 42% more than US’ians for Adobe products, and 66% more for Microsoft products, but music (such as the iTunes Store), video games, and e-books (e.

Planet Linux AustraliaJeremy Visser: Turning out the lights

We put an unbelievable amount of data in the hands of third parties. In plain text. Traceable right back to you with minimal effort to do so. For me, giving my data to the likes of Google, Apple, Microsoft, and the rest of the crowd, has always been a tradeoff: convenience vs. privacy. Sometimes, privacy wins. Most of the time, convenience wins. My iPhone reports in to Apple. My Android phone reports in to Google.

Sam VargheseAustralia: 24m people, but not one decent rugby commentator

Australia is one of the better rugby nations on the face of the earth, with two World Cup wins to show for its efforts in the game, the same as South Africa and just one behind New Zealand.

But despite its producing a number of truly great players – Nick Farr-Jones, David Campese and Mark Ella are three who come to mind – the country still lacks a decent rugby commentator and has made do with Gordon Bray for a long, long time.

Surprisingly, Bray has been commentating for more than 40 years, despite the fact that there are obvious deficiencies in his performance. His commentary sounds more like a coaching class for Australia, and a list of instances where he feels the rub of the green has gone against the Australians. Whinging is the word they use in Australia to describe his complaining.

Bray has issues with pronouncing names that are not Western. Some South African and Maori names, for example, truly pose a challenge to him. That he does not take the trouble to meet team officials from these players’ teams and find out how to pronounce such names correctly speaks to his lack of professionalism.

Additionally, Bray periodically comes up with truly ridiculous trivia, things which are really silly. Plus, the bias in his commentary is there for all to see; maybe he has survived so long because there is nobody better among the 24 million in the country. That is a depressing thought.

A good commentator makes listening to a game a joy. Across the ditch, in New Zealand, the TV commentator is a man named Grant Nisbett. Boy, is he the diametric opposite of Bray. Nisbett has excellent command of the language, never screws up on getting players’ names right, and is not there to conduct a coaching class for the All Blacks.

He is quick to praise good play by either team and while he, no doubt, supports New Zealand, he never lets his colours show during a game.

One cannot cite more experience as the reason why Nisbett is so much better at his job than Bray, even though he has now commentated on more than 300 Tests. Bray has been around for longer, and Wikipedia claims he has commentated on more than 400 games, including in all eight World Cups to date.

It is an indication that one can hang around for an eternity provided one has the right connections. Australia often reminds me of a famous saying by the late W. Somerset Maugham: “In the country of the blind, the one-eyed man is king.”

Planet DebianNorbert Preining: Debian/TeX Live binaries update 2018.20180907.48586-1

A new set of TeX Live binaries has been uploaded to Debian, based on the Subversion status as of 7 September (rev 48586). The aim was mostly to fix a bug in (x)dvipdfm(x) introduced by a previous upload. But besides fixing this, it also brought the new version of dvisvgm (2.5) into Debian.

The last update of the TeX Live binaries was not so long ago, but with it a bug in dvipdfmx crept in and strange things happened with xetex compilations. Upstream had already fixed this one, so I decided to upload a new set of binaries to Debian. At the same time, dvisvgm saw a version update to 2.5, which produced a few complications; to get it into Debian I first packaged the C version of xxHash (Debian QA package page).

The current sources also contain another cherry picked bug fix for dvipdfmx, but unfortunately I will have to stop now using the subversion tree as is, due to the inclusion of an intermediate luatex release I am not convinced I want to see in Debian before the proper release of TeX Live 2019. That means, from now on I have to cherry pick till the next TeX Live release.

As usual, please report problems to the Debian Bug Tracking System.

Enjoy

Krebs on SecurityIn a Few Days, Credit Freezes Will Be Fee-Free

Later this month, all of the three major consumer credit bureaus will be required to offer free credit freezes to all Americans and their dependents. Maybe you’ve been holding off freezing your credit file because your home state currently charges a fee for placing or thawing a credit freeze, or because you believe it’s just not worth the hassle. If that accurately describes your views on the matter, this post may well change your mind.

A credit freeze — also known as a “security freeze” — restricts access to your credit file, making it far more difficult for identity thieves to open new accounts in your name.

Currently, many states allow the big three bureaus — Equifax, Experian and TransUnion — to charge a fee for placing or lifting a security freeze. But thanks to a federal law enacted earlier this year, after Sept. 21, 2018 it will be free to freeze and unfreeze your credit file and those of your children or dependents throughout the United States.

KrebsOnSecurity has for many years urged readers to freeze their files with the big three bureaus, as well as with a distant fourth — Innovis — and the NCTUE, an Equifax-operated credit checking clearinghouse relied upon by most of the major mobile phone providers.

There are dozens of private companies that specialize in providing consumer credit reports and scores to specific industries, including real estate brokers, landlords, insurers, debt buyers, employers, banks, casinos and retail stores. A handy PDF produced earlier this year by the Consumer Financial Protection Bureau (CFPB) lists all of the known entities that maintain, sell or share credit data on U.S. citizens.

The CFPB’s document includes links to Web sites for 46 different consumer credit reporting entities, along with information about your legal rights to obtain data in your reports and dispute suspected inaccuracies with the companies as needed. My guess is the vast majority of Americans have never heard of most of these companies.

Via numerous front-end Web sites, each of these mini credit bureaus serves thousands or tens of thousands of people who work in the above-mentioned industries and who have the ability to pull credit and other personal data on Americans. In many cases, online access to look up data through these companies is secured by nothing more than a username and password that can be stolen or phished by cybercrooks and abused to pull privileged information on consumers.

In other cases, it’s trivial for anyone to sign up for these services. For example, how do companies that provide background screening and credit report data to landlords decide who can sign up as a landlord? Answer: Anyone can be a landlord (or pretend to be one).

SCORE ONE FOR FREEZES

The truly scary part? Access to some of these credit lookup services is supposed to be secured behind a login page, but often isn’t. Consider the service pictured below, which for $44 will let anyone look up the credit score of any American who hasn’t already frozen their credit files with the big three. Worse yet, you don’t even need to have accurate information on a target — such as their Social Security number or current address.

KrebsOnSecurity was made aware of this particular portal by Alex Holden, CEO of Milwaukee, Wisc.-based cybersecurity firm Hold Security LLC [full disclosure: This author is listed as an adviser to Hold Security, however this is and always has been a volunteer role for which I have not been compensated].

Holden’s wife Lisa is a mortgage broker, and as such she has access to a more full-featured version of the above-pictured consumer data lookup service (among others) for the purposes of helping clients determine a range of mortgage rates available. Mrs. Holden said the version of this service that she has access to will return accurate, current and complete credit file information on consumers even if one enters a made-up SSN and old address on an individual who hasn’t yet frozen their credit files with the big three.

“I’ve noticed in the past when I do a hard pull on someone’s credit report and the buyer gave me the wrong SSN or transposed some digits, not only will these services give me their credit report and full account history, it also tells you what their correct SSN is,” Mrs. Holden said.

With Mr. Holden’s permission, I gave the site pictured above an old street address for him plus a made-up SSN, and provided my credit card number to pay for the report. The document generated by that request said TransUnion and Experian were unable to look up his credit score with the information provided. However, Equifax not only provided his current credit score, it helpfully corrected the false data I entered for Holden, providing the last four digits of his real SSN and current address.

“We assume our credit report is keyed off of our SSN or something unique about ourselves,” Mrs. Holden said. “But it’s really keyed off your White Pages information, meaning anyone can get your credit report if they are in the know.”

I was pleased to find that I was unable to pull my own credit score through this exposed online service, although the site still charged me $44. The report produced simply said the consumer in question had requested that access to this information be restricted. But the real reason was simply that I’ve had my credit file frozen for years now.

Many media outlets are publishing stories this week about the one-year anniversary of the breach at Equifax that exposed the personal and financial data on more than 147 million people. But it’s important for everyone to remember that as bad as the Equifax breach was (and it was a total dumpster fire all around), most of the consumer data exposed in the breach, covering a majority of Americans, has been for sale in the cybercrime underground for many years, including access to consumer credit reports. If anything, the Equifax breach may have simply helped ID thieves refresh some of those criminal data stores.

It costs $35 worth of bitcoin through this cybercrime service to pull someone’s credit file from the three major credit bureaus. There are many services just like this one, which almost certainly abuse hacked accounts from various industries that have “legitimate” access to consumer credit reports.

THE FEE-FREE FREEZE

According to the U.S. Federal Trade Commission, when the new law takes effect on September 21, Equifax, Experian and TransUnion must each set up a webpage for requesting fraud alerts and credit freezes.

The law also provides additional ID theft protections to minors. Currently, some state laws allow you to freeze a child’s credit file, while others do not. Starting Sept. 21, no matter where you live you’ll be able to get a free credit freeze for kids under 16 years old.

Identity thieves can and often do target minors, but this type of fraud usually isn’t discovered until the affected individual tries to apply for credit for the first time, at which point it can be a long and expensive road to undo the mess. As such, I would highly recommend that readers who have children or dependents take full advantage of this offering once it’s available for free nationwide.

In addition, the law requires the big three bureaus to offer free electronic credit monitoring services to all active duty military personnel. It also changes the rules for “fraud alerts,” which currently are free but only last for 90 days. With a fraud alert on your credit file, lenders or service providers should not grant credit in your name without first contacting you to obtain your approval — by phone or whatever other method you specify when you apply for the fraud alert.

Under the new law, fraud alerts last for one year, but consumers can renew them each year. Bear in mind, however, that while lenders and service providers are supposed to seek and obtain your approval if you have a fraud alert on your file, they’re not legally required to do this.

A key unanswered question about these changes is whether the new dedicated credit bureau freeze sites will work any more reliably than the current freeze sites operated by the big three bureaus. The Web and social media are littered with consumer complaints — particularly over the past year — about the various freeze sites freezing up and returning endless error messages, or simply discouraging consumers from filing a freeze thanks to insecure Web site components.

It will be interesting to see whether these new freeze sites will try to steer consumers away from freezes and toward other in-house offerings, such as paid credit reports, credit monitoring, or “credit lock” services. All three big bureaus tout their credit lock services as an easier and faster alternative to freezes.

According to a recent post by CreditKarma.com, consumers can use these services to quickly lock or unlock access to credit inquiries, although some bureaus can take up to 48 hours. In contrast, the bureaus can take up to five business days to act on a freeze request, although in my experience the automated freeze process via the bureaus’ freeze sites has been more or less instantaneous (assuming the request actually goes through).

TransUnion and Equifax both offer free credit lock services, while Experian’s is free for 30 days and $19.99 for each additional month. However, TransUnion says those who take advantage of their free lock service agree to receive targeted marketing offers. What’s more, TransUnion also pushes consumers who sign up for its free lock service to subscribe to its “premium” lock services for a monthly fee with a perpetual auto-renewal.

Unsurprisingly, the bureaus’ use of the term credit lock has confused many consumers; this was almost certainly by design. But here’s one basic fact consumers should keep in mind about these lock services: Unlike freezes, locks are not governed by any law, meaning that the credit bureaus can change the terms of these arrangements when and if it suits them to do so.

If you’d like to go ahead with freezing your credit files now, this Q&A post from the Equifax breach explains the basics, and includes some other useful tips for staying ahead of identity thieves. Otherwise, check back here later this month for more details on the new free freeze sites.

Planet DebianDirk Eddelbuettel: AsioHeaders 1.12.1-1

A first update to the AsioHeaders package arrived on CRAN today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release is the first following the initial upload of version 1.11.0-1 in 2015. I had noticed the updated 1.12.1 version a few days ago, and then Joe Cheng surprised me with a squeaky clean PR as he needed it to get RStudio’s websocket package working with OpenSSL 1.1.0.

I actually bumbled up the release a little bit this morning, uploading 1.12.1 first and then 1.12.1-1, as we like having a packaging revision. Old habits die hard. So technically CRAN now has both, but we may clean that up and remove the 1.12.1 release from the archive, as 1.12.1-1 is identical but for two bytes in DESCRIPTION.

The NEWS entry follows; it really is just the header update done by Joe plus some Travis maintenance.

Changes in version 1.12.1-1 (2018-09-10)

  • Upgraded to Asio 1.12.1 (Joe Cheng in #2)

  • Updated Travis CI support via newer run.sh

Via CRANberries, there is a diffstat report relative to the previous release, as well as this time also one between the version-corrected upload and the main one.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianMatthew Garrett: The Commons Clause doesn't help the commons

The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #176

Here’s what happened in the Reproducible Builds effort between Sunday September 2 and Saturday September 8 2018:

Patches filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianSven Hoexter: Firefox 60, Yubikey, U2F vs my Google Account

tl;dr: Yes, you can use Firefox 60 in Debian/stretch with your U2F device to authenticate your Google account, but you have to use Chrome for the registration.

Thanks to Mike, Moritz and probably others, there's now Firefox 60 ESR in Debian/stretch. So I took it as a chance to finally activate my work YubiKey Nano as a U2F/2FA device for my work Google account. Turns out it's not so simple. Basically Google told me that this browser is not supported and that I should install the trojan horse (Chrome) to use this feature. So I gave in, installed Chrome, logged in to my Google account and added the Yubikey as the default 2FA device. Then I quit Chrome, went back to Firefox and logged in again to my Google account. Bäm, it works! The Yubikey blinks, I can touch it and I'm logged in.

Just in case: you probably want to install "u2f-host" to have "libu2f-host0" available which ships all the udev rules to detect common U2F devices correctly.

Sam VargheseNo first-time starter needs this kind of pressure

At times, even a polished outfit like the All Blacks can get it wrong. When the team was picked for the game against Argentina on Saturday, a number of second choice players were chosen, in order to get them match-ready and also to establish the depth that the team will need as it builds towards the next World Cup in Japan in 2019.

The major change was the presence of Richie Mo’unga at standoff, taking over from the man acknowledged as the best in that position, Beauden Barrett. Thus, there was tremendous pressure on Mo’unga, more so given that Barrett had put in an excellent performance in the previous outing, against Australia, dominating the game and scoring 30 of the team’s 40 points.

This was Mo’unga’s first start in a Test; he had only come on as a substitute in one earlier Test. That was the depth of his experience when he took the field.

But the All Blacks management showed little appreciation of the situation. They gave Barrett the task of being water boy for the night – which meant he would come in contact with Mo’unga every time a conversion was to be taken as he (Barrett) would be the one carrying out the kicking tee.

Did the tin-heads who manage the team not realise that this would put that much pressure on Mo’unga? To be constantly reminded that he had to live up to the extraordinary feats that Barrett had demonstrated in game two of the championship would certainly not have helped Mo’unga’s game.

Doubtless, as Barrett went out to the centre of the pitch, either with water when there was an injury break, or with a tee when a conversion was being taken, he would have spoken to Mo’unga. And this was probably the last thing that Mo’unga needed.

As a result, the man who had shown confidence and remarkable ability in the final two games of the Super Rugby season, with a standout performance in the final as the Crusaders won their ninth title, was a nervy, hesitant player against Argentina. Nobody who ranked the players that night would have given Mo’unga much more than a barely passing grade.

The All Blacks management should have made Barrett sit in the stands and picked any other player as the water carrier for the night. For once, they showed a degree of foolishness which ranks with the period, long ago, when there seemed to be a belief that Leon MacDonald, a moderately competent full-back, could play at any position.

On the plus side, there was the performance of third-string scrum-half Te Toiroa Tahuriorangi. Thomas Tekanapu Rawakata (“TJ”) Perenara was in the starting line-up and young Tahuriorangi played the last 20 minutes or so.

He showed an enormous amount of confidence for one making his debut. There was one delectable touch that led to the final All Blacks try, when he flicked the ball to his left, confident in the knowledge that someone would be there to pick it up and break through. Damian McKenzie did just that and his pass to Jack Goodhue led to the latter going in close to the left upright.

So it looks like the departure of Tawera Kerr-Barlow — who was third in the scrum-half rankings behind Aaron Smith and Perenara — for France to ply his trade will not really affect the All Blacks as they look to build depth in their ranks ahead of the next World Cup. Tahuriorangi is there to pick up the slack.

The degree of depth in the All Blacks’ ranks was demonstrated when lock Brodie Retallick and centre Ngani Laumape were both injured early on in the game. Additionally, full-back Ben Smith had to go off for a head injury assessment.

But the game went on without a hiccup with Sam Whitelock, Anton Lienert-Brown and Damian McKenzie filling the breach. Smith later returned after he was cleared of any injury.

Worse Than FailureCodeSOD: Legacy Switchout

About a decade ago, I attended a talk. The speaker made the argument that "legacy code" may have many possible interpretations, but the practical view was to simply think of legacy code as "code without unit tests". Thus, the solution to modernizing your legacy code was to simply write unit tests. Refactoring the code to make it testable would have the side effect of modernizing the code base, and writing tests would act as documentation. It's that easy.

Andrew is struggling with some legacy code right now. Worse, they're trying to integrate a large mountain of legacy code into a custom, in-house CI/CD pipeline. This particular pile of legacy code dates back to the mid-2000s, so everything in it is glued together via XML. It was some of that XML code which started failing when Andrew threw some unit tests at it.

It doesn't start that bad:

static bool IsNamespace(XName name)
{
    return name.LocalName == "xmlns" ||
           name.NamespaceName == "http://www.w3.org/2000/xmlns/";
}

Now, this is a silly method, and even .Net 1.0 had an XmlNamespaceManager for interacting with namespaces. Without knowing more of the overall context, maybe this method was actually useful? At the very least, this method isn't a WTF.

One thing I want you to note, however, is that this method does the sane thing, and simply returns the result of a boolean comparison. That method was written by the same developer, and checked in on the same day as this method:

static bool IsNewtonsoftNamespace(XAttribute attribute)
{
    switch (attribute.Value)
    {
        case "http://james.newtonking.com/projects/json":
            return true;
    }

    switch (attribute.Name.NamespaceName)
    {
        case "http://james.newtonking.com/projects/json":
            return true;
    }

    return false;
}

Between writing IsNamespace and IsNewtonsoftNamespace, the developer apparently forgot that they could just return boolean expressions. They also apparently forgot- or perhaps never knew- what an if statement is supposed to do.

Andrew at least has the code building in their CI solution, but can look forward to many more weeks of fixing test failures by going through code like this.


Planet DebianLars Wirzenius: Short-term contracting work?

I'm starting a new job in about a month. Until then, it'd be really helpful if I could earn some money via a short-term contracting or consulting job. If your company or employer could benefit from any of the following, please get in touch. I will invoice via a Finnish company, not as a person (within the EU, at least, this makes it easier for the clients). I also reside in Finland, if that matters (meaning, meeting outside of Helsinki gets tricky).

  • software architecture design and review
  • coding in Python, C, shell, or code review
  • documentation: writing, review
  • git training
  • help with automated testing: unit tests, integration tests
  • help with Ansible
  • packaging and distributing software as .deb packages

Planet DebianDaniel Pocock: An FSFE Fellowship Representative's dilemma

The FSFE Fellowship representative role may appear trivial, but it is surprisingly complicated. What's best for FSFE, what is best for the fellows and what is best for free software are not always the same thing.

As outlined in my blog Who are/were the FSFE Fellowship?, fellows have generously donated over EUR 1,000,000 to FSFE and one member of the community recently bequeathed EUR 150,000. Fellows want to know that this money is spent well, even beyond their death.

FSFE promised them an elected representative, which may have given them great reassurance about the checks and balances in the organization. In practice, I feel that FSFE hasn't been sincere about this role and it is therefore my duty to make fellows aware of what representation means in practice right now.

This blog has been held back for some time in the hope that things at FSFE would improve. Alas, that is not the case and with the annual general meeting in Berlin only four weeks away, now is the time for the community to take an interest. As fellowship representative, I would like to invite members of the wider free software community to attend as guests of the fellowship and try to help FSFE regain legitimacy.

Born with a conflict of interest

According to the FSFE e.V. constitution, as it was before elections were abolished, the Fellows elected according to §6 become members of FSFE e.V.

Yet all the other fellows who voted, the people being represented, are not considered members of FSFE e.V. Sometimes it is possible to view all fellows together as a unit, a separate organization, The Fellowship. Sometimes not all fellows want the same thing and a representative has to view them each as individuals.

Any representative of this organization, The Fellowship and the individual fellows, has a strong ethical obligation to do what is best for The Fellowship and each fellow.

Yet as the constitution recognizes the representative as a member of FSFE e.V., some people have also argued that he/she should do what is best for FSFE e.V.

What happens when what is best for The Fellowship is not in alignment with what is best for FSFE e.V.?

It is also possible to imagine situations where doing what is best for FSFE e.V. and doing what is best for free software in general is not the same thing. In such a case the representative and other members may want to resign.

Censorship of the Fellowship representatives by FSFE management

On several occasions management argued that communications to fellows need to be "adapted" (read: censored) to help make money. For example, when discussing an email to be sent to all fellows in February about the risk of abolishing elections, the president warned:

"people might even stop to support us financially"

if they found out about the constitutional changes. He subsequently subjected the email to "modification" (read: censorship) by other people.

This was not a new theme: in a similar discussion in August 2017 about communications from the representatives, another senior member of the executive team had commented:

"It would be beneficial if our PR team could support in this, who have the experience from shaping communication in ways which support retention of our donors."

A few weeks later, on 20 March, FSFE's management distributed a new "communications policy" (read: censorship policy), requiring future emails to prioritize FSFE's interests and mandating that all emails go through the PR team, who act as the censors. As already explained, a representative has an ethical obligation to prioritize the interests of the people represented, The Fellowship, not FSFE's interests. The communications policy appears deliberately incompatible with that obligation.

As the elected representative of a 1500-strong fellowship, it seems obscene that communications to the people represented are subject to censorship by the very staff the representative scrutinizes. The situation is even more ludicrous when the organization concerned claims to be an advocate of freedom.

This gets to the core of our differences: FSFE appeared to be hoping a representative would be a stooge, puppet or cheerleader whose existence might "support retention of ... donors". Personally, I never imagined myself like that. Given the generosity of fellows and the large amounts of time and money contributed to FSFE, I feel obliged to act as a genuine representative, ensuring money already donated is spent effectively on the desired objectives and ensuring that communications are accurate. FSFE management appear to hope their clever policy document will mute those ambitions.

Days later, on 25 March, FSFE management announced the extraordinary general meeting to be held in the staff office in Berlin, to confirm the constitutional change and as a bonus, try to abruptly terminate the last representative, myself. Were these sudden changes happening by coincidence, or rather, a nasty reprisal for February's email about constitutional changes? I had simply been trying to fulfill my ethical obligations to fellows and suddenly I had become persona non grata.

When I first saw this termination proposal in March, it really made me feel quite horrible. They were basically holding a gun to my head and planning a vote on whether to pull the trigger. For all purposes, it looked like gangster behavior happening right under my nose in a prominent free software organization.

Both the absurdity and hostility of these tactics was further underlined by taking this vote on my role behind my back on 26 May, while I was on a 10 day trip to the Balkans pursuing real free software activities in Albania and Kosovo, starting with OSCAL.

In the end, while the motion to abolish elections was passed and fellows may never get to vote again, only four of the official members of the association backed the abusive motion to knife me and that motion failed. Nonetheless, it left me feeling I would be reluctant to trust FSFE again. An organization that relies so heavily on the contributions of volunteers shouldn't even contemplate treating them, or their representatives, with such contempt. The motion should never have been on the agenda in the first place.

Bullet or boomerang?

In May, I thought I missed the bullet but it appears to be making another pass.

Some senior members of FSFE e.V. remain frustrated that a representative's ethical obligations can't be hacked with policy documents and other juvenile antics. They complain that telling fellows the truth is an act of treason and speaking up for fellows in a discussion is a form of obstruction. Both of these crimes are apparently grounds for reprisals, threats, character assassination and potentially expulsion.

In the most outrageous act of scapegoating, the president has even tried to suggest that I am responsible for the massive exodus from the fellowship examined in my previous blog. The chart clearly shows the exodus coincides with the attempt to force-migrate fellows to the supporter program, long after the date when I took up this role.

Senior members have sent me threats to throw me out of office, most recently the president himself, simply for observing the basic ethical responsibilities of a representative.

Leave your conscience at the door

With the annual general meeting in Berlin only four weeks away, the president is apparently trying to assemble a list of people to throw the last remaining representative out of the association completely. It feels like something out of a gangster movie. After all, altering and suppressing the results of elections and controlling the behavior of the candidates are the modus operandi of dictators and gangsters everywhere.

Will other members of the association exercise their own conscience and respect the commitment of representation that was made to the community? Or will they leave their conscience at the door and be the president's puppets, voting as a bloc as in many previous general meetings?

The free software ecosystem depends on the goodwill of volunteers and donors, a community that can trust our leaders and each other. If every free software organization behaved like this, free software wouldn't exist.

A president who conspires to surround himself with people who agree with him, appointing all his staff to be voting members of the FSFE e.V. and expelling his critics appears unlikely to get far promoting the organization's mission when he first encounters adults in the real world.

The conflict of interest in this role is not of my own making, it is inherent in FSFE's structure. If they do finally kill off the last representative, I'll wear it like a badge of honor, for putting the community first. After all, isn't that a representative's role?

As the essayist John Gardner wrote:

“The citizen can bring our political and governmental institutions back to life, make them responsive and accountable, and keep them honest. No one else can.”

Planet DebianNorbert Preining: Marko Lukša: Kubernetes in Action

The rise of Kubernetes as one of the most important tools for devops engineers and developers is beyond discussion. But until I moved to my current company I never had any chance to actually use Docker, not to speak of Kubernetes. But it became necessary for me to learn it, so …

I chose the Kubernetes book Kubernetes in Action from Manning, mostly because I have had very good experiences with Manning books (and have actually collected quite a number of them), and I wasn’t disappointed.

The book explains practically everything I will ever need, and much more, with lots of examples, well-designed graphics, and great attention to detail. It is structured into an initial part, “Overview”, which gives a very light intro to Kubernetes and Docker. The second part, “Core Concepts”, introduces in 8 well-separated chapters everything I had to use for the micro-service deployment of the application I have developed. The final part, “Beyond the basics”, goes into more advanced details and specifics relevant for cluster administrators.

If there is something I miss in the book, it is Rancher: while the last chapter briefly discusses systems built on top of Kubernetes, namely OpenShift, Deis Workflow (unsupported; final release in 2017) and Helm, another very popular platform, Rancher, has been left out, although I have had very good experiences with it.

A very recommendable book if one wants to learn about Kubernetes.

,

Planet DebianHideki Yamane: Earthquake struck Hokkaido and caused blackout, but security.d.o run without trouble

In December 2014, a security.debian.org mirror came to Hokkaido, Japan. And in September 2018, a huge earthquake (magnitude 6.7) hit Hokkaido. It was a surprise, because the Japanese government had estimated the probability of such a large earthquake hitting Hokkaido at less than 0.2% within 30 years.


Photos: after the earthquake (left) and before (right).

And it caused a blackout for the whole of Hokkaido, which of course included the Sakura Internet Ishikari DC. The Ishikari DC ran on an emergency power supply for almost 60 hours(!), so the security mirror ran without any errors.


Planet DebianIustin Pop: Printing paper: matte vs. glossy revisited

Let’s revisit some choices… whether they were explicit or not.

For the record, a Google search for “matte vs glossy” says “about 180.000.000 results found”, so it’s like emacs versus vi, except that only gets a paltry 10 million hits.

Tech background

Just a condensed summary that makes some large simplifications, skip if you already know this.

Photographic printing paper is normally of three main types: matte, glossy, and canvas. Glossy is the type one usually finds for normal small prints out of a printing shop/booth, matte is, well, like the normal document print paper, and canvas is really stretchable “fabric”. In the matte camp, there is the smooth vs. textured vs. rag-type (alternatively, smooth, light texture, textured), and in the glossy land, there’s luster (semi-gloss) and glossy (with the very glossy ones being unpleasant to the touch, even). Making some simplifications here, of course. In canvas land, I have no idea ☺

The black ink used for printing differs between glossy and matte, since you need a specific type to ensure that you get the deepest blacks possible for that type of paper. Some printers have two black ink “heads”, others—like (most?) Epson printers—have a single one and can switch between the two inks. This switching is costly since it needs to flush all current ink and then load the new ink, thus it wastes ink.

OK, with this in mind, let’s proceed.

My original paper choices

When I originally bought my photo printer (about five years ago), I thought at the beginning that I’d mostly print on matte paper. Good quality matte paper has a very special feel (in itself), whereas (very) glossy paper is what you usually see cheap prints on (the kind you would have gotten 20 years ago from a photo developing shop). Good glossy paper is much more subdued, but still on the same “shiny” basis (compared to matte).

So I bought my printer, started printing—on matte paper—and because of the aforementioned switching cost, for a while all was good in matte-only land. I did buy quite a few sample packs for testing, including glossy.

Of course, at one point, the issue of printing small (e.g. the usual 10×15cm format) appeared, and because most paper you find in this format in non-specialist stores is glossy, I started printing on glossy as well. And then I did some large format prints also using glossy, and… well, glossy does have the advantage of more “impact” (colours jump up at you much more), so I realised it’s not that bad in glossy land. Time to use/test with all that sample paper!

Thus, I did do quite a bit of experimenting to decide which are my “go-to” papers and settled on four, two matte and two glossy. But because there’s always “need one small photo printed”, I never actively used the matte papers beyond my tests… Both matte papers were smooth matte, since I found the texture effect quite unpleasant with some photos, especially portraits.

So many years passed, with one-off printing, and the usual replacement of all the other colours. But the matte black cartridge still had ~20% ink left that I wasn’t using, so I ended up keeping the original cartridge. Its manufacture date is 2013/08, so it’s more than five years old now. Epson says “for best results, use within 6 months”, so at this time it’s about ten times the recommended age.

Accidental revisiting the matte-vs-glossy

Fast forward to earlier this week: as I was printing a small photo for a friend, it reminded me that the Epson paper I find in shops in Switzerland is much thinner than what I once found in the US, and that for a long time I had wanted to look up what other small-format (10×15cm, A5, 5×7in, etc.) paper I can find in higher quality. I looked at my preferred brands, and found actual fine art paper in small format, but to my surprise, there’s also the option of smooth matte paper!

Small-format matte paper, especially for portraits, sounded very strange; I wondered how this would actually feel (in hand). Some of the best money spent during my paper research was on a sample (printed) book from Hahnemühle in A5 format (this one, which I can’t find on the Hahnemühle web site, hence the link to a shop), which contains almost all their papers with—let’s hope—appropriate subjects. I grab it, search for the specific matte paper I saw available in small format (Photo Rag 308), and… WOW. I couldn’t believe my eyes and fingers. Definitely different than any small photo I’ve (personally) ever seen.

The glossy paper - Fine Art Pearl (285gsm) also looked much superior to the Epson Premium Glossy Photo paper I was using. So, time to make a three-way test.

OK, but that still left a problem - while I do have some (A4) paper of Photo Rag, I didn’t have matte ink; or rather, I had some but a very, very old one. Curiosity got the better of me - at worst, some clogging and some power cleaning (more ink waste), but I had to try it.

I chose one recent portrait photo in some subdued colours, printed (A4) using standard Epson Photo Glossy paper, then printed using Fine Art Pearl (again, what a difference!) and then, prepare to print using Photo Rag… switch black ink, run a quick small test pattern print (OK-ish), and print away. To my surprise, it did manage to print, with no problems even on this on-the-dark-side photograph.

And yes, it was as good as the sample was promising, at least for this photograph. I can only imagine how things will look and feel in small format. And I say feel because a large part of the printed photograph appeal is in the paper texture, not only the look.

Conclusion

So, two takeaways.

First, comparing these three papers showed me that I’ve wasted a lot of prints (for friends/family/etc.) on sub-standard paper. Why did I only focus on large formats and never think about small-paper choices before? It doesn’t make sense, but I’m glad I learned this now, at least.

Second, what’s with the “best used within 6 months”? Sure, 6 months is nothing if you’re a professional (as in, doing this for $day job), so maybe Epson didn’t test more than 1 year lifetimes, but still, I’m talking here about printing after 5 years.

The only thing left now is to actually order some packs and see how a small photo book will look in the matte version. And in any case, I’ve found a better choice even for the glossy option.

What about textured matte?

In all this, where do the matte textured papers fit? Being very textured and quite different from everything I talked about above (Photo Rag is smooth matte), the normal use for these is art reproduction. The naming of this series (for Hahnemühle) is also in line: Albrecht Dürer, William Turner, German and Museum Etching, etc.

The sample book has these papers as well, with the following subjects:

  • Torchon: a photograph of a fountain; so-so effect, IMHO;
  • Albrecht Dürer: abstract art reproduction
  • William Turner: a family picture (a photograph, not a painting)!!
  • German Etching: something that looks like a painting
  • Museum Etching: abstract art

I was very surprised that among all those “art reproductions”, the William Turner one, a quite textured paper, had a well-matching family picture that is, IMHO, excellent. I really don’t have a feel for “would this paper match this photograph” or “what kind of paper would match it”, so I’m often surprised like this. In this case, it wasn’t just passable, it was an excellent match. You can see it on the product page—you need to go to the third picture in the slideshow, and of course that’s the digital picture, not what you get in real life.

Unless I get some epiphany soon, “what can one use textured matte paper for” will remain an unsolved mystery. Or just a research item, assuming I find the time, the same way I find Hahnemühle’s rice paper very cool but I have no idea what to print on it. Ah, amateurs ☺

As usual, comments are welcome.

Planet DebianBits from Debian: New Debian Developers and Maintainers (July and August 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • William Blough (bblough)
  • Shengjing Zhu (zhsj)
  • Boyuan Yang (byang)
  • Thomas Koch (thk)
  • Xavier Guimard (yadd)
  • Valentin Vidic (vvidic)
  • Mo Zhou (lumin)
  • Ruben Undheim (rubund)
  • Daniel Baumann (daniel)

The following contributors were added as Debian Maintainers in the last two months:

  • Phil Morrell
  • Raúl Benencia
  • Brian T. Smith
  • Iñaki Martin Malerba
  • Hayashi Kentaro
  • Arnaud Rebillout

Congratulations!

Sam VargheseTime to rejoice: Serena Williams loses another Grand Slam final

Sunday morning brought glorious news. Serena Williams had been soundly beaten in the US Open women’s final by an unknown Japanese player, Naomi Osaka.

What’s more, Williams blew a fuse — as she has often done in the past — when she was penalised for code violations. This is the third time she has behaved in this ugly manner, but it is unlikely to be the last because she has been fined a pathetic sum yet again.

For an outburst in 2009, she was fined a pathetic $US10,500. In 2011, she was asked to pay $US2000. And this time, she was again fined a sum that is small relative to her earnings – US$24,000. To her, that is chump change.

That is why Williams continues to shout at officials – and in every case these officials are from other minority communities. The umpire who officiated at Saturday’s US Open women’s final was Hispanic; in 2009, the lineswoman she ranted at was Japanese and in 2011, the umpire who penalised her was a Greek woman.

But then Williams, who shouts herself hoarse in claiming that penalties levied on her are because of sexism, cannot see that she is doing something similar and reacting to officials because of racial characteristics. She is blind to everything apart from the blind desire to win somehow.

Williams would have equalled Australian Margaret Court’s record of 24 Grand Slam singles wins if she had beaten Osaka on Saturday. Given that, her loss is all the more enjoyable.

Her histrionics detracted from Osaka’s achievement. But then that is par for the course with Williams; when she wins, it is because she played well. When she loses, it is because she played badly. Her opponent never figures in the picture.

Williams is the worst example of sportsmanship, the type of person who should be thrown out of international sport. She is a spoilt, abusive individual who thinks winning is some kind of divine right.

Pampering such egos is a dangerous thing. Children will learn from such bad examples and even get worse than her. And then all the attraction of competitive tennis will be lost.

The day Williams quits the game, I will buy a bottle of champagne.

Don MartiLook who's defending users from surveillance marketing

What's the best defense against surveillance marketing? In some cases, another surveillance marketer. Just like hackers lock up a vulnerable system after they break in to protect against other hackers, surveillance marketers who know what they're doing are helping to protect users from other companies' data collection practices.

Amazon: Retailers include different degrees of data in email receipts. Amazon only emails consumers links to their full receipts, limiting the information an email provider can extract. Oath gets to know shoppers through their Yahoo emails | Digital - Ad Age

Google: Google’s recent changes with Ads Data Hub keeps data locked within Google Cloud and cannot be combined outside of Google’s controlled environment. As a result, data lakes for marketing are under threat by recent changes by Google. How does Google’s Ads Data Hub Affect My Analytics? (Part III of the Ads Data Hub Series) - Thunder Experience Cloud

Facebook: Late last week Facebook announced it would eliminate all third-party data brokers from its platform. It framed this announcement as a response to the slow motion train wreck that is the Cambridge Analytica story. Just as it painted Cambridge as a “bad actor” for compromising its users’ data, Facebook has now vilified hundreds of companies who have provided it fuel for its core business model, a model that remains at the center of its current travails. Newco Shift | Facebook: Tear Down This Wall.

Real surveillance marketers play defense.

But in most cases, publishers don't. And that's Why Local Newspaper Websites Are So Terrible. What happens when news sites can play some defense of their own?

I don't know, but IMHO it will be an improvement for everybody. And the good news is that browser privacy improvements are finally making it possible.

Let's discuss. WORKSHOP: User Data and Privacy — Building market power for trustworthy publishers, Sept. 26, Chicago | Information Trust Exchange Governing Association

The people who get how Facebook works are also the most likely to leave it

Mind The Gap: Addressing The Digital Brand Deficit

How to Improve the California Consumer Privacy Act of 2018

Planet DebianRussell Coker: Fail2ban

I’ve recently set up fail2ban [1] on a bunch of my servers. Its purpose is to ban IP addresses associated with password guessing – or whatever other criteria for badness you configure. It supports Linux, OpenBSD [2] and probably most Unix-type OSs too. I run Debian so I’ve been using the Debian packages of fail2ban.

The first thing to note is that it is very easy to install and configure (for the common cases at least). For a long time installing it had been on my todo list, but I didn’t make the time to do it; after installing it I realised that I should have done it years ago, it was so easy.

Generally to configure it you just create a file under /etc/fail2ban/jail.d with the settings you want; any settings that differ from the defaults will override them. For example if you have a system running dovecot on the default ports and sshd on port 999 then you could put the following in /etc/fail2ban/jail.d/local.conf:

[dovecot]
enabled = true

[sshd]
port = 999

By default the Debian package of fail2ban only protects sshd.
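
The ban policy itself can be tuned in the same style of file. As a minimal sketch (bantime, findtime and maxretry are standard fail2ban settings; the values below are purely illustrative, not recommendations), you could add something like:

[DEFAULT]
# ban offending addresses for an hour
bantime = 3600
# count failures over a 10 minute window
findtime = 600
# ban after 5 failed attempts
maxretry = 5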

When fail2ban is running on Linux the command “iptables -L -n -v|grep f2b” will show the rules that match inbound traffic and the names of the chains they direct traffic to. To see if fail2ban has acted to protect a service you can run a command like “iptables -L f2b-sshd -n” to see the iptables rules.
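
Another option (not covered in the original post, but part of stock fail2ban) is the fail2ban-client tool, which can list the active jails, show which addresses are currently banned, and unban an address if you manage to lock yourself out. For example:

fail2ban-client status
fail2ban-client status sshd
fail2ban-client set sshd unbanip 192.0.2.1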

The fail2ban entries in the INPUT chain go before other rules, so it should work with any custom iptables rules you have configured as long as either fail2ban is the last thing to be started or your custom rules don’t flush old entries.

There are hooks for sending email notifications etc.; that seems excessive to me, but it’s always good to have options to extend a program.

In the past I’ve tried using kernel rate limiting to minimise hostile activity. That didn’t work well as there are legitimate end users who do strange things (like a user who setup their web-cam to email them every time it took a photo).

Conclusion

Fail2ban has some good features. I don’t think it will do much good at stopping account compromise: anything that is easily guessed could be guessed from many IP addresses, and anything that has a good password can’t be guessed without many years of brute-force attacks that would also cause enough noise in the logs to be noticed. What it does do is get rid of some of the noise in log files, which makes it easier to find and fix problems. To me the main benefit is to improve the signal to noise ratio of my log files.

Planet DebianClint Adams: Firefox Extensions and Other Tragedies

Several months ago a Google employee told me not to panic about the removal of XUL because Firefox had probably mainlined the functionality I need from my ossified xul-ext packages. This appears to have been wildly inaccurate.

Antoine and Paul appear to have looked into such matters and I am not filled with optimism.

In preparation for the impending doom that is a firefox migration to buster, I have finally ditched RequestPolicy by turning uBlock Origin up to 11.

This means that I am only colossally screwed by a lack of replacements for Pentadactyl and Cookie Monster.

It appears that Waterfox is not in Debian so I cannot try that out.


,

Planet DebianJoey Hess: usb drives with no phantom load

For a long time I've not had any network attached storage at home, because it's offgrid and the power budget didn't allow it. But now I have 16 terabytes of network attached storage that uses no power at all when it's not in use, and automatically spins up on demand.

I used a USB hub with per-port power control. But even with a USB drive's port powered down, there's a parasitic draw of around 3 watts per drive. Not a lot, but with 4 drives that's more power wasted than leaving a couple of ceiling lights on all the time. So I put all the equipment behind a relay too, so it can be fully powered down.

I'm using systemd for automounting the drives, and have it configured to power a drive's USB port on and off as needed using uhubctl. Working out how to do this was kind of tricky, but it works very well.

Here's the mount unit for a drive, media-joey-passport.mount:

[Unit]
Description=passport
Requires=startech-hub-port-4.service
After=startech-hub-port-4.service
[Mount]
Options=noauto
What=/dev/disk/by-label/passport
Where=/media/joey/passport

That's on port 4 of the USB hub, the startech-hub-port-4.service unit file is this:

[Unit]
Description=Startech usb hub port 4
PartOf=media-joey-passport.mount
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/sbin/uhubctl -a on -p 4 ; /bin/sleep 20
ExecStop=/usr/sbin/uhubctl -a off -p 4

The combination of PartOf with Requires and After in these units makes systemd start the port 4 service before mounting the drive, and stop it after unmounting. This was the hardest part to work out.

The sleep 20 is a bit unfortunate; it seems that it can take a few seconds for the drive to power up enough for the kernel to see it, and so without that the mount can fail, leaving the drive powered on indefinitely. Seems there ought to be a way to declare an additional dependency and avoid needing that sleep? Update: See my comment below for a better way.

Finally, the automount unit for the drive, media-joey-passport.automount:

[Unit]
Description=Automount passport
[Automount]
Where=/media/joey/passport
TimeoutIdleSec=300
[Install]
WantedBy=multi-user.target

The TimeoutIdleSec makes it unmount after around 5 minutes of not being used, and then its USB port gets powered off.
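
For completeness, and as my addition rather than anything from the original post, units like these would typically be dropped into /etc/systemd/system and then activated roughly like this (note it's the automount unit that gets enabled, not the mount unit):

systemctl daemon-reload
systemctl enable --now media-joey-passport.automount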

I decided to not automate the relay as part of the above, instead I typically turn it on for 5 hours or so, and use the storage whenever I want during that window. One advantage to that is cron jobs can't spin up the drives in the early morning hours.

,

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I'm giving a book talk on Click Here to Kill Everybody at the Ford Foundation in New York City, on September 5, 2018.

  • The Aspen Institute's Cybersecurity & Technology Program is holding a book launch for Click Here to Kill Everybody on September 10, 2018 in Washington, DC.

  • I'm speaking about my book Click Here to Kill Everybody: Security and Survival in a Hyper-connected World at Brattle Theatre in Cambridge, Massachusetts on September 11, 2018.

  • I'm giving a keynote on supply chain security at Tehama's "De-Risking Your Global Workforce" event in New York City on September 12, 2018.

  • I'll be appearing at an Atlantic event on Protecting Privacy in Washington, DC on September 13, 2018.

  • I'll be speaking at the 2018 TTI/Vanguard Conference in Washington, DC on September 13, 2018.

  • I'm giving a book talk at Fordham Law School in New York City on September 17, 2018.

  • I'm giving an InfoGuard Talk in Zug, Switzerland on September 19, 2018.

  • I'm speaking at the IBM Security Summit in Stockholm on September 20, 2018.

  • I'm giving a book talk at Harvard Law School's Wasserstein Hall on September 25, 2018.

  • I'm giving a talk on "Securing a World of Physically Capable Computers" at the University of Rochester in Rochester, New York on October 5, 2018.

  • I'm keynoting at SpiceWorld in Austin, Texas on October 9, 2018.

  • I'm speaking at Cyber Security Nordic in Helsinki on October 10, 2018.

  • I'm speaking at the Cyber Security Summit in Minneapolis, Minnesota on October 24, 2018.

  • I'm speaking at ISF's 29th Annual World Congress in Las Vegas, Nevada on October 30, 2018.

  • I'm speaking at Kiwicon in Wellington, New Zealand on November 16, 2018.

  • I'm speaking at The Digital Society Conference 2018: Empowering Ecosystems on December 11, 2018.

  • I'm speaking at the Hyperledger Forum in Basel, Switzerland on December 13, 2018.

The list is maintained on this page.

CryptogramFriday Squid Blogging: 100-kg Squid Caught Off the Coast of Madeira

News.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: An Unfortunate Sign

"Found this in the School of IT. 404: Women not found. Fairly accurate," wrote Maddie J.

 

"In true Disney fashion, even their servers bid us farewell after 'It's a Small World After All' had wrapped up," writes Daniel H.

 

Lorens wrote, "I thought the GNOME bugs in the new 'Bionic Beaver' Ubuntu were going to drain my battery, but turns out there's a lot left."

 

"You know, I never click on ads, and this one certainally doesn't 'speak' to my needs, but I have to know - what is it advertising?" writes Aaron K.

 

John A. wrote, "We got a new super 'secure' expense reporting tool at work."

 

"I guess so long as 'undefined' doesn't get hacked, I'm ok," James writes.

 


Planet Linux AustraliaBen Martin: A floating shelf for tablets

The choice was between replacing the small marble table entirely or trying to "work around" it with walnut. The lower walnut tabletop is about 44cm by 55cm and is just low enough to give easy access to slide laptop(s) under the main table top. The top floating shelf is wide enough to happily accommodate two iPad-sized tablets. The top shelf and lower tabletop are attached to the backing by steel brackets which pass through to the back via four CNC-created mortises.


Cutting the mortises was interesting: I had to drop back to using a 1/2 inch cutting bit in order to handle the 45mm depth of the timber. The back panel was held down with machining clamps, but toggles would have done the trick; the clamps were just what was on hand at the time. I cut the mortises through from the back using an upcut bit and the front turned out very clean without any blowout. You could probably cut yourself on the finish, it was so clean.

The upcut doesn't make a difference in this job but it is always good to plan and see the outcomes for the next time when the cut will be exposed. The fine grain of walnut is great to work with CNC, though most of my bits are upcut for metal work.

I will likely move on to adding a head rest to the eames chair next. But that is a story for another day.

Don Martianother omission

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Johnny Ryan writes, in ePrivacy: Over-regulation or opportunity?

[A]n ad tech lobby group called ‘IAB Europe’ published a new research study that claimed to demonstrate that the behavioural ad tech companies it represents are an essential lifeline for Europe’s beleaguered publishers....the report claimed that behavioural advertising technology produces a whopping €10.6 billion in revenue for Europe’s publishers.

Surely, the ad tech lobby argued, Parliament would permit websites to use “cookie walls” that force users to consent to behavioural ad tech tracking and profiling their activity across the Web. The logic is that websites need to do this because it is the only way for publishers to stay in business.

We now know that a startling omission is at the heart of this report. Without any indication that it was doing so, the report combined Google and Facebook’s massive revenue from behavioural ad tech with the far smaller amount that Europe’s publishers receive from it.

The IAB omitted any indication that the €10.6 billion figure for “publishers” revenue included Google and Facebook’s massive share too.

That's not the only startling omission. The most often ignored player in the ePrivacy debate is adtech's old frenemy, the racket that's the number two source of revenue for international organized crime and the number three player in targeted behavioral advertising—adfraud.

And ePrivacy, like browser privacy improvements, is like an inconveniently placed motion detector that threatens to expose fraud gangs and fraud-heavy adtech firms.

The same tracking technologies that enable the behavioral targeting that IAB defends are the tracking technologies that make adfraud bots possible. Bots work by visiting legit sites, getting profiled as a high-value user, and then getting tracked while they generate valuable ad impressions for fraud sites. Adfraud works so well today because most browsers support the same kind of site-to-site tracking behavior that a fraudbot relies on.

Unfortunately for those who perpetrate fraud, or just tolerate it and profit from it, browser privacy improvements are making fraud easier to spot. Changes in browsers intended to better implement users' privacy preferences (as Ehsan Akhgari explains in On leveling the playing field and online tracking) have the helpful side effect of making a human-operated browser behave more and more differently from a fraudbot.

And regulations that make it easier for users to protect themselves from being followed from one site to another are another source of anti-fraud power. If bots need to opt in to tracking in order for fraud to work, and most users, when given a clear and fair choice, don't, then that's one more data point that makes it harder for adfraud to hide.

Publishers pay for adfraud. That's because adfraud is no big secret, and it's priced into the market. Even legit publishers are forced to accept a fraud-adjusted price for human ad impressions. I'm sure that not every adtech firm that opposes ePrivacy or browser privacy improvements is deliberately tolerating fraud, but opposing privacy regulations and privacy technologies is the position that tends to conceal and protect fraud. That's the other omission here.

What Apple’s Intelligent Tracking Prevention 2.0 (ITP) Means for Performance Marketing

Who left open the cookie jar? A comprehensive evaluation of third-party cookie policies

Why we need better tracking protection

GDPR triggers many European news sites to reassess their use of third-party content

WE ARE IN AN EFFICIENCY BUBBLE

GDPR: Getting to the Lawful Basis for Processing

If you haven’t already switched to Firefox, do it now

The Long Goodbye (To Facebook)

Changing Our Approach to Anti-tracking

Policing third-party code is essential to digital vendor risk management

War reporters like me will cease to exist if the web giants aren’t stopped | Sammy Ketz

,

Cory DoctorowDon’t just fine Big Tech for abuses; instead, cut them down to size

My latest Locus Magazine column is Big Tech: We Can Do Better Than Constitutional Monarchies, and it’s a warning that the techlash is turning into a devil’s bargain, where we make Big Tech pay for a few cosmetic changes that do little to curb bullying, harassment, and disinformation campaigns, and because only Big Tech can afford these useless fripperies, they no longer have to fear being displaced by new challengers with better ways of doing things.


No. The priority today is making Big Tech be­have itself. Laws like SESTA/FOSTA (the 2018 US law notionally concerned with fighting sex trafficking) and the proposed new EU Copyright Directive (which would require anyone provid­ing a forum for public discourse to invest tens of millions of dollars in copyright filters that would block anything potentially infringing from being published) are the kinds of rules that Big Tech’s winners can comply with, but their nascent future competitors can’t possibly follow. Google and Twitter and Facebook can find the hundreds of millions necessary to comply with EU copyright rules, but the companies that might someday knock them off their perch don’t have that kind of money.

The priorities of the techlash are handing Big Tech’s giants a potentially eternal monopoly on their turf. That’s not so surprising: a monopolist’s first preference is for no regulation, and its close second preference is for lots of regulation, especially the kind of regulation that no competitor could possibly comply with. Today, Big Tech spends hundreds of millions of dollars buying and crushing potential competitors: better for them to spend the money on regulatory compliance and have the state do the work of snuffing out those future rivals in their cradles.

The vision of the “techno utopia” is a democracy: a world where anyone who wants to can participate in the shape of the future, retooling, reconfigur­ing, remapping the systems around them to suit their needs.

The vision of the techlash is a constitutional monarchy. We start by recog­nizing the divine right of Google, Amazon, Facebook, Apple, and the other giants to rule our technology, then we gather the aristocracy – technocrats from government regulatory bodies – to place modest limits on the power of the eternal monarchs.

Big Tech: We Can Do Better Than Constitutional Monarchies [Cory Doctorow/Locus Magazine]

(Image: Heralder, CC-BY-SA)

Krebs on SecurityLeader of DDoS-for-Hire Gang Pleads Guilty to Bomb Threats

A 19-year-old man from the United Kingdom who headed a cybercriminal group whose motto was “Feds Can’t Touch Us” pleaded guilty this week to making bomb threats against thousands of schools.

On Aug. 31, officers with the U.K.’s National Crime Agency (NCA) arrested Hertfordshire resident George Duke-Cohan, who admitted making bomb threats to thousands of schools and a United Airlines flight traveling from the U.K. to San Francisco last month.

One of many tweets from the attention-starved Apophis Squad, which launched multiple DDoS attacks against KrebsOnsecurity and Protonmail over the past few months.

Duke-Cohan — a.k.a. “7R1D3N7,” “DoubleParallax” and “Optcz1” — was among the most vocal members of a group of Internet hooligans that goes by the name “Apophis Squad,” which for the better part of 2018 has been launching distributed denial-of-service (DDoS) attacks against multiple Web sites, including KrebsOnSecurity and Protonmail.com.

Incredibly, all self-described members of Duke-Cohan’s clique were active users of Protonmail, even as they repeatedly attacked its servers and taunted the company on social media.

“What we found, combined with intelligence provided by renowned cyber security journalist Brian Krebs, allowed us to conclusively identify Duke-Cohan as a member of Apophis Squad in the first week of August, and we promptly informed law enforcement,” Protonmail wrote in a blog post published today. “British police did not move to immediately arrest Duke-Cohan however, and we believe there were good reasons for that. Unfortunately, this meant that through much of August, ProtonMail remained under attack, but due to the efforts of Radware, ProtonMail users saw no impact.”

The DDoS-for-hire service run by Apophis Squad listed their members.

On Aug. 9, 2018, the attention-seeking Apophis Squad claimed on their Twitter account that flight UAL 949 had been grounded due to their actions.

“In a recording of one of the phone calls which was made while the plane was in the air, he takes on the persona of a worried father and claims his daughter contacted him from the flight to say it had been hijacked by gunmen, one of whom had a bomb,” the NCA said of Duke-Cohan’s actions in a press release on Sept. 4. “On arrival in San Francisco the plane was the subject of a significant security operation in a quarantined area of the airport. All 295 passengers had to remain on board causing disruption to onward journeys and financial loss to the airline.”

The Apophis Squad modeled itself after the actions of the Lizard Squad, another group of e-fame seeking online hoodlums who also ran a DDoS-for-hire service, called in bomb threats to airlines, DDoSed this Web site repeatedly and whose members were nearly all subsequently arrested and charged with various cybercrimes. Indeed, the Apophis Squad’s Web site and DDoS-for-hire service is hosted on the same Internet server used by a handful of other domains that were tied to the Lizard Squad.

Unsophisticated but otherwise time-wasting and annoying groups like Apophis Squad are a dime a dozen. But as I like to say, each time my site gets attacked by one of them two things usually happen not long after: Those responsible get arrested, and I get at least one decent story out of it. And if Protonmail is right, there are additional charges on the way.

“We believe further charges are pending, along with possible extradition to the US,” the company said. “In recent weeks, we have further identified a number of other individuals engaged in attacks against ProtonMail, and we are working with the appropriate authorities to bring them to justice.”

Worse Than FailureCodeSOD: Not a Not Bad Approach

In terms of elegance, I think the bitmask has a unique beauty. The compactness of your expression, the simple power of bitwise operators, and the way you can see the underlying implementation of numbers laid bare just speaks to me. Of course, bitmasks can be a bit opaque, and you may have to spend some time thinking about what foo &= 0xFF0000 is actually doing, but there’s also something alluring about it.

Of course, bitmasks are surprisingly hard. For example, let’s look at some code submitted anonymously. This code is meant to run on a line of coin-operated dryers. Depending on the particular install, how many coins a patron puts in, what modes have been enabled by the owner, and so on, different “extra” features might be disabled or not disabled.

Since bitmasks are hard, we’re not going to use one. Instead, let’s have a pile of constants, like so:

#define DRYER_EXTRA_1    1
#define DRYER_EXTRA_2    2
#define DRYER_EXTRA_3    3
#define DRYER_EXTRA_4    4
#define DRYER_NOT_EXTRA_1    6
#define DRYER_NOT_EXTRA_2    7
#define DRYER_NOT_EXTRA_3    8
#define DRYER_NOT_EXTRA_4    9

Now, how might we use those constants? Why, we’re going to use constructs like this:

if(supportedExtra == DRYER_EXTRA_1)
{
    notSupportedExtrasList[extra1] = DRYER_NOT_EXTRA_1;
}
else
{
    notSupportedExtrasList[extra1] = DRYER_EXTRA_1;
}

A double negative is not a not bad idea. If this dryer is configured to support DRYER_EXTRA_1, then we’ll add DRYER_NOT_EXTRA_1 to the list of unsupported extras. There’s a variation on this code for every possible extra, and they all use the double-negative pattern.

As extras are being selected, however, we need to check against things like if there’s enough money in the machine. Thus, we see patterns like:

if(isEnoughMoney == FALSE)
{
    selectedExtra &= ~DRYER_EXTRA_1;
    //so this is a bit mask now?
}

//...some more lines later
if(selectedExtra == DRYER_EXTRA_1)
{
    //we're back to treating it as integer?
}

//...some more lines later
if((selectedExtra & DRYER_EXTRA_1) == DRYER_EXTRA_1)

Our developer here gets confused about the nature of the constants, swinging from manipulating them as bitmasks (which won’t work, as the constants aren’t powers of two), to treating them as regular numbers (which could work, but maybe isn’t the best way to design this), and back to bitmasks again.

You may be shocked to learn that this entire module doesn’t work, and our submitter has been called in to junk it and rewrite it from scratch. They’ll be using bitmasks.
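
For contrast, here is roughly what a bitmask-friendly version might look like. This is just my illustrative sketch (the constant names echo the original, but none of this is the submitter's actual rewrite): each extra gets its own power-of-two bit, so the &=, ~ and & operations from the original finally do what they appear to do.

#include <stdio.h>

/* One bit per extra, so extras can be combined, cleared and tested independently. */
#define DRYER_EXTRA_1 (1u << 0)
#define DRYER_EXTRA_2 (1u << 1)
#define DRYER_EXTRA_3 (1u << 2)
#define DRYER_EXTRA_4 (1u << 3)

int main(void)
{
    unsigned supportedExtras = DRYER_EXTRA_1 | DRYER_EXTRA_3; /* this machine supports extras 1 and 3 */
    unsigned selectedExtras  = DRYER_EXTRA_1 | DRYER_EXTRA_3; /* the patron asked for extras 1 and 3 */
    int isEnoughMoney = 0;                                    /* pretend the coin box is short */

    /* Drop any selection the machine doesn't actually support. */
    selectedExtras &= supportedExtras;

    /* Not enough money: clear extra 1 without touching the other bits. */
    if (!isEnoughMoney)
        selectedExtras &= ~DRYER_EXTRA_1;

    /* Testing a single bit is a plain AND; no "not extra" constants needed. */
    if (selectedExtras & DRYER_EXTRA_3)
        printf("extra 3 selected\n");
    if (!(selectedExtras & DRYER_EXTRA_1))
        printf("extra 1 not selected\n");

    return 0;
}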


Planet Linux AustraliaAnthony Towns: Money Matters

I have a few things I need to write, but am still a bit too sick with the flu to put together something novel, so instead I’m going to counter-blog Rob Collins’ recent claim that Money doesn’t matter. Rob’s thoughts are similar to ones I’ve had before, but I think they’re ultimately badly mistaken.

There’s three related, but very different, ways of thinking about money: as a store of value, as a medium of exchange, and as a unit of account. In normal times, dollars (or pounds or euros) work for all three things, so it’s easy to confuse them, but when you’re comparing different moneys some are better at one than another, and when a money starts failing, it will generally fail at each purpose at different rates.

Rob says “Money isn’t wealth” — but that’s wrong. In so far as money serves as a store of value, it is wealth. That’s why having a million dollars in your bank account makes you feel wealthy. The obvious failure mode for store of value is runaway inflation, and that quickly becomes a humanitarian disaster. Money can be one way to store value, but it isn’t the only way: you can store value by investing in artwork, buying property, building a company, or anything else that you expect to be able to sell at some later date. The main difference between those forms of investment versus money is that, ideally, monetary investments have low risk (perhaps the art you bought goes out of fashion and becomes worthless, or the company goes bankrupt, but your million dollars remains a million dollars), and low variance (you won’t make any huge profits, but you won’t make huge losses either). Unlike other assets, money also tends to be very fungible — if you earn $1000, you can spend $100 and have $900 left over; but if you have an artwork worth $1000 it’s a lot harder to sell one tenth of it.

Rob follows up by saying that money is “a thing you can exchange for other things”, which is true — money is a medium of exchange. Ideally it’s cheap and efficient, hard to counterfeit, and easy to verify. This is mostly a matter of technology: pretty gems are good at these things in some ways, coins and paper notes are good in others, cheques kind of work though they’re a bit too easy to counterfeit and a bit too hard to verify, and these days computer networks make credit card systems pretty effective. Ultimately a lot of modern systems have ended up as walled gardens though, and while they’re efficient, they aren’t cheap: whether you consider the 1% fees credit card companies charge, or the 2%-4% fees paypal charges, or the 30% fees from the Apple App Store or Google Play Stores, those are all a lot larger than how much you’d lose accepting a $50 note from someone directly. I have a lot of hope that Bitcoin’s Lightning Network will eventually have a huge impact here. Note that if money isn’t wealth — that is, it doesn’t manage to be a good store of value even in the short term — it’s not a good medium of exchange either: you can’t buy things with it because the people selling will have to immediately get rid of it or they’ll be making a loss; which is why currencies undergoing hyperinflation result in black markets where trade happens in stable currencies.

With modern technology and electronic derivatives, you could (in theory) probably avoid ever holding money. If you’re a potato farmer and someone wants to buy a potato from you, but you want to receive fertilizer for next season’s crop rather than paper money, the exchange could probably be fully automated by an online exchange so that you end up with an extra hundred grams of fertilizer in your next order, with all the details automatically worked out. If you did have such a system, you’d entirely avoid using money as a store of value (though you’d probably be using a credit account with your fertilizer supplier as a store of value), and you’d at least mostly avoid using money as a medium of exchange, but you’d probably still end up using money as a medium of account — that is you’d still be listing the price of potatoes in dollars.

A widely accepted unit of account is pretty important — you need it in order to make contracts work, and it makes comparing different trades much easier. Compare the question “should I sell four apples for three oranges, or two apples for ten strawberries?” with “should I sell four apples for $5, or two apples for $3” and “should I buy three oranges for $5 or ten strawberries for $3?” While I suppose it’s theoretically possible to do finance and economics without a common unit of account, it would be pretty difficult.

This is a pretty key part and it’s where money matters a lot. If you have an employment contract saying you’ll be paid $5000 a month, then it’s pretty important what “$5000” is actually worth. If a few months down the track there’s a severe inflation event, and it’s only worth significantly less, then you’ve just had a severe pay cut (eg, the Argentinian Peso dropped from 5c USD in April to 2.5c USD in September). If you’ve got a well managed currency, that usually means low but positive inflation, so you’ll instead get a 2%-5% pay cut every year — which is considered desirable by economists as it provides an automatic way to devote less resources to less valuable jobs, without managers having to deliberately fire people, or directly cut peoples’ pay. Of course, people tend to be as smart as economists, and many workers expect automatic pay rises in line with inflation anyway.

Rob’s next bit is basically summarising the concept of sticky prices: if there’s suddenly more money to go around, the economy goes weird because people aren’t able to fix prices to match the new reality quickly, causing shortages if there’s more money before there’s higher prices, or gluts (and probably a recession) if there’s less money and people can’t afford to buy all the stuff that’s around — this is what happened in the global financial crisis in 2008/9, though I don’t think there’s really a consensus on whether the blame for less money going around should be put on the government via the Federal Reserve, or the banks, or some other combination of actors.

To summarise so far: money does matter a lot. Having a common unit of account so you can give things meaningful prices is essential, having a convenient store of value that you can use for large and small amounts, and being able to easily trade it for goods and services is a really big deal. Screwing it up hurts people directly, and can be really massively harmful. You could probably use something different for medium of exchange than method of account (eg, a lot of places accepting cryptocurrencies use the cryptocurrency as medium of exchange, but use regular dollars for both store of value and pricing); but without a store of value you don’t have a medium of exchange, and once you’ve got a method of account, having it also work as a store of value is probably too convenient to skip.

But all that said, money is just a tool — generally money isn’t what anyone wants, people want the things they can get with money. Rob phrases that as “resources and productivity”, which is fine; I think the economics jargon would be “real GDP” — ie, the actual stuff that goes into GDP, as opposed to the dollar figure you put on it. Things start going wonky quickly though, in particular with the phrase “If, given the people currently in our country, and what they are being paid to do today, we have enough resources, and enough labour-and-productivity to …” — this starts mixing up nominal and real terms: people expect to be paid in dollars, but resources and labour are real units. If you’re talking about allocating real resources rather than dollars, you need to balance that against paying people in real resources rather than dollars as well, because that’s what they’re going to buy with their nominal dollars.

Why does that matter? Ultimately, because it’s very easy to get the maths wrong and not have your model of the national economy balanced: you allocate some resources here, pay some money there, then forget that the people you paid will use that money to reallocate some resources. If the error’s large enough and systemic enough, you’ll get runaway inflation and all the problems that go with it.

Rob has a specific example here: an unemployed (but skilled) builder, and a homeless family (who need a house built). Why not put the two together, magic up some money to prime the system and build a house? Voila the builder has a job, and the family has a home and everyone is presumably better off. But you can do the same thing without money: give the homeless family a loaded gun and introduce them to the builder: the builder has a job, and the family get a home, and with any luck the bullet doesn’t even get used! The key problem was that we didn’t inspect the magic sufficiently: the builder doesn’t want a job, or even money, he wants the rewards that the job and the money obtain. But where do those rewards come from? Maybe we think the family will contribute to the economy once they have a roof over their heads — if so, we could commit to that: forget the gun, the family goes to a bank, demonstrates they’ll be able to earn an income in future, and takes out a loan, then goes to the builder and pays for their house, and then they get jobs and pay off their mortgage. But if the house doesn’t let the family get jobs and pay for the home, the things the builder buys with his pay have to come from somewhere, and the only way that can happen is by making everyone else in the country a little bit poorer. Do that enough, and everyone who can will move to a different country that doesn’t have that problem.

Loans are a serious answer to the problem in general: if the family is going to be able to work and pay for the house eventually, the problem isn’t one of money, it’s one of risk: whoever currently owns the land, or the building supplies, or whatever doesn’t want to take the risk they’ll never see anything for letting the house get built. But once you have someone with funds who is willing to take the risk, things can start happening without any change in government policies. Loaning directly to the family isn’t the only way; you could build a set of units on spec, and run a charity that finds disadvantaged families, and sets them up, and maybe provide them with training or administrative support to help them get into the workforce, at which point they can pay you back and you can either turn a profit, or help the next disadvantaged family; or maybe both.

Rob then asks himself a bunch of questions, which I’ll answer too:

  • What about the foreign account deficit? (It doesn’t matter in the first place, unless perhaps you’re anti-immigrant, and don’t want foreigners buying property)
  • What about the fact that lots of land is already owned by someone? (There’s enough land in Australia outside of Sydney/Melbourne that this isn’t an issue; I don’t have any idea what it’s like in NZ, but see Tokyo for ways of housing people on very little land if it is a problem)
  • How do we fairly get the family the house they deserve? (They don’t deserve a house; if they want a nice house, they should work and save for it. If they’re going through hard times, and just need a roof over their heads, that’s easily and cheaply done, and doesn’t need a lot of space)
  • Won’t some people just ride on the coat-tails of others? (Yes, of course they will. That’s why you target the assistance to help them survive and get back on their feet, and if they want to get whatever it is they think they deserve, they can work for it, like everyone else)
  • Isn’t this going to require taking things other people have already earnt? (Generally, no: people almost always buy houses with loans, for instance, rather than being given them for free, or buying them outright; there might be a need to raise taxes, but not to fundamentally change them, though there might be other reasons why larger reform is worthwhile)

This brings us back to the claim Rob makes at the start of his blog: that the whole “government cannot pay for healthcare” thing is nonsense. It’s not nonsense: at the extreme, government can’t pay for enough healthcare for everyone to live to 120 while feeling like they’re 30. Even paying enough for everyone to have the best possible medical care isn’t feasible: even if NZ has a uniform health care system with 100% of its economy devoted to caring for the sick and disabled, there’s going to be a specialist facility somewhere overseas that does a better job. If there isn’t a uniform healthcare system (and there won’t be, even if only due to some doctors/nurses being individually more talented), there’ll also be better and worse places to go in NZ. The reason we have worrying fiscal crises in healthcare and aged support isn’t just a matter of money that can be changed with inflation, it’s that the real economic resources we’re expecting to have don’t align with the promises we’re already making. Those resources are usually expressed in dollar terms, but that’s because having a unit of account makes talking about these things easier: we don’t have to explicitly say “we’ll need x surgeons and y administrators and z MRI machines and w beds” but instead can just collect it all and say “we’ll need x billion dollars”, and leave out a whole mass of complexity, while still being reasonably accurate.

(Similar with “education” — there are limits to how well you can educate everyone, and there’s a trade off between how many resources you might want to put into educating people versus how many resources other people would prefer. In a democracy, that’s just something that’s going to get debated. As far as land goes, on the other hand, I don’t think there’s a fundamental limit to the government taking control over land it controls, though at least in Australia I believe that’s generally considered to be against the vibe of the constitution. If you want to fairly compensate land holders for taking their land, that goes back to budget negotiations and government priorities, and doesn’t seem very interesting in the abstract)

Probably the worst part of Rob’s blog is this though: “We get 10% less things done. Big deal.” Getting 10% less things done is a disaster; for comparison, the Great Recession in the US had a GDP drop of less than half that, at -4.2% between 2007Q4 and 2009Q2, and the Great Depression was supposedly about -15% between 1929 and 1932. Also, saying “we’d want 90% of folk not working” is pretty much saying “90% of folk have nothing of value to contribute to anyone else”, because if they did, they could do that, be paid for it, and voila, they’re working. That simply doesn’t seem plausible to me, and I think things would get pretty ugly if it ended up that way despite its implausibility.

(Aside: for someone who’s against carbs, “potato farmer” as the go to example seems an interesting choice… )

Rondam RamblingsThis isn't resistance, it's treason

Much as I want to see Donald Trump thwarted from advancing his xenophobic, misogynistic and downright dangerous agenda, this is not the way.  An anonymous "senior official" in the administration announced in the New York Times today that he (or she) is part of "the resistance inside the Trump Administration."  I am about as sympathetic an audience for that message as you are likely to find, but I

,

Krebs on SecurityBrowser Extensions: Are They Worth the Risk?

Popular file-sharing site Mega.nz is warning users that cybercriminals hacked its browser extension for Google Chrome so that usernames and passwords submitted through the browser were copied and forwarded to a rogue server in Ukraine. This attack serves as a fresh reminder that legitimate browser extensions can and periodically do fall into the wrong hands, and that it makes good security sense to limit your exposure to such attacks by getting rid of extensions that are no longer useful or actively maintained by developers.

In a statement posted to its Web site, Mega.nz said the extension for Chrome was compromised after its Chrome Web store account was hacked. From their post:

“On 4 September 2018 at 14:30 UTC, an unknown attacker uploaded a trojaned version of MEGA’s Chrome extension, version 3.39.4, to the Google Chrome webstore. Upon installation or autoupdate, it would ask for elevated permissions (Read and change all your data on the websites you visit) that MEGA’s real extension does not require and would (if permissions were granted) exfiltrate credentials for sites including amazon.com, live.com, github.com, google.com (for webstore login), myetherwallet.com, mymonero.com, idex.market and HTTP POST requests to other sites, to a server located in Ukraine. Note that mega.nz credentials were not being exfiltrated.”

Browser extensions can be incredibly handy and useful, but compromised extensions — depending on the level of “permissions” or access originally granted to them — also can give attackers access to all data on your computer and the Web sites you visit.

For its part, Google tries to communicate the potential risk of extensions using three “alert” levels: Low, medium and high, as detailed in the screenshot below. In practice, however, most extensions carry the medium or high alert level, which means that if the extension is somehow compromised (or malicious from the get-go), the attacker in control of it is going to have access to a ton of sensitive information on a great many Internet users.

In many instances — as in this week’s breach with Mega — an extension gets compromised after someone with legitimate rights to alter its code gets phished or hacked. In other cases, control and ownership of an established extension may simply be abandoned or sold to shady developers. In either scenario, hacked or backdoored extensions can present a nightmare for users.

A basic tenet of cybersecurity holds that individuals and organizations can mitigate the risk of getting hacked to some degree by reducing their overall “attack surface” — i.e., the amount of software and services they rely upon that are potentially vulnerable to compromise. That precept holds fast here as well, because limiting one’s reliance on third-party browser extensions reduces one’s risk significantly.

Personally, I do not make much use of browser extensions. In almost every case I’ve considered installing an extension I’ve been sufficiently spooked by the permissions requested that I ultimately decided it wasn’t worth the risk. I currently trust just three extensions in my Google Chrome installation; two of them are made by Google and carry “low” risk alert levels. The other is a third-party extension I’ve used for years that carries a “medium” risk rating, but that is also maintained by an individual I know who is extremely paranoid and security-conscious.

If you’re the type of person who uses multiple extensions, it may be wise to adopt a risk-based approach going forward. In other words, given the high stakes that typically come with installing an extension, consider carefully whether having a given extension is truly worth it. By the way, this applies equally to plug-ins designed for Web site content management systems like WordPress and Joomla.

At the very least, do not agree to update an extension if it suddenly requests more permissions than a previous version. This should be a giant red flag that something is not right.

Also, never download and install an extension just because a Web site says you need it to view some type of content. Doing otherwise is almost always a high-risk proposition. Here, Rule #1 from KrebsOnSecurity’s Three Rules of Online Safety comes into play: “If you didn’t go looking for it, don’t install it.” Finally, in the event you do wish to install something, make sure you’re getting it directly from the entity that produced the software.

Google Chrome users can see any extensions they have installed by clicking the three dots to the right of the address bar, selecting “More tools” in the resulting drop-down menu, then “Extensions.” In Firefox, click the three horizontal bars next to the address bar and select “Add-ons,” then click the “Extensions” link on the resulting page to view any installed extensions.

CryptogramFriday Squid Blogging: Giant Squid Washes up on Wellington Beach

Another giant squid washed up on a beach, this time in Wellington, New Zealand.

Is this a global trend?

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramUsing a Smartphone's Microphone and Speakers to Eavesdrop on Passwords

It's amazing that this is even possible: "SonarSnoop: Active Acoustic Side-Channel Attacks":

Abstract: We report the first active acoustic side-channel attack. Speakers are used to emit human inaudible acoustic signals and the echo is recorded via microphones, turning the acoustic system of a smart phone into a sonar system. The echo signal can be used to profile user interaction with the device. For example, a victim's finger movements can be inferred to steal Android phone unlock patterns. In our empirical study, the number of candidate unlock patterns that an attacker must try to authenticate herself to a Samsung S4 Android phone can be reduced by up to 70% using this novel acoustic side-channel. Our approach can be easily applied to other application scenarios and device types. Overall, our work highlights a new family of security threats.

News article.

Worse Than FailureCodeSOD: Caught Up in the Captcha

Gregor needed to download a network driver. Upon clicking the link, a "captcha" appeared, presumably to prevent hotlinking to the driver files. It wasn't a real, image-based captcha, but a simple "here's some characters, type them into the box".

The code which popped up was "S i u x q F b j NaN 4". He hit the "new code" button, and got "T o A 0 J V s L NaN a". In fact, "NaN" showed up in the penultimate position in every code.

Curious, Gregor pulled up the debugger to see how the captcha was generated.

function Captcha(){
    var alpha = new Array('A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
                          'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',
                          '0','1','2','3','4','5','6','7','8','9');
    var i;
    for (i=0;i<20;i++){
        var a = alpha[Math.floor(Math.random() * alpha.length)];
        var b = alpha[Math.floor(Math.random() * alpha.length)];
        var c = alpha[Math.floor(Math.random() * alpha.length)];
        var d = alpha[Math.floor(Math.random() * alpha.length)];
        var e = alpha[Math.floor(Math.random() * alpha.length)];
        var f = alpha[Math.floor(Math.random() * alpha.length)];
        var g = alpha[Math.floor(Math.random() * alpha.length)];
        var h = alpha[Math.floor(Math.random() * alpha.length)];
        var i = alpha[Math.floor(Math.random() * alpha.length)];
        var j = alpha[Math.floor(Math.random() * alpha.length)];
    }
    var code = a + ' ' + b + ' ' + ' ' + c + ' ' + d + ' ' + e + ' '+ f + ' ' + g + ' ' + h + ' ' + i + ' ' + j;
    document.getElementById("mainCaptcha").innerHTML = code
    document.getElementById("mainCaptcha").value = code
}

The first thing that you might notice is that they wanted to be really random, so they generate the random code inside of a for loop, which claims to execute twenty times. Twenty random numbers are better than one, right?

Well, obviously not. The loop almost always executes only once. The variable i is not simply the loop variable, but also one of the variables holding the captcha characters. If it's set to a letter, i<20 is false, and the loop breaks. If it's set to a number, the loop executes again (meaning that if you're really unlucky, the loop could possibly execute for a very long time).

It's a bit embarrassing to admit, but I had to think for a minute to figure out why i was NaN, and not just whatever letter it got assigned. Then I remembered- a for loop is just a while loop, and the increment step is executed at the end of the loop. Calling ++ on a string assigns NaN to the variable.

In Gregor's case, typing the captcha, NaN and all, into the input box worked just fine, and he was able to download his drivers.


,

Cory DoctorowI’m heading to New York for a lecture series at Columbia!



Columbia University’s Brown Institute is hosting me for a trio of lectures later this month in New York City: I kick off with a conversation with the Brown Institute’s Dennis Tenen about science fiction, copyright, and the arts on Sept 25, then a lecture on copyright and surveillance on Sept 26, and wrap up with an onstage conversation with Radiolab’s Jad Abumrad about Big Tech, monopolies, and democratic technology on Sept 27. (I’m also dropping by Swarthmore for a lecture on Sept 28, details to follow).

Cory DoctorowInterview with EdSurge about educational technology, school surveillance, open access, and radical pedagogy

At this year’s World Science Fiction, Tina Nazerian from EdSurge interviewed me (MP3) for a podcast about the future of educational technology, open access, surveillance in schools, and educational freedom.

Cory DoctorowInterview with Fringe FM on Surveillance Capitalism, Big Tech and the future of the internet


While at the World Science Fiction Convention, I sat down with Matt Ward from the FringeFM podcast for an interview (MP3) about the future of the internet, and how Shoshanna Zuboff’s notion of surveillance capitalism connects up with mass inequality, the GDPR, the upcoming EU copyright rules, and the future of writing and science fiction.

Krebs on SecurityFor 2nd Time in 3 Years, Mobile Spyware Maker mSpy Leaks Millions of Sensitive Records

mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.

Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.

A list of data points that can be slurped from a mobile device that is secretly running mSpy’s software.

Before it was taken offline sometime in the past 12 hours, the database contained millions of records, including the username, password and private encryption key of each mSpy customer who logged in to the mSpy site or purchased an mSpy license over the past six months. The private key would allow anyone to track and view details of a mobile device running the software, Shah said.

In addition, the database included the Apple iCloud username and authentication token of mobile devices running mSpy, and what appear to be references to iCloud backup files. Anyone who stumbled upon this database also would have been able to browse the Whatsapp and Facebook messages uploaded from mobile devices equipped with mSpy.

Usernames, passwords, text messages and loads of other more personal details were leaked from mobile devices running mSpy.

Other records exposed included the transaction details of all mSpy licenses purchased over the last six months, including customer name, email address, mailing address and amount paid. Also in the data set were mSpy user logs — including the browser and Internet address information of people visiting the mSpy Web site.

Shah said when he tried to alert mSpy of his findings, the company’s support personnel ignored him.

“I was chatting with their live support, until they blocked me when I asked them to get me in contact with their CTO or head of security,” Shah said.

KrebsOnSecurity alerted mSpy about the exposed database on Aug. 30. This morning I received an email from mSpy’s chief security officer, who gave only his first name, “Andrew.”

“We have been working hard to secure our system from any possible leaks, attacks, and private information disclosure,” Andrew wrote. “All our customers’ accounts are securely encrypted and the data is being wiped out once in a short period of time. Thanks to you we have prevented this possible breach and from what we could discover the data you are talking about could be some amount of customers’ emails and possibly some other data. However, we could only find that there were only a few points of access and activity with the data.”

Some of those “points of access” were mine. In fact, because mSpy’s Web site access logs were leaked I could view evidence of my own activity on their site in real-time via the exposed database, as could Shah of his own poking around.

A screen shot of the exposed database. The records shown here are non-sensitive “debug” logs.

WHO IS MSPY?

mSpy has a history of failing to protect data about its customers and — just as critically — data secretly collected from mobile devices being spied upon by its software. In May 2015, KrebsOnSecurity broke the news that mSpy had been hacked and its customer data posted to the Dark Web.

At the time, mSpy initially denied suffering a breach for more than a week, even as many of its paying customers confirmed that their information was included in the mSpy database uploaded to the Dark Web. mSpy later acknowledged a breach to the BBC, saying it had been the victim of a “predatory attack” by blackmailers, and that the company had not given in to demands for money.

mSpy pledged to redouble its security efforts in the wake of the 2015 breach. But more than two weeks after news of the 2015 mSpy breach broke, the company still had not disabled links to countless screenshots on its servers that were taken from mobile devices running mSpy.

mSpy users can track Android and iPhone users, snoop on apps like Snapchat and Skype, and keep a record of everything the target does with his or her phone.

It’s unclear exactly where mSpy is based; the company’s Web site suggests it has offices in the United States, Germany and the United Kingdom, although the firm does not appear to list an official physical address. However, according to historic Web site registration records, the company is tied to a now-defunct firm called MTechnology LTD out of the United Kingdom.

Documents obtained from Companies House, an official register of corporations in the U.K., indicate that the two founding members of the company are self-described programmers Aleksey Fedorchuk and Pavel Daletski. Those records (PDF) indicate that Daletski is a British citizen, and that Mr. Fedorchuk is from Russia. Neither man could be reached for comment.

Court documents (PDF) obtained from the U.S. District Court in Jacksonville, Fla. regarding a trademark dispute involving mSpy and Daletski state that mSpy has a U.S.-based address of 800 West El Camino Real, in Mountain View, Calif. Those same court documents indicate that Daletski is a director at a firm based in the Seychelles called Bitex Group LTD. Interestingly, that lawsuit was brought by Retina-X Studios, an mSpy competitor based in Jacksonville, Fla. that makes a product called MobileSpy.

The latest mSpy security lapse comes days after a hacker reportedly broke into the servers of TheTruthSpy — another mobile spyware-as-a-service company — and stole logins, audio recordings, pictures and text messages from mobile devices running the software.

U.S. regulators and law enforcers have taken a dim view of companies that offer mobile spyware services like mSpy. In September 2014, U.S. authorities arrested 31-year-old Hammad Akbar, the CEO of a Lahore-based company that makes a spyware app called StealthGenie. The FBI noted that while the company advertised StealthGenie’s use for “monitoring employees and loved ones such as children,” the primary target audience was people who thought their partners were cheating. Akbar was charged with selling and advertising wiretapping equipment.

“Advertising and selling spyware technology is a criminal offense, and such conduct will be aggressively pursued by this office and our law enforcement partners,” U.S. Attorney Dana Boente said in a press release tied to Akbar’s indictment.

Akbar pleaded guilty to the charges in November 2014, and according to the Justice Department he is “the first-ever person to admit criminal activity in advertising and selling spyware that invades an unwitting victim’s confidential communications.”

A public relations pitch from mSpy to KrebsOnSecurity in March 2015 stated that approximately 40 percent of the company’s users are parents interested in keeping tabs on their kids. Assuming that is a true statement, it’s ironic that so many parents may now have unwittingly exposed their kids to predators, bullies and other ne’er-do-wells thanks to this latest security debacle at mSpy.

As I wrote in a previous story about mSpy, I hope it’s clear that it is foolhardy to place any trust or confidence in a company whose reason for existence is secretly spying on people. Alas, the only customers who can truly “trust” a company like this are those who don’t care about the privacy and security of the device owner being spied upon.

Cryptogram: New Book Announcement: Click Here to Kill Everybody

I am pleased to announce the publication of my latest book: Click Here to Kill Everybody: Security and Survival in a Hyper-connected World. In it, I examine how our new immersive world of physically capable computers affects our security.

I argue that this changes everything about security. Attacks are no longer just about data; they now affect life and property: cars, medical devices, thermostats, power plants, drones, and so on. All of our security thinking assumes that computers are fundamentally benign: that, no matter how bad the breach or vulnerability is, it's just data. That's simply not true anymore. As automation, autonomy, and physical agency become more prevalent, the trade-offs we made for things like authentication, patching, and supply chain security no longer make any sense. The things we've done before will no longer work in the future.

This is a book about technology, and it's also a book about policy. The regulation-free Internet that we've enjoyed for the past decades will not survive this new, more dangerous, world. I fear that our choice is no longer between government regulation and no government regulation; it's between smart government regulation and stupid regulation. My aim is to discuss what a regulated Internet might look like before one is thrust upon us after a disaster.

Click Here to Kill Everybody is available starting today. You can order a copy from Amazon, Barnes & Noble, Books-a-Million, Norton's webpage, or anyplace else books are sold. If you're going to buy it, please do so this week. First-week sales matter in this business.

Reviews so far from the Financial Times, Nature, and Kirkus.

Worse Than Failure: SLA-p the Salesman

A Service-Level Agreement (SLA) is meant to ensure customer issues receive the attention they deserve based on severity. It also protects the support company from having customers breathing down their neck for frivolous issues. All of the parameters are agreed upon in writing ahead of time and both sides know the expectations. That is, until a salesman starts to meddle and mess things up, as happened at the place Dominick worked for.

Dominick was a simple remote support tech who fixed things for clients well ahead of the SLA. On the rare occasion there was a priority 1 issue - something stopping anyone in the company from doing work - they had 24 hours to fix it before large monetary penalties would start to rack up. One Friday a priority 4 issue (5 business day SLA) came in from the CFO of a new client. The ticket was assigned to Dominick, who had higher priority work to do for other clients, so he decided it could wait until the following week.

Canon ir2270

Dominick came in Monday morning to find Benjamin, a senior salesman who happened to be a personal friend of the CFO, sitting on his desk with his huge arms crossed. Benjamin glanced at his watch to see it was 7:59 AM. "About time you showed up, Dom. I found out you didn't do the ticket that came in Friday and I want an explanation!"

Still in a pre-coffee Monday morning haze, Dominick had to think for a second to figure out what he was talking about. "Oh... that thing about ordering a new printer? That was only priority 4 and it literally said 'no rush' in it. I have 4 more days to get it done."

Benjamin sprang up off Dom's desk and used his beefy arms to forcefully shove an index finger into his chest. "You don't get it, do you, bro?? When I made this deal with them, I assured them anything would be treated with the highest priority!" Ben shouted while spraying an unsanitary amount of saliva droplets. "I don't care what your silly numbering system says, it needs to get done today!"

"Ok... well let me sit down and look at it," Dominick said timidly while rubbing the spot on his chest that received a mean poking. Benjamin stormed off to presumably consume another protein shake. He pulled up the ticket about ordering a new printer for the CFO's office. It seemed he'd read about this top of the line printer in some tech magazine and really wanted it. The problem was the printer wasn't even on the market yet - it would be released at the end of the month. Since there was literally nothing Dominick could do to get the printer, he closed the ticket and asked that a new one be submitted when the printer was available.

Later that afternoon, Dominick heard stomping behind him and before he could turn around, Benjamin spun him around in his chair and got in his face. "Hey there, bro. Where is my guy's printer?? He told me you closed his submission without ordering it!"

Dominick stood up to defend himself and weakly poked Ben in the chest. "Listen, bro! He wants a printer that isn't out yet. The best I can do is pre-order it and have it shipped to him in a couple weeks. I closed the ticket so we don't get dinged on the SLA to get this done in 5 days."

Benjamin furrowed his brow and got back within saliva-spraying distance, "You'll have to do better than that, Dom! While you were screwing around not resolving this I made an addendum to their SLA. Any ticket submitted by a CxO level executive will be treated as priority 1 by us. So you better pull whatever techie nerd strings you have to get that printer ordered in the next 24 hours!"

After Benjamin stormed off yet again, the reality of what he had done set in. Since the SLA for the printer was now 24 hours, they would start getting charged penalties by tomorrow. Dominick quickly began crafting an email to senior management to explain the situation and why the request couldn't be met. He wasn't sure what sort of "techie nerd" resources Benjamin thought he had, but it wasn't going to happen.

Predictably, the situation didn't end well. The financial penalties started adding up the following day, and the next day, and so on. It became so expensive that it was more cost-effective to pay the client to modify the addendum to the SLA that Benjamin made (they couldn't be compelled to do so otherwise) than to continue to rack up fines.

The end of the month came and the world's most expensive printer finally shipped, which was a relief to everyone. But that also meant the end-of-month financial statements showed the huge deficit caused by it. To compensate, the company decided to lay off 20% of the support staff including Dominick. Benjamin, of course, got to keep his job where he always put customer needs first.


Planet Linux Australia: Lev Lafayette: New Developments in Supercomputing

Over the past 33 years the International Supercomputing Conference (ISC) in Germany has become one of the world's major computing events, with its twice-yearly announcement of the Top500 systems, a list that continues to be dominated entirely by Linux systems. In June this year over 3,500 people attended ISC with a programme of tutorials, workshops and miniconferences, poster sessions, student competitions, a vast vendor hall, and numerous other events.

This presentation gives an overview of ISC and attempts to cover many of the new developments and directions in supercomputing, including new systems, metrics measurement, machine learning, and HPC education. In addition, the presentation will also feature material from the HPC Advisory Council conference held in Fremantle in August.

Planet Linux Australia: Michael Still: Kubernetes Fundamentals: Setting up nginx ingress


I’m doing the Linux Foundation Kubernetes Fundamentals course at the moment, and I was very disappointed in the chapter on Ingress Controllers. To be honest it feels like an afterthought — there is no lab, and the provided examples don’t work if you re-type them into Kubernetes (you can’t cut and paste of course, just to add to the fun).

I found this super annoying, so I thought I’d write up my own notes on how to get nginx working as an Ingress Controller on Kubernetes.

First off, the nginx project has excellent installation resources online at github. The only wart with their instructions is that they changed the labels used on the pods for the ingress controller, which means the validation steps in the document don’t work until that is fixed. That is reported in a github issue, and there was a proposed fix that pre-dates the issue but was never linked to it.

The basic process, assuming a baremetal Kubernetes install, is this:

$ NGINX_GITHUB="https://raw.githubusercontent.com/kubernetes/ingress-nginx"
$ kubectl apply -f $NGINX_GITHUB/master/deploy/mandatory.yaml
$ kubectl apply -f $NGINX_GITHUB/master/deploy/provider/baremetal/service-nodeport.yaml

Wait for the pods to fetch their images, and then check if the pods are healthy:

$ kubectl get pods -n ingress-nginx
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-6586bc58b6-tn7l5      1/1       Running   0          20h
nginx-ingress-controller-79b7c66ff-m8nxc   1/1       Running   0          20h

That bit is mostly explained by the Linux Foundation course. Well, he links to the github page at least and then you just read the docs. The bit that isn’t well explained is how to setup ingress for a pod. This is partially because kubectl doesn’t have a command line to do this yet — you have to POST an API request to get it done instead.

First, let’s create a target deployment and service:

$ kubectl run ghost --image=ghost
deployment.apps/ghost created
$ kubectl expose deployments ghost --port=2368
service/ghost exposed

The YAML to create an ingress for this new “ghost” service looks like this:

$ cat sample_ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost
spec:
  rules:
  - host: ghost.10.244.2.13.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: ghost
          servicePort: 2368

Where 10.244.2.13 is the IP that my CNI assigned to the nginx ingress controller. You can look that up with a describe of the nginx ingress controller pod:

$ kubectl describe pod nginx-ingress-controller-79b7c66ff-m8nxc -n ingress-nginx | grep IP
IP:                 10.244.2.13

Now we can create the ingress entry for this ghost deployment:

$ kubectl apply -f sample_ingress.yaml 
ingress.extensions/ghost created

This causes the nginx configuration to get re-created inside the nginx pod by magic pixies. Now, assuming we have a route from our desktop to 10.244.2.13, we can just go to http://ghost.10.244.2.13.nip.io in a browser and should be greeted by the default front page for the ghost installation (which turns out to be a publishing platform, who knew?).

To cleanup the ingress, you can use the normal “get”, “describe”, and “delete” verbs that you use for other things in kubectl, with the object type of “ingress”.
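
For example, cleaning up the ghost ingress created above might look something like this:

$ kubectl get ingress
$ kubectl describe ingress ghost
$ kubectl delete ingress ghost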




Worse Than Failure: Classic WTF: Security By Letterhead

It's a holiday in the US, so we're turning back the clock a bit.
How do you make sure nobody issues an unauthorized request for a domain transfer? This registrar has serious security to prevent just that kind of event. You know this must be a classic, because it involves fax machines. Original -- Remy

Security through obscurity is something we've all probably complained about. We've covered security by insanity and security by oblivity. And today, joining their ranks, we have security by letterhead.

John O'Rourke wrote in to tell us that as a part of his job, he often has to help clients transfer domain names. He's had to jump through all kinds of crazy hoops to transfer domain names in the past, including just about everything except literally jumping through hoops. After faxing in a transfer request and receiving a rejection fax an hour later, he knew he was in for a fight.

John called the number on the rejection letter to sort things out.

John: Yes, I'm calling to find out why request number 48931258 to transfer somedomain.com was rejected.
ISP: Oh, it was rejected because the request wasn't submitted on company letterhead.
John: Oh... sure... but... uh, just so we're on the same page, can you define exactly what you mean by 'company letterhead?'
ISP: Well, you know, it has the company's logo, maybe a phone number and web site address... that sort of thing. I mean, your fax looks like it could've been typed by anyone!
John: So you know what my company letterhead looks like?
ISP: Ye... no. Not specifically. But, like, we'd know it if we saw it.
John: And what if we don't have letterhead? What if we're a startup? What if we're redesigning our logo?
ISP: Well, you'd have to speak to customer-
John (clicking and typing): I could probably just pick out a semi-professional-looking MS Word template and paste my request in that and resubmit it, right?
ISP: Look, our policy-
John: Oh, it's ok, I just sent the request back in on letterhead.

The transfer was approved. John smiled, having successfully circumvented the ISP's security armed with sophisticated hacking tools like MS Word templates and a crappy LaserJet printer.


Krebs on Security: Alleged ‘Satori’ IoT Botnet Operator Sought Media Spotlight, Got Indicted

A 20-year-old from Vancouver, Washington was indicted last week on federal hacking charges and for allegedly operating the “Satori” botnet, a malware strain unleashed last year that infected hundreds of thousands of wireless routers and other “Internet of Things” (IoT) devices. This outcome is hardly surprising given that the accused’s alleged alter ego has been relentless in seeking media attention for this global crime machine.

Schuchman, in an undated photo posted online and referenced in a “dox,” which alleged in Feb. 2018 that Schuchman was Nexus Zeta.

The Daily Beast‘s Kevin Poulsen broke the news last week that federal authorities in Alaska indicted Kenneth Currin Schuchman of Washington on two counts of violating the Computer Fraud and Abuse Act by using malware to damage computers between August and November 2017.

The 3-page indictment (PDF) is incredibly sparse, and includes few details about the meat of the charges against Schuchman. But according to Poulsen, the charges are related to Schuchman’s alleged authorship and use of the Satori botnet. Satori, also known as “Masuta,” is a variant of the Mirai botnet, a powerful IoT malware strain that first came online in July 2016.

“Despite the havoc he supposedly wreaked, the accused hacker doesn’t seem to have been terribly knowledgeable about hacking,” Poulsen notes.

Schuchman reportedly went by the handle “Nexus Zeta,” the nickname used by a fairly inexperienced and clumsy ne’er-do-well who has tried on multiple occasions to get KrebsOnSecurity to write about the Satori botnet. In January 2018, Nexus Zeta changed the login page for his botnet control panel that he used to remotely control his hacked routers to include a friendly backhanded reference to this author:

The login prompt for Nexus Zeta’s IoT botnet included the message “Masuta is powered and hosted on Brian Kreb’s [sic] 4head.” To be precise, it’s a 5head.

This wasn’t the first time Nexus Zeta said hello. In late November 2017, he chatted me up on Twitter and Jabber instant message for several days. Most of the communications came from two accounts: “9gigs_ProxyPipe” on Twitter, and ogmemes123@jabber.ru (9gigs_ProxyPipe would later change its Twitter alias to Nexus Zeta, and Nexus Zeta himself admitted that 9gigs_ProxyPipe was his Twitter account.)

In each case, this person wanted to talk about a new IoT botnet that he was “researching” and that he thought deserved special attention for its size and potential disruptive impact should it be used in a massive Distributed Denial-of-Service (DDoS) attack aimed at knocking a Web site offline — something for which Satori would soon become known.

A Jabber instant message conversation with Nexus Zeta on Nov. 29, 2017.

Nexus Zeta’s Twitter nickname initially confused me because both 9gigs and ProxyPipe are names claimed by Robert Coelho, owner of ProxyPipe hosting (9gigs is a bit from one of Coelho’s Skype account names). Coelho’s sleuthing was quite instrumental in helping to unmask 21-year-old New Jersey resident Paras Jha as the author of the original Mirai IoT botnet (Jha later pleaded guilty to co-authoring and using Mirai and is due to be sentenced this month in Alaska and New Jersey). “Ogmemes” is from a nickname used by Jha and his Mirai botnet co-author.

On Nov. 28, 2017, 9gigs_ProxyPipe sent a message to the KrebsOnSecurity Twitter account:

“I have some information in regards to an incredibly dangerous IoT botnet you may find interesting,” the Twitter message read. “Let me know how you would prefer to communicate assuming you are interested.”

We connected on Jabber instant message. In our chats, Ogmemes123 said he couldn’t understand why nobody had noticed a botnet powered by a Mirai variant that had infected hundreds of thousands of IoT devices (he estimated the size of the botnet to be about 300,000-500,000 at the time). He also talked a lot about how close he was with Jha. Nexus Zeta’s Twitter account profile photo is a picture of Paras Jha. He also said he knew this new botnet was being used to attack ProxyPipe.

Less than 24 hours after that tweet from Nexus Zeta, I heard from ProxyPipe’s Coelho. They were under attack from a new Mirai variant.

“We’ve been mitigating attacks recently that are about 270 gigabits [in volume],” Coelho wrote in an email. “Looks like somebody tagged you on Twitter pretending to be from ProxyPipe — likely the attacker? Just wanted to give you a heads up since that is not us, or anyone that works with ProxyPipe.”

From reviewing Nexus Zeta’s myriad postings on the newbie-friendly hacker forum Hackforums-dot-net, it was clear that Nexus Zeta was an inexperienced, impressionable young man who wanted to associate himself with people closely tied to the 2017 whodunnit over the original Mirai IoT botnet variant. He also asked other Hackforums members for assistance in assembling his Mirai botnet:

Some of Nexus Zeta’s posts on Hackforums, where he asks for help in setting up a Mirai botnet variant.

In one conversation with Ogmemes123, I lost my cool and told him to quit running botnets or else go bore somebody else with his quest for publicity. He mostly stopped bugging me after that. That same day, Nexus Zeta spotted a tweet from security researcher Troy Mursch about the rapid growth of a new Mirai-like botnet.

“This is an all-time record for the most new unique IP addresses that I’ve seen added to the botnet in one day,” Mursch tweeted of the speed with which this new Mirai strain was infecting devices.

For weeks after that tweet, Nexus Zeta exchanged private twitter messages with Mursch and his team of botnet hunters at Bad Packets LLC in a bid to get them to Tweet or write about Satori/Masuta.

The following screenshots from their private Twitter discussions, republished with Mursch’s permission, showed that Nexus Zeta kept up the fiction about his merely “researching” the activities of Satori. Mursch played along, and asked gently probing questions about the size, makeup and activities of a rapidly growing Satori botnet.

9gigs_ProxyPipe (a.k.a. Nexus Zeta allegedly a.k.a Kenneth Schuchman) reaches out to security researcher Troy Mursch of Bad Packets LLC.

Early in their conversations, Nexus Zeta says he is merely following the visible daily Internet scanning that Satori generated in a constant search for newly infectable IoT devices. But as their conversations continue over several weeks, Nexus Zeta intimates that he has much deeper access to Satori.

In this conversation from Nov. 29, 2017 between Nexus Zeta/9gigs_Proxypipe and Troy Mursch, the former says he is seeing lots of Satori victims from Argentina, Colombia and Egypt.

Although it long ago would have been easy to write a series of stories about this individual and his exploits, I had zero interest in giving him the attention he clearly craved. But thanks to naivete and apparently zero sense of self-preservation, Nexus Zeta didn’t have to wait long for others to start connecting his online identities to his offline world.

On Dec. 5, Chinese cybersecurity firm Netlab360 released a report on Satori noting that the IoT malware was spreading rapidly to Chinese-made Huawei routers with the help of two security vulnerabilities, including one “zero day” flaw that was unknown to researchers at the time. The report said a quarter million infected devices were seen scanning for vulnerable systems, and that much of the scanning activity traced back to infected systems in Argentina, Colombia and Egypt, the same hotspots that Nexus Zeta cited in his Nov. 29 Twitter chat with Troy Mursch (see screen shot directly above).

In a taunting post published Dec. 29, 2017 titled “Good Zero Day Kiddie,” researchers at Israeli security firm CheckPoint pointed out that the domain name used as a control server to synchronize the activities of the Satori botnet — nexusiotsolutions-dot-net — was registered in 2016 to the email address nexuszeta1337@gmail.com. The CheckPoint report noted the name supplied in the original registration records for that domain was a “Caleb Wilson,” although the researchers correctly noted that this could be a pseudonym.

Perhaps the CheckPoint folks also knew the following tidbit, but chose not to publish it in their report: The email address nexuszeta1337@gmail.com was only ever used to register a single domain name (nexusiotsolutions-dot-net), according to a historic WHOIS record search at Domaintools.com [full disclosure: DomainTools is an advertiser on this site.] But the phone number in that original domain name record was used to register one other domain: zetastress-dot-net (a “stresser” is another name for a DDoS-for-hire-service). The registrant name listed in that original record? You guessed it:

Registrant Name: kenny Schuchman
Registrant Organization: ZetaSec Inc.
Registrant Street: 8709 Ne Mason Dr, No. 4
Registrant City: Vancouver
Registrant State/Province: Washington
Registrant Postal Code: 98662
Registrant Country: US
Registrant Phone: +1.3607267966
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email: kenny.windwmx79@outlook.com

In April 2018 I heard from a source who said he engaged Nexus Zeta in a chat about his router-ravaging botnet and asked what kind of router Nexus Zeta trusted. According to my source, Nexus Zeta shared a screen shot of the output from his wireless modem’s Web interface, which revealed that he was connecting from an Internet service provider in Vancouver, Wash., where Schuchman lives.

The Satori botnet author shared this screen shot of his desktop, which indicated he was using an Internet connection in Vancouver, Washington — where Schuchman currently lives with his father.

“During our discussions, I learned we have the same model of router,” the source said. “He asked me my router model, and I told him. He shared that his router was also an ActionTec model, and sent a picture. This picture contains his home internet address.”

This matched a comprehensive “dox” that someone published on Pastebin in Feb. 2018, declaring Nexus Zeta to be 20-year-old Kenneth Currin Schuchman from Vancouver, Washington. The dox said Schuchman used the aliases Nexus Zeta and Caleb Wilson, and listed all of the email addresses tied to Nexus Zeta above, plus his financial data and physical address.

“Nexus is known by many to be autistic and a compulsive liar,” the dox begins.

“He refused to acknowledge that he was wrong or apologize, and since he has extremely poor opsec (uses home IP on everything), we have decided to dox him.

He was only hung around by few for the servers he had access to.
He lies about writing exploits that were made before his time, and faking bot counts on botnets he made.
He’s lied about having physical contact with Anna Senpai (Author of Mirai Botnet).”

As detailed in the Daily Beast story and Nexus Zeta’s dox, Schuchman was diagnosed with Asperger Syndrome and autism disorder, and at one point when he was 15 Schuchman reportedly wandered off while visiting a friend in Bend, Ore., briefly prompting a police search before he was found near his mother’s home in Vancouver, Wash.

Nexus Zeta clearly had limited hacking skills initially and almost no operational security. Indeed, his efforts to gain notoriety for his illegal hacking activities eventually earned him just that, as it usually does.

But it’s clear he was a quick learner; in the span of about a year, Nexus Zeta was able to progress from a relatively clueless newbie to the helm of an international menace that launched powerful DDoS attacks while ravaging hundreds of thousands of systems.

Planet Linux Australia: Simon Lyall: Audiobooks – August 2018

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari

An interesting listen. Covers the history of humanity and then extrapolates ways things might go in the future. Many plausible ideas (although no doubt some huge misses). 8/10

Higher: A Historic Race to the Sky and the Making of a City by Neal Bascomb

The architects, owners & workers behind the Manhattan Trust Building, the Chrysler Building and the Empire State Building, all being built in New York at the end of the roaring 20s. Fascinating and well done. 9/10

The Invention Of Childhood by Hugh Cunningham

The story of British childhood from the year 1000 to the present. Lots of quotes (read by actors) from primary sources such as letters, which is less distracting than such readings sometimes are. 8/10

The Sign of Four by Arthur Conan Doyle – Read by Stephen Fry

Very well done reading by Fry. Story excellent of course. 8/10

My Happy Days in Hollywood: A Memoir by Garry Marshall

Memoir by writer, producer (Happy Days, etc) and director (Pretty Woman, The Princess Diaries, etc). Great stories, mostly about the positive side of the business. Very inspiring. 8/10

Napoleon by J. Christopher Herold

A biography of Napoleon with a fair amount about the history of the rest of Europe during the period thrown in. A fairly short (11 hours) book with a decent amount of detail, but not exhaustive. 7/10

Storm in a Teacup: The Physics of Everyday Life by Helen Czerski

A good popular science book linking everyday situations and objects with bigger concepts (eg coffee stains to blood tests). A fun listen. 7/10

All These Worlds Are Yours: The Scientific Search for Alien Life by Jon Willis

The author reviews recent developments in the search for life and suggests places it might be found and how missions to search them (he gives himself a $4 billion budget) should be prioritised. 8/10

Ready Player One by Ernest Cline

I’m right in the middle of the demographic for most of the references here so I really enjoyed it. Good voicing by Wil Wheaton too. Story is straightforward but all pretty fun. 8/10



Rondam Ramblings: Trump strips more citizenships

Last month I wrote about how the Trump administration is moving to rescind the citizenship of naturalized U.S. citizens.  Now it is doing the same thing to natural-born citizens (but, of course, only to natural-born citizens with poor parents and brown skin). I'm not sure what is more disturbing, that this is happening, or that it hasn't gotten more attention.  Because once you start to strip


Planet Linux Australia: Russell Coker: Suggestions for Trump Supporters

I’ve had some discussions with Trump supporters recently. Here are some suggestions for anyone who wants to have an actual debate about political issues. Note that this may seem harsh to Trump supporters. But it seems harsh to me when Trump supporters use a social event to try and push their beliefs without knowing any of the things I list in this post. If you are a Trump supporter who doesn’t do these things then please try to educate your fellow travellers, they are more likely to listen to you than to me.

Facts

For a discussion to be useful there has to be a basis in facts. When one party rejects facts there isn’t much point. Anyone who only takes their news from an ideological echo chamber is going to end up rejecting facts. The best thing to do is use fact checking sites of which Snopes [1] is the best known. If you are involved in political discussions you should regularly correct people who agree with you when you see them sharing news that is false or even merely unsupported by facts. If you aren’t correcting mistaken people on your own side then you do your own cause a disservice by allowing your people to discredit their own arguments. If you aren’t regularly seeking verification of news you read then you are going to be misled. I correct people on my side regularly, at least once a week. How often do you correct your side?

The next thing is that some background knowledge of politics is necessary. Politics is not something that you can just discover by yourself from first principles. If you aren’t aware of things like Dog Whistle Politics [2] then you aren’t prepared to have a political debate. Note that I’m not suggesting that you should just learn about Dog Whistle Politics and think you are ready to have a debate, it’s one of many things that you need to know.

Dog whistle politics is nothing new or hidden, if you don’t know about such basics you can’t really participate in a discussion of politics. If you don’t know such basics and think you can discuss politics then you are demonstrating the Dunning-Kruger effect [3].

The Southern Strategy [4] is well known by everyone who knows anything about US politics. You can think it’s a good thing if you wish and you can debate the extent to which it still operates, but you can’t deny it happened. If you are unaware of such things then you can’t debate US politics.

The Civil rights act of 1964 [5] is one of the most historic pieces of legislation ever passed in the US. If you don’t know about it then you just don’t know much about US politics. You may think that it is a bad thing, but you can’t deny that it happened, or that it happened because of the Democratic party. This was the time in US politics when the Republicans became the party of the South and the Democrats became the centrist (possibly left) party that they are today. It is ridiculous to claim that Republicans are against racism because Abraham Lincoln was a Republican. Ridiculous claims might work in an ideological echo chamber but they won’t convince anyone else.

Words Have Meanings

To communicate we need to have similar ideas of what words mean. If you use words in vastly different ways to other people then you can’t communicate with them. Some people in the extreme right claim that because the Nazi party in Germany was the “Nationalsozialistische Deutsche Arbeiterpartei” (“NSDAP”), which translates to English as “National Socialist German Workers Party”, that means that they were “socialists”. Then they claim that “socialists” are “leftist” so therefore people on the left are Nazis. That claim requires using words like “left” and “socialism” in vastly different ways to most people.

Snopes has a great article about this issue [6], I recommend that everyone read it, even those who already know that Nazis weren’t (and aren’t) on the left side of politics.

The Wikipedia page of the Unite the Right rally [7] (referenced in the Snopes article) has a photo of people carrying Nazi flags and Confederate flags. Those people are definitely convinced that Nazis were not left wing! They are also definitely convinced that people on the right side of politics (which in the US means the Republican party) support the Confederacy and oppose equal rights for Afro-American people. If you want to argue that the Republican party is the one opposed to racism then you need to come up with an explanation for how so many people who declare themselves on the right of politics got it wrong.

Here’s a local US news article about the neo-Nazi who had “commie killer” written on his helmet while beating a black man almost to death [8]. Another data point showing that Nazis don’t like people on the left.

In other news, East Germany (the German Democratic Republic) was not a democracy. North Korea (the Democratic People’s Republic of Korea) is not a democracy either. The use of “socialism” by the original Nazis shouldn’t be taken any more seriously than the recent claims by the governments of East Germany and North Korea.

Left vs right is a poor summary of political positions, the Political Compass [9] is better. While Hitler and Stalin have different positions on economics I think that citizens of those countries didn’t have very different experiences, one extremely authoritarian government is much like another. I recommend that you do the quiz on the Political Compass site and see if the people it places in similar graph positions to you are ones who you admire.

Sources of Information

If you are only using news sources that only have material you agree with then you are in an ideological echo chamber. When I recommend that someone look for other news sources what I don’t expect in response is an email analysing a single article as justification for rejecting that entire news site. I recommend sites like the New York Times as having good articles, but they don’t only have articles I agree with and they sometimes publish things I think are silly.

A news source that makes ridiculous claims such as that Nazis are “leftist” is ridiculous and should be disregarded. A news source that merely has some articles you disagree with might be worth using.

Also if you want to convince people outside your group of anything related to politics then it’s worth reading sites that might convince them. I often read The National Review [10], not because I agree with their articles (that is a rare occurrence) but because they write for rational conservatives and I hope that some of the extreme right wing people will find their ideas appealing and come back to a place where we can have useful discussions.

When evaluating news articles and news sources one thing to consider is Occam’s Razor [11]. If an article has a complex and implausible theory when a simpler theory can explain it then you should be sceptical of that article. There are conspiracies but they aren’t as common as some people believe and they are generally of limited complexity due to the difficulty people have in keeping secrets. An example of this is some of the various conspiracy theories about storage of politicians’ email. The simplest explanation (for politicians of all parties) is that they tell someone like me to “just make the email work” and if their IT staff doesn’t push back and refuse to do it without all issues being considered then it’s the IT staff at fault. Stupidity explains many things better than conspiracies. Regardless of the party affiliation, any time a politician is accused of poor computer security I’ll ask whether someone like me did their job properly.

Covering for Nazis

Decent people have to oppose Nazis. The Nazi belief system is based on the mass murder of people based on race and the murder of people who disagree with them. In Germany in the 1930s there were some people who could claim not to know about the bad things that Nazis were doing and they could claim to only support Nazis for other reasons. Neo-Nazis are not about creating car companies like Volkswagen; all they are about is hatred. The crimes of the original Nazis are well known and well documented; it’s not plausible that anyone could be unaware of them.

Mitch McConnell has clearly stated “There are no good neo-Nazis” [12] in clear opposition to Trump. While I disagree with Mitch on many issues, this is one thing we can agree on. This is what decent people do, they work together with people they usually disagree with to oppose evil. Anyone who will support Nazis out of tribal loyalty has demonstrated the type of person they are.

Here is an article about the alt-right meeting to celebrate Trump’s victory where Richard Spencer said “hail Trump, hail our people, hail victory” while many audience members give the Nazi salute [13]. You can skip to 42 seconds in if you just want to see that part. Trump supporters try to claim it’s the “Roman salute”, but that’s not plausible given that there’s no evidence of Romans using such a salute and it was first popularised in Fascist Italy [14]. The Wikipedia page for the Nazi Salute [15] notes that saying “hail Hitler” or “hail victory” was standard practice while giving the salute. I think that it’s ridiculous to claim that a group of people offering the Hitler salute while someone says “hail Trump” and “hail victory” are anything but Nazis. I also think it’s ridiculous to claim to not know of any correlation between the alt-right and Nazis and then immediately know about the “Roman Salute” defence.

The Americans used to have a salute that was essentially the same as the Nazi Salute, the Bellamy Salute was officially replaced by the hand over heart salute in 1942 [16]. They don’t want anything close to a Nazi salute, and no-one did until very recently when neo-Nazis stopped wearing Klan outfits in the US.

Every time someone makes claims about a supposed “Roman salute” explanation for Richard Spencer’s fans I wonder if they are a closet Nazi.

Anti-Semitism

One final note, I don’t debate people who are open about neo-Nazi beliefs. When someone starts talking about a “Jewish Conspiracy” or uses other Nazi phrases then the conversation is over. Nazis should be shunned. One recent conversation with a Trump supporter ended quickly after he started talking about a “Jewish conspiracy”. He tried to get me back into the debate by claiming “there are non-Jews in the conspiracy too” but I was already done with him.

Decent Trump Supporters

If you want me to believe that you are one of the decent Trump supporters a good way to start is to disclaim the horrible ideas that other Trump supporters endorse. If you can say “I believe that black people and Jews are my equal and I will not stand next to or be friends with anyone who carries a Nazi flag” then we can have a friendly discussion about politics. I’m happy to declare “I have never supported a Bolshevik revolution or the USSR and will never support such things” if there is any confusion about my ideas in that regard. While I don’t think any reasonable person would think that I supported the USSR I’m happy to make my position clear.

I’ve had people refuse to disclaim racism when asked. If you can’t clearly say that you consider people of other races to be your equal then everyone will think that you are racist.


Cryptogram: I'm Doing a Reddit AMA

On Thursday, September 6, starting at 10:00 am CDT, I'll be doing a Reddit "Ask Me Anything" in association with the Ford Foundation. It's about my new book, but -- of course -- you can ask me anything.

No promises that I will answer everything....

Sociological Images: The Role of Replays

Our lives are a team effort, often influenced by larger social forces outside of our control. At the same time, we love stories about singular heroes rising to the occasion. Sociologists often argue that focusing too much on individual merit teaches us not to see the rest of the social world at play.

Photo Credit: Vadu Amka, Flickr CC

One of my favorite recent examples of this is a case from Christopher A. Paul’s book, The Toxic Meritocracy of Video Games, about the popular team-based competitive shooter Overwatch. After the match is over and a winning team declared, one player is awarded “play of the game,” and everyone watches an automatically-generated replay. Paul writes that the replay…

… takes one moment out of context and then chooses to only celebrate one of twelve players when the efforts of the other members of the team often make the moment possible…a more dynamic, holistic system would likely be harder to judge and code, which is a problem at the heart of meritocracy. Actually judging skill or effort is ridiculously difficult to do…the algorithm built into selecting what is the play of the game and which statistics will be highlighted rewards only what it can count and judge, stripping out situation and context. (2018, Pp. 34-35)

It isn’t just the computer doing this; there is a whole genre of YouTube videos devoted to top plays and personalized highlight reels from games like Overwatch, Paladins, and Counter Strike: Global Offensive.

 

Paul’s point got me thinking about the structure and culture of replays in general. They aren’t always about a star player. Sometimes we see the execution of a brilliant team play. Other times, it’s all about the star’s slam dunk. But replays do highlight one of the weird structural features about modern competition. Many of the most popular video games in the massive esports industry are team based, but because these are often played in a first-person or limited third person view, replays and highlight reels from these games are often cast from the perspective of a single “star” player.

This is a great exercise for thinking sociologically in everyday life and in the classroom. Watch some replays from your favorite team sport. Are the highlights emphasizing teamwork or are they focusing on a single player’s achievements? Do you think different replays would have been possible without the efforts of the rest of the team? How does the structure of the shot—the composition and perspective—shape the viewer’s interpretation of what happened on the field?

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet Linux Australia: Lev Lafayette: Exploring Issues in Event-Based HPC Cloudbursting

The use of cloud compute, especially in proportion to single-node tasks, provides a more effective allocation of financial resources. The introduction of cloud-bursting to scheduling systems could ideally provide on-demand compute resources for High Performance Computing (HPC) systems, where queue wait-times are a source of user consternation.

Using experiential examples in Slurm's Cloudbursting capability (an extension of the scheduler's power management features), initial successes and bug discoveries highlight the problems of replication and latency that limit the scope of cloudbursting. Nevertheless, under such circumstances wrapper scripts for particular subsets of jobs are still considered viable; an example of this approach is indicated by MOAB/NODUS.

A presentation to HPC:AI 2018 Perth Conference

Cryptogram: Eavesdropping on Computer Screens through the Webcam Mic

Yet another way of eavesdropping on someone's computer activity: using the webcam microphone to "listen" to the computer's screen.

Worse Than Failure: CodeSOD: Why I Hate Conference Swag (and What can be Done About it)

Hey everyone - I'm at a conference this week, so I'd like to cover a WTF that I've been seeing a lot of lately - VENDOR SWAG.

IGC Show trade show floor Aug 2016

Ok, if you are one of those poor souls who are always heads down in code and never attend workshops or conferences, this won't make much sense to you, but here's the deal - companies will set up a booth or a table and will pass out swag in exchange for your contact info and (possibly) a lead. To me, this is easily the dirtiest transaction one can make and rife with inequalities. Your information is super valuable - as in possibly generating thousands or millions of sales dollars. The 'gift' will be SUPER cheap by comparison.

How cheap? WELL, here are some examples:

  1. Pens - My bag runneth over with pens. Really, I'm fine, I don't need any more. Also, if you must make that your swag, don't make me 'work' for a pen. Really? I'm not going to sit through a session about hybridized lizards in the cloud for your crummy mass ordered pens.

  2. Buttons - If the button is made of money, and doesn't have a pin in the back, and it's actually just a $5 bill, then yeah, sure. Other than that? Big nope.

  3. Poor quality shirts or outright badly designed shirts - I know what I'm talking about here. Give me a shirt that YOU'D like to wear, and I don't mean just because you work there. Also - when you hand me a shirt that's made out of tissue paper thin material, it's like "Hey, let's not kid each other - you're just going to turn this thing into a rag."

  4. Four words - Fun Size Candy Bars - If you do this, I hate you on sight. You didn't plan ahead and ended up hitting Target on your way from the airport. Just because vendor halls are like Halloween trick-or-treating doesn't mean you have to handle it that way. Shame on you and your company for dialing that one in.

True - I can, and certainly do, just say "no thanks" to that miniature Butterfinger. But swag ought to be good: an equal exchange, not something more like selling your soul for a pen.

So, with that said...the solution? Enter Virtual Swag or vSwag.

This year, I participated in my first Hackathon ever, and our team's theme was Blockchain. After registering we were given a while to prepare, which I took advantage of to research the topic - and for me, that was pretty fun (though we actually only formed a team after arriving).

The way we planned it, conference attendees can collect vSwag tokens from vendors and redeem them for swag items.

Below is the Solidity file I fudged together based heavily on the Ethereum Crowdsale example, not knowing anything before the Hackathon (hence my 'horrible' comment). Now, if you actually know what you're doing (not me) and want to contribute, the project is out on Github or if you want to skip to the good stuff (and like memes) check out the PowerPoint (*chef kiss*).


pragma solidity ^0.4.18;

/* This is horrible code. */

contract vSwag {
    address public vendorName;
    uint public swagGoal;
    uint public amountRaised;
    uint public deadline;
    uint public price;
    //token public tokenReward;
    mapping(address => uint256) public balanceOf;
    bool swagGoalReached = false;
    bool vSwagClosed = false;

    event GoalReached(address recipient, uint totalAmountRaised);
    event SwagswagToken(address backer, uint amount, bool isContribution);

	 /* This creates an array with all balances */
   // mapping (address => uint256) public balanceOf;
	
    /**
     * Constructor function
     *
     * Setup the owner
     */
    function vSwagConstructor(
        address ifSuccessfulSendTo,
        uint swagGoalInEthers,
        uint durationInMinutes,
        uint etherCostOfEachToken,
        address addressOfTokenUsedAsReward
    ) public {
        vendorName = ifSuccessfulSendTo;
        swagGoal = swagGoalInEthers * 1 ether;
        deadline = now + durationInMinutes * 1 minutes;
        price = etherCostOfEachToken * 1 ether;
		balanceOf[msg.sender] = 1000;              // How much swag coin you got, captain?

    }

	/* Send swag */
    function transfer(address _to, uint256 _value) public returns (bool success) {
        require(balanceOf[msg.sender] >= _value);           // Check if the sender has enough
        require(balanceOf[_to] + _value >= balanceOf[_to]); // Check for overflows
        balanceOf[msg.sender] -= _value;                    // Subtract from the sender
        balanceOf[_to] += _value;                           // Add the same to the recipient
        return true;
    }
	
    /**
     * Fallback function
     *
     * The function without name is the default function that is called whenever anyone sends funds to a contract
     */
    function () payable public {
        uint amount = msg.value;
        balanceOf[msg.sender] += amount;
        amountRaised += amount;
        transfer(msg.sender, amount / price);
       emit SwagswagToken(msg.sender, amount, true);
    }

    modifier afterDeadline() { if (now >= deadline) _; }

    /**
     * Check if goal was reached
     *
     * Checks if the goal or time limit has been reached and ends the campaign
     */
    function checkGoalReached() public afterDeadline {
        if (amountRaised >= swagGoal){
            swagGoalReached = true;
            emit GoalReached(vendorName, amountRaised);
        }
    }


    /**
     * Decrement Swag
     *
     */
    function safeWithdrawal() public afterDeadline {
        if (!swagGoalReached) {
            uint amount = balanceOf[msg.sender];
            balanceOf[msg.sender] = 0;
            if (amount > 0) {
                if (msg.sender.send(amount)) {
                   emit SwagswagToken(msg.sender, amount, false);
                } else {
                    balanceOf[msg.sender] = amount;
                }
            }
        }

        if (swagGoalReached && vendorName == msg.sender) {
            if (vendorName.send(amountRaised)) {
               emit SwagswagToken(vendorName, amountRaised, false);
			} else {
				//something something...give back swag coin
                swagGoalReached = false;
            }
		
        }
    }
}
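
Just to illustrate the redemption flow described above, here's a rough attendee-side sketch using web3.js 1.x; the abi, contractAddress, attendee and vendorBooth values are placeholders rather than anything from our actual Hackathon project. In practice the vendor would probably be the one burning tokens for physical swag, but the contract above only exposes a plain transfer, so that's what the sketch uses.

// Rough sketch: an attendee spends vSwag tokens at a vendor booth (web3.js 1.x assumed).
// abi and contractAddress would come from compiling and deploying the vSwag contract above;
// they are placeholders here, as are the attendee and vendorBooth addresses.
const Web3 = require('web3');

async function redeemSwag(abi, contractAddress, attendee, vendorBooth) {
    const web3 = new Web3('http://localhost:8545');            // placeholder node endpoint
    const vswag = new web3.eth.Contract(abi, contractAddress);

    // Send 5 vSwag tokens from the attendee to the vendor's booth address.
    await vswag.methods.transfer(vendorBooth, 5).send({ from: attendee });

    // Check how much vSwag the attendee has left.
    const remaining = await vswag.methods.balanceOf(attendee).call();
    console.log('Remaining vSwag balance:', remaining);
}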

So, vendors, hear me now - steal this idea!


Sam Varghese: All Blacks win because they have developed a winning culture

For the last 16 years, New Zealand has been winning the annual Bledisloe Cup rugby union competition against Australia; 2002 was the last time the All Blacks lost it. The Cup is a symbol of rugby supremacy and, for the two countries involved, the next most prized trophy after the World Cup itself.

Over the last few years, every time the games approach, the Australian media hype up the chances of their national team and for the uninitiated, it would appear to be some kind of equal contest. But in the end, New Zealand always runs away with the trophy, though some games can indeed be close.

Last year, for example, New Zealand came to Sydney for the first game as usual. By half-time, despite predictions of a close game being in the offing, New Zealand was ahead 40-6. The game ended in a 54-34 win to the All Blacks.

The following week, surprisingly, Australia was well-placed to win until the last few minutes, having scored three tries at the beginning of the game before New Zealand could even get one. But with about five minutes left, and trailing 28-29, New Zealand scored a try and won 35-29.

There is generally a third game most years, more a means of raising money than anything else, as it is played in some third country – Japan or Hong Kong, for example. Australia won that third fixture last year, but by then the Cup had already been retained by the Kiwis.

This year, New Zealand won the first two games by resounding margins – 38-13 and 40-12. The third game will be in October.

Every time Australia loses, the local media advance various reasons: a lack of fitness, inability to do well in the set-pieces, loose kicking, missed tackles, and so on.

The one thing they never seem to understand is that the All Blacks’ superior communication skills, which come from the culture they have developed as their own, are what make them better players overall, give them the mental edge and enable them to keep winning.

That culture did not come about on its own. The Kiwis have been working on it since 2004, after a 40-26 loss to South Africa. Senior players were distraught at the time, with one, Malili Muliaina, even saying that he would like to quit as he was not enjoying it.

The coaches of that time, Graham Henry and Wayne Smith, along with Gilbert Enoka, who is still with the team and serves as a kind of father figure and guide, helped the players devise a system in which they recognise each other’s abilities, respect each other and play for each other.

The players have learnt to take responsibility for themselves. Indeed, from the Thursday of a week with a Saturday game, the players are on their own: the senior players formulate a game plan based on what they have discussed with the coaches earlier in the week, and out on the field it is the players’ call.

You never see the head coach anywhere near the players during the game; he is up in his box. The players take the decisions and run the show. This has led to remarkable results: the All Blacks have always been successful, but since 2004 they have been even more so.

Right now they have a winning percentage somewhere in the mid-80s, by far the best of any team in any sport.

Their culture is based on that of the Maori, the warrior tribes who were the first inhabitants of New Zealand. In what is a truly remarkable practice, New Zealanders sing their national anthem first in Maori and then in English. And before every game, the All Blacks perform a haka, a Maori challenge. They even have their own distinctive haka, devised by the players together with Maori writers.

The old saying goes that the strength of the pack is the wolf and, conversely, the strength of the wolf is the pack. The All Blacks have standards which they have accepted and which are therefore enforced by the players themselves. This culture keeps them strong; it helps them when they debrief, helps them forgive the errors of fellow players, and helps when they analyse a game after it is over.

There is a lot on the shoulders of a group which is mostly in their mid- and late-20s, with a few being on the other side of 30. Since they are bound by a common fate — they are a massive financial asset to the country and countries where the sport is popular pay big money to have the All Blacks play there — they have to maintain standards and keep winning, playing attractive rugby.

I have been watching them since 1995, when I saw them play in the World Cup that year. The fast, free-flowing game was beautiful to watch and their skills were out of this world. That was the year when the late Jonah Lomu made the tournament his own, especially the semi-final against England, in which he scored four tries and ran right through Mike Catt en route to one of them.

But there were ups and downs until the team got together with the coaching staff in 2004 and started formulating the means whereby they would strengthen their beliefs in their own abilities and learn to trust each other. Since 2011, when they won their second World Cup, they have gone from strength to strength, with only the occasional loss.

When will Australia realise that it is not merely a better team that is beating them time and again? The ad hoc methods adopted in Australia do not amount to a system and will never forge anything like the culture which the All Blacks have built for themselves.

What’s more, the same structure and methods have been adopted by all five teams from New Zealand that take part in the premier rugby tournament in the region, Super Rugby. Thus, all players in the senior game learn this culture and, as a result, if they are called up for national duty, it does not take much tuning to fit in with the established All Blacks.

At times, players do not fit in. A case in point is Zac Guildford, a talented winger, who had problems with alcohol. He played 23 Tests between 2008 and 2013 but was dropped after that due to frequent disciplinary problems.

Planet Linux AustraliaMatthew Oliver: Keystone Federated Swift – Multi-region cluster, multiple federation, access same account

Welcome to the final post in the series; it has been a long time coming. If required/requested I’m happy to delve into any of these topics more deeply, but here I’ll attempt to explain the situation, the best approach to take, and how I got a POC working, which I am calling the brittle method. It definitely isn’t the best approach, but as it was done solely on the Swift side, and as I am an OpenStack Swift dev, it was the quickest and easiest for me when preparing for the presentation.

To first understand how we can build a federated environment where we have access to our account no matter where we go, we need to learn about how keystone authentication works from a Swift perspective. Then we can look at how we can solve the problem.

Swift’s Keystoneauth middleware

As mentioned in earlier posts, there isn’t any magic in the way Swift authentication works. Swift is an end-to-end storage solution, so authentication is handled via authentication middlewares. Further, a single Swift cluster can talk to multiple auth backends, which is where the `reseller_prefix` comes into play. This was the first approach I blogged about in this series.

 

There is nothing magical about how authentication works; keystoneauth has its own idiosyncrasies, but in general it simply makes a decision about whether a given request should be allowed. That makes writing your own middleware simple, and maybe an easy way around the problem, i.e. write an auth middleware that authenticates directly against your existing company LDAP server or authentication system.

 

To set up keystone authentication, you use Keystone’s authtoken middleware and, directly after it in the pipeline, Swift’s keystoneauth middleware, configuring each of them in the proxy configuration:

pipeline = ... authtoken keystoneauth ... proxy-server

The authtoken middleware

Generally every request to Swift will include a token, unless it’s using tempurl, container-sync, or going to a container that has global read enabled, but you get the point.

As the swift-proxy is a Python WSGI app, the request first hits the first middleware in the pipeline (left most) and works its way to the right. When it hits the authtoken middleware, the token in the request is sent to keystone to be authenticated.

The resulting metadata, i.e. the user, storage_url, groups, roles etc., is dumped into the request environment and then passed to the next middleware: the keystoneauth middleware.

The keystoneauth middleware

The keystoneauth middleware checks the request environment for the metadata dumped by the authtoken middleware and makes a decision based on that. Things like:

  • If the token carried one of the reseller_admin roles, then the user has access.
  • If the user isn’t a Swift user of the account/project the request is for, is there an ACL that allows the access?
  • If the user has a role that identifies them as a Swift user/operator of this Swift account, then great.

 

When checking whether the user has access to the given account (Swift account), it needs to know what account the request is for. This is easily determined, as it’s defined by the path of the URL you’re hitting. The URL you send to the Swift proxy is what we call the storage url, and it is of the form:

http(s)://<url of proxy or proxy vip>/v1/<account>/<container>/<object>

The container and object elements are optional, depending on what you’re trying to do in Swift. When the keystoneauth middleware is authenticating, it’ll check that the project_id (or tenant_id) metadata dumped by authtoken, concatenated with the reseller_prefix, matches the account in the given storage url. For example, let’s say the following metadata was dumped by authtoken:

{
    "X_PROJECT_ID": "abcdefg12345678",
    "X_ROLES": "swiftoperator",
    ...
}

If the reseller_prefix for keystoneauth is AUTH_, and we make any member of the swiftoperator role (in keystone) a Swift operator (a Swift user on the account), then keystoneauth will allow access if the account in the storage URL matches AUTH_abcdefg12345678.
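
To make that check concrete, here is a minimal Python sketch of the kind of decision keystoneauth makes. It is illustrative only, not the real middleware: the hard-coded reseller prefix is an assumption, and the real middleware reads slightly different (and more) environment keys than the simplified ones used here.

# Minimal, illustrative sketch of a keystoneauth-style account check.
RESELLER_PREFIX = "AUTH_"   # assumed; must match the prefix configured for keystoneauth

def account_from_path(path):
    # storage URLs look like /v1/<account>/<container>/<object>
    parts = path.lstrip("/").split("/")
    return parts[1] if len(parts) > 1 else None

def allow_request(environ, operator_role="swiftoperator"):
    project_id = environ.get("X_PROJECT_ID", "")     # dumped by authtoken
    roles = environ.get("X_ROLES", "").split(",")    # dumped by authtoken
    account = account_from_path(environ.get("PATH_INFO", ""))
    # the account the token grants access to is simply reseller_prefix + project_id
    return operator_role in roles and account == RESELLER_PREFIX + project_id

env = {
    "PATH_INFO": "/v1/AUTH_abcdefg12345678/some-container/some-object",
    "X_PROJECT_ID": "abcdefg12345678",
    "X_ROLES": "swiftoperator",
}
print(allow_request(env))   # True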

 

When you authenticate to keystone, the object storage endpoint returned in the catalog points not only to the Swift endpoint (the swift proxy or swift proxy load balancer), it also includes your account, based on your project_id. More on this soon.

 

Does that make sense? Simply put, to use keystoneauth in a multi-federated environment we just need to make sure that, no matter which keystone we end up using, asking for the Swift endpoint always returns the same Swift account name.

And therein lies our problem: both the keystone object storage endpoint and the metadata authtoken dumps use the project_id/tenant_id, which isn’t something that is synced or can be passed via federation metadata.

NOTE: This also means that you’d need to use the same reseller_prefix on all keystones in every federated environment. Otherwise the accounts won’t match.

 

Keystone Endpoint and Federation Side

When you add an object storage endpoint in keystone for Swift, the URL looks something like:

http://swiftproxy:8080/v1/AUTH_$(tenant_id)s

 

Notice the $(tenant_id)s at the end? This is a placeholder that keystone will internally replace with the tenant_id of the project you authenticated as. $(project_id)s can also be used and maps to the same thing. And this is our problem.
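
The substitution itself is nothing magical; conceptually it amounts to simple string templating along the lines of the sketch below. This is an illustration of the idea only, not keystone’s actual code.

# Illustrative only: roughly what expanding a keystone endpoint template amounts to.
def expand_endpoint(template, project_id):
    # turn the $(tenant_id)s / $(project_id)s placeholders into Python %-style ones
    url = template.replace("$(", "%(")
    return url % {"tenant_id": project_id, "project_id": project_id}

print(expand_endpoint("http://swiftproxy:8080/v1/AUTH_$(tenant_id)s",
                      "75294565521b4d4e8dc7ce77a25fa14b"))
# http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b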

When setting up federation between keystones (assuming keystone-to-keystone federation) you generate a mapping. This mapping can include the project name, but not the project_id. These ids are auto-generated, not derived deterministically from the name, so creating the same project on different federated keystone servers will result in different project_ids. When a keystone service provider (SP) federates with a keystone identity provider (IdP), the mapping they share describes how the provider should map federated users locally. This includes creating a shadow project, if one doesn’t already exist, for the federated user to be part of.

Because there is no way to sync project_ids in the mapping, the SP will create the project itself, and it will have a new, unique project_id. That means when the federated user has authenticated, their Swift storage endpoint from keystone will be different; as far as Swift is concerned they will have access, but to a completely different Swift account. Let’s use an example: say there is a project on the IdP called ProjectA.

           project_name        project_id
  IdP      ProjectA            75294565521b4d4e8dc7ce77a25fa14b
  SP       ProjectA            cb0d5805d72a4f2a89ff260b15629799

Here we have a ProjectA on both the IdP and the SP. The one on the SP would be considered a shadow project to map the federated user to. However, the project_ids are different, because they are uniquely generated when the project is created in each keystone environment. Taking the object storage endpoint in keystone from our earlier example, we get:

 

          Object Storage Endpoint
  IdP     http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b
  SP      http://swiftproxy:8080/v1/AUTH_cb0d5805d72a4f2a89ff260b15629799

So when talking to Swift you’ll be accessing different accounts, AUTH_75294565521b4d4e8dc7ce77a25fa14b and AUTH_cb0d5805d72a4f2a89ff260b15629799 respectively. This means objects you write in one federated environment will be placed in a completely different account, so you won’t be able to access them from elsewhere.

 

Interesting ways to approach the problem

As I stated earlier, the solution is simply to always return the same storage URL no matter which federated environment you authenticate to. But how?

  1. Make sure the same project_id/tenant_id is used for _every_ project with the same name, or at least for projects with the same name in the domains the federation mapping maps to. This means direct DB hacking, so it isn’t a good solution; we should solve this in code, not make ops go hack databases.
  2. Have a unique id for projects/tenants that can be synced in the federation mapping, and also make it available in the keystone endpoint template mapping, so there is a consistent Swift account to use. Hey, we already have project_id, which meets all the criteria except being mappable, so that would be the easiest and best.
  3. Use something that _can_ be synced in a federation mapping, like domain and project name. Except these don’t map to endpoint template mappings. But with a bit of hacking that should be fine.

Of the above approaches, 2 would be the best. 3 is good, except that if you pick something mutable like the project name and you ever change it, you’d then authenticate to a completely different Swift account, meaning you’d have just lost access to all your old objects! You may also find yourself with grumpy Swift ops who now need to do a potentially large data migration, or you’d be forced to never change your project name.

Option 2, being unique, won’t change, though the project id doesn’t make for a very memorable account name. Maybe you could offer people a more memorable immutable project property to use instead. But to keep the change simple, being able to simply sync the project_id should get us everything we need.

 

When I was playing with this it was for a presentation, so I had a time limit, a very strict one. Being a Swift developer and knowing the Swift code base, I hacked together a variant of option 3 that didn’t involve hacking keystone at all. Why? Because I needed a POC and didn’t want to spend most of my time figuring out the inner workings of keystone when I could just do a few hacks to have a complete Swift-only version. And it worked. Though I wouldn’t recommend it; option 3 is very brittle.

 

The brittle method – Swift only side – Option 3b

Because I didn’t have time to simply hack keystone, I took a different approach. The basic idea was to let authtoken authenticate and then finish building the storage URL on the swift side using the meta-data authtoken dumps into wsgi request env. Thereby modifying the way keystoneauth authenticates slightly.

Step 1 – Give the keystoneauth middleware the ability to complete the storage url

By default we assume the incoming request will point to a complete account, meaning the object storage endpoint in keystone will end with something like:

'<uri>/v1/AUTH_%(tenant_id)s'

So let’s enhance keystoneauth to have the ability to if given only the reseller_prefix to complete the account. So I added a use_dynamic_reseller option.

If you enable use_dynamic_reseller, the keystoneauth middleware will pull the project_id from authtoken‘s metadata dumped in the WSGI environment. This allows a simplified keystone endpoint of the form:

'<uri>/v1/AUTH_'

This shortcut makes configuration easier, but it can only be used reliably when you are on your own account and providing a token. API elements like tempurl and public containers need the full account in the path.

This still uses project_id, so it doesn’t solve our problem, but it meant I could get rid of the $(tenant_id)s from the endpoints. Here is the commit in my github fork.
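
In essence the change boils down to something like the following sketch. It is simplified and illustrative rather than the actual patch; the option name use_dynamic_reseller is from the fork, but the structure and environment keys here are made up for the example.

# Simplified sketch of "dynamic reseller" account completion (not the real patch).
RESELLER_PREFIX = "AUTH_"

def complete_account(account_in_path, environ, use_dynamic_reseller=True):
    # If the path only carries the reseller prefix, finish the account off with
    # the project_id that authtoken dumped into the request environment.
    if use_dynamic_reseller and account_in_path == RESELLER_PREFIX:
        return RESELLER_PREFIX + environ.get("X_PROJECT_ID", "")
    return account_in_path

env = {"X_PROJECT_ID": "abcdefg12345678"}
print(complete_account("AUTH_", env))                  # AUTH_abcdefg12345678
print(complete_account("AUTH_abcdefg12345678", env))   # unchanged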

Step 2 – Extend the dynamic reseller to include completing storage url with names

Next, we extend the keystoneauth middleware a little more: give it another option, use_dynamic_reseller_name, to complete the account with either project_name, or domain_name and project_name, but only if you’re using keystone authentication version 3.

If you are, and you want the account to be based on the name of the project, you can enable use_dynamic_reseller_name in conjunction with use_dynamic_reseller. The form used for the account is:

<reseller_prefix><project_domain_name>_<project_name>

So using our previous example, with a reseller_prefix of AUTH_, a project_domain_name of Domain and a project name of ProjectA, this would generate the account:

AUTH_Domain_ProjectA

This patch is also in my github fork.
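
Again, the logic is roughly the following. This is a sketch only, with made-up environment keys; the real change is the patch in the fork linked above.

# Illustrative sketch of name-based dynamic reseller completion.
RESELLER_PREFIX = "AUTH_"

def complete_account_by_name(account_in_path, environ):
    if account_in_path != RESELLER_PREFIX:
        return account_in_path   # already a full account
    # with keystone v3, authtoken also exposes the project and domain names
    domain = environ.get("X_PROJECT_DOMAIN_NAME", "")
    project = environ.get("X_PROJECT_NAME", "")
    return "%s%s_%s" % (RESELLER_PREFIX, domain, project)

env = {"X_PROJECT_DOMAIN_NAME": "Domain", "X_PROJECT_NAME": "ProjectA"}
print(complete_account_by_name("AUTH_", env))   # AUTH_Domain_ProjectA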

Does this work? Yes! But as I’ve already mentioned, it is _very_ brittle. It also makes it confusing to know when you need to provide only the reseller_prefix and when you need your full account name. It would be so much easier to just extend keystone to sync and create shadow projects with the same project_id. Then everything would just work without hacking.

Planet Linux AustraliaClinton Roy: Moving to Melbourne

Now that the paperwork has finally all been dealt with, I can announce that I’ll be moving down to Melbourne to take up a position with the Australian Synchrotron, basically a super duper x-ray machine used for research of all types. My official position is Senior Scientific Software Engineer. I’ll be moving down to Melbourne shortly, staying with friends (you remember that offer you made, months ago?) until I find a rental near Monash Uni, Clayton.

I will be leaving behind Humbug, the computer group that basically opened up my entire career, and The Edge, SLQ, my home-away-from-home study. I do hope to be able to find replacements for these down south.

I’m looking at having a small farewell nearby soon.

A shout out to Netbox Blue for supplying all my packing boxes. Allll of them.

Planet Linux AustraliaOpenSTEM: This Week in Australian History

The end of August and beginning of September is traditionally linked to the beginning of Spring in Australia, although the change in seasons is experienced in different ways in different parts of the country and was marked in locally appropriate ways by Aboriginal people. As a uniquely Australian celebration of Spring, National Wattle Day, celebrated […]

Planet Linux AustraliaPia Waugh: Mā te wā, Aotearoa

Today I have some good news and sad news. The good news is that I’ve been offered a unique chance to drive “Digital Government” Policy and Innovation for all of government, an agenda including open government, digital transformation, technology, open and shared data, information policy, gov as a platform, public innovation, service innovation and policy innovation. For those who know me, these are a few of my favourite things :)

The sad news, for some folk anyway, is I need to leave New Zealand Aotearoa to do it.

Over the past 18 months (has it only been that long!) I have been helping create a revolutionary new way of doing government. We have established a uniquely cross-agency funded and governed all-of-government function, a “Service Innovation Lab”, for collaborating on the design and development of better public services for New Zealand. By taking a “life journey” approach, government agencies have a reason to work together to improve the full experience of people rather than the usual (and natural) focus on a single product, service or portfolio. The Service Innovation Lab has a unique value in providing an independent place and way to explore design-led and evidence-based approaches to service innovation, in collaboration with service providers across public, private and non-profit sectors. You can see everything we’ve done over the past year here  and from the first 10 week experiment here. I have particularly enjoyed working with and exploring the role of the Citizen Advice Bureau in New Zealand as a critical and trusted service delivery organisation in New Zealand. I’m also particularly proud of both our work in exploring optimistic futures as a way to explore what is possible, rather than just iterate away from pain, and our exploration of better rules for government including legislation as code. The next stage for the Lab is very exciting! As you can see in the 2017-18 Final Report, there is an ambitious work programme to accelerate the delivery of more integrated and more proactive services, and the team is growing with new positions opening up for recruitment in the coming weeks!

Please see the New Zealand blog (which includes my news) here

Professionally, I get most excited about system transformation. Everything we do in the Lab is focused on systemic change, and it is doing a great job at having an impact on the NZ (and global) system around it, especially for its size. But a lot more needs to be done to scale both innovation and transformation. Personally, I have a vision for a better world where all people have what they need to thrive, and I feel a constant sense of urgency in transitioning our public institutions into the 21st century, from an industrial age to the information age, so they can more effectively support society as the speed of change and complexity exponentially grows. This is going to take a rethink of how the entire system functions, especially at the policy and legislative levels.

With this in mind, I have been offered an extraordinary opportunity to explore and demonstrate systemic transformation of government. The New South Wales Department of Finance, Services and Innovation (NSW DFSI) has offered me the role of Executive Director for Digital Government, a role responsible for the all-of-government policy and innovation for ICT, digital, open, information, data, emerging tech and transformation, including a service innovation lab (DNA). This is a huge opportunity to drive systemic transformation as part of a visionary senior leadership team with Martin Hoffman (DFSI Secretary) and Greg Wells (GCIDO). I am excited to be joining NSW DFSI, and the many talented people working in the department, to make a real difference for the people of NSW. I hope our work and example will help raise the bar internationally for the digital transformation of governments for the benefit of the communities we serve.

Please see the NSW Government media release here.

One of the valuable lessons from New Zealand that I will be taking forward in this work has been in how public services can (and should) engage constructively and respectfully with Indigenous communities, not just because they are part of society or because it is the right thing to do, but to integrate important principles and context into the work of serving society. Our First Australians are the oldest cluster of cultures in the world, and we have a lot to learn from them in how we live and work today.

I want to briefly thank the Service Innovation team, each of whom is utterly brilliant and inspiring, as well as the wonderful Darryl Carpenter and Karl McDiarmid for taking that first leap into the unknown to hire me and see what we could do. I think we did well :) I’m delighted that Nadia Webster will be taking over leading the Lab work and has an extraordinary team to take it forward. I look forward to collaborating between New Zealand and New South Wales, and a race to the top for more inclusive, human-centred, digitally enabled and values-driven public services.

My last day at the NZ Government Service Innovation Lab is the 14th September and I start at NSW DFSI on the 24th September. We’ll be doing some last celebratory drinks on the evening of the 13th September so hold the date for those in Wellington. For those in Sydney, I can’t wait to get started and will see you soon!

,

CryptogramCheating in Bird Racing

I've previously written about people cheating in marathon racing by driving -- or otherwise getting near the end of the race by faster means than running. In China, two people were convicted of cheating in a pigeon race:

The essence of the plan involved training the pigeons to believe they had two homes. The birds had been secretly raised not just in Shanghai but also in Shangqiu.

When the race was held in the spring of last year, the Shanghai Pigeon Association took all the entrants from Shanghai to Shangqiu and released them. Most of the pigeons started flying back to Shanghai.

But the four specially raised pigeons flew instead to their second home in Shangqiu. According to the court, the two men caught the birds there and then carried them on a bullet train back to Shanghai, concealed in milk cartons. (China prohibits live animals on bullet trains.)

When the men arrived in Shanghai, they released the pigeons, which quickly fluttered to their Shanghai loft, seemingly winning the race.

Worse Than FailureKeeping Up Appearances

Just because a tool is available doesn't mean people will use it correctly. People have abused booleans, dates, enums, databases, Go-To's, PHP, reinventing the wheel and even Excel to the point that this forum will never run out of material!

Bug and issue trackers are Good Things™. They let you keep track of multiple projects, feature requests, and open and closed problems. They let you classify the issues by severity/urgency. They let you specify which items are going into which release. They even let you track who did the work, as well as all sorts of additional information.

[Image: an optical illusion in which two squares that are actually the same color appear to be different colors]

Every project, no matter how big or small, should make use of them.

Ideally, they would be used correctly.

Ideally.

Matt had just released the project that he'd been working on for the past few years. As always happens, some "issues" cropped up. Some were genuine defects. Some were sort-of-enhancements based on the fact that a particular input screen was unwieldy and needed to be improved. At least one was a major restructuring of a part of the project that did not flow too well. In all, Matt had seven issues that needed to be addressed. Before he could deliver them, he needed defect tickets in the bug tracking system. He wasn't authorized to raise such tickets; only the Test team could do that.

It took a while, but he finally received a ticket to do the work... but everything had been bundled up into one ticket. This made it very difficult to work with, because now he couldn't just clear each issue as he went. Instead, he had to hold them all in abeyance, so to speak, and only release them when they were all complete. This also meant that the test documentation, providing instructions to the Test team on how to ensure that each fix was working as required, all had to be bundled up into one big messy document, making it more difficult for the Testers to do their job as well.

So he questioned them: If we can raise a single defect ticket, then what stops us from raising all 7 needed tickets so that the issues can be addressed separately? The answer was: Because these defects appear post-release, it is clear that they weren't caught at the pre-release stage, which means that Somebody Wasn't Doing Their Job Properly in the Test team, which makes them Look Bad; it brings their failure to catch the issues to the attention of Management.

In other words, in order to keep the Test team from looking bad, they would only ever raise a single ticket (encompassing all detected issues) for any given release.

Imagine if all of the bugs from your last major release were assigned to you personally, in a single ticket. Good luck estimating, scheduling, coding, debugging and documenting how to test the single logical bug!

Matt raised this bad practice with management, and explained that while the reason they do this is to hide their inadequacy, it also prevents any meaningful way to control work distribution, changes and subsequent testing. It also obscures the actual number of issues (Why is it taking you seven weeks to fix one issue?).

Management was not amused at having been misled.


Planet Linux AustraliaDavid Rowe: Simple Keras “Hello World” Example – Mean Removal

Inspired by the Wavenet work with Codec 2 I’m dipping my toe into the world of Deep Learning (DL) using Keras. I’ve read Deep Learning with Python (quite an enjoyable read) and set up a Linux box with a GTX graphics card that is making my teenage sons weep with jealousy.

So after a couple of days of messing about here is my first “hello world” Keras example: mean_removal.py. It might be helpful for other Keras noobs. Assuming you have all the packages installed, it runs with either Python 2:

$ python mean_removal.py

Or Python 3:

$ python3 mean_removal.py

It removes the mean from vectors, using just a single layer regression model. The script runs OK on a regular PC without a chunky graphics card.

So I start by generating vectors from random numbers with a zero mean. I then add a random offset to each sample in the vector. Here are 5 vectors with random offsets added to them:

The Keras script is then trained to estimate and remove the offsets, so the output vectors look like:

Estimating the offset is the same as finding the “mean” of the vector. Yes I know we can do that with a “mean” function, but where’s the fun in that!
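
For anyone who wants the gist without opening the script, here is a cut-down sketch of the same idea. It is not the actual mean_removal.py; the vector length, offset range and training settings below are made up for illustration.

# Cut-down sketch of the mean-removal idea (not the real mean_removal.py).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

N = 16            # length of each vector (made up for this sketch)
n_train = 10000

# zero-mean random vectors ...
x_clean = np.random.randn(n_train, N)
x_clean -= x_clean.mean(axis=1, keepdims=True)

# ... then add a random offset to every sample in each vector
offsets = 10.0 * np.random.rand(n_train, 1)
x_offset = x_clean + offsets

# a single linear layer is enough: removing the mean is just a linear map
model = Sequential()
model.add(Dense(N, activation='linear', input_shape=(N,)))
model.compile(loss='mse', optimizer='adam')   # wind the learning rate down if you see NaN losses
model.fit(x_offset, x_clean, epochs=10, batch_size=32, validation_split=0.1)

# the trained network should give back (approximately) zero-mean vectors
print(model.predict(x_offset[:5]).mean(axis=1))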

Here are some other plots that show the training and validation measures, and error metrics at the output:



The last two plots show that pretty much all of the offset is removed and the network restores the original (non-offset) vectors with just a tiny bit of noise. I had to wind back the learning rate to get it to converge without “NaN” losses, possibly because I’m using fairly big input numbers. I’m familiar with the idea of a learning rate from NLMS adaptive filters, such as those used in my work on echo cancellation.

Deep Learning for Codec 2

My initial ambitions for DL are somewhat more modest than the sample-by-sample synthesis used in the Wavenet work. I have some problems with Vector Quantisation (VQ) in the low rate Codec 2 modes. The VQ is used to compactly describe the speech spectrum, which carries the intelligibility of the signal.

The VQ gets upset with different microphones, speakers, or minor spectral shaping like gentle high pass/low pass filtering. This shaping often causes a poor vector to be chosen, which results in crappy speech quality. The current VQ error measure can’t tell the difference between spectral features that matter and those that don’t.

So I’d like to try DL to address those issues, and train a system to say “look, this speech and this speech are actually the same. Yes I know one of them has a 3dB/octave low pass filter, please ignore that”.

As emphasised in the text book above, some feature extraction can help with DL problems. For my first pass I’ll be working on parameters extracted by the Codec 2 model (like a compact version of the speech spectrum) rather than speech samples like Wavenet. This will reduce my CPU load significantly, at the expense of speech quality, which will be limited by the unquantised Codec 2 model. But that’s OK as a first step. A notch or two up on Codec 2 at 700 bit/s would be very useful, especially if it can run on a CPU from the first two decades of the 21st century.

Mean Removal on Speech Vectors

So to get started with Keras I chose mean removal. The mean level, or constant offset, is like the volume or energy in a speech signal; it’s the simplest form of spectral shaping I could imagine. I trained and tested it with vectors of random numbers, using numbers in the range of the speech spectral samples that Codec 2 plays with.

It’s a bit like an equaliser: vectors with arbitrary spectral shaping go in, “flat” unshaped vectors come out. They can then be sent to a Vector Quantiser. There are probably smarter ways to do this, but I need to start somewhere.

So as a next step I tried offset removal with vectors that represent the spectrum of a 40 ms speech frame:


This is pretty cool – the network was trained on random numbers but works well with real speech frames. You can also see the spectral slope I mentioned above: the speech energy gradually falls off at high frequencies. This doesn’t affect the intelligibility of the speech, but it tends to upset traditional Vector Quantisers. Especially mine.

Now that I have something super-basic working, the next step is to train and test networks to deal with some non-trivial spectral shaping.

Reading Further

Deep Learning with Python
WaveNet and Codec 2
Codec 2 700C, the current Codec 2 700 bit/s mode. With better VQ we can improve on this.
Codec 2 at 450 bit/s, some fine work from Thomas and Stefan, that uses a form of machine learning to synthesise 16 kHz speech from 8 kHz inputs.
FreeDV 700D, the recently released FreeDV mode that uses Codec 2 700C. A FreeDV Mode also includes a modem, FEC, protocol.
RNNoise: Learning Noise Suppression, Jean-Marc’s DL network for noise reduction. Thanks Jean-Marc for the brainstorming emails!

Planet Linux AustraliaMichael Still: What’s missing from the ONAP community — an open design process


I’ve been thinking a fair bit about ONAP and its future releases recently. This is in the context of trying to implement a system for a client which is based on ONAP. It’s really hard though, because it’s difficult to determine how the various components of ONAP are intended to work, or interoperate.

It took me a while, but I’ve realised what’s missing here…

OpenStack has an open design process. If you want to add a new feature to Nova, for example, the first step is to write down what the feature is intended to do, how it integrates with the rest of Nova, and how people might use it. The target audience for that document is not just the Nova development team but also the people who operate OpenStack deployments.

ONAP has no equivalent that I can find. So, for example, they say that in Casablanca they are going to implement an “AAI Enricher” to ease lookup of data from external systems in their inventory database, but I can’t find anywhere an explanation of how the integration between arbitrary external systems and ONAP AAI will work.

I think ONAP would really benefit from a good hard look at their design processes and how approachable they are for people outside their development teams. The current use case proposal process (videos, conference talks, and powerpoint presentations) just isn’t great for people who are trying to figure out how to deploy their software.


The post What’s missing from the ONAP community — an open design process appeared first on Made by Mikal.