Planet Russell

,

Worse Than FailureCodeSOD: Warp Me To Halifax

Greenwich must think they’re so smart, being on the prime meridian. Starting in the 1840s, the observatory was the international standard for time (and thus vital for navigation). And even when the world switched to UTC, GMT is only different from that by 0.9s. If you want to convert times between time zones, you do it by comparing against UTC, and you know what?

I’m sick of it. Boy, I wish somebody would take them down a notch. Why is a tiny little strip of London so darn important?

Evan’s co-worker obviously agrees with the obvious problem of Greenwich’s unearned superiority, and picks a different town to make the center of the world: Halifax.

function time_zone_time($datetime, $time_zone, $savings, $return_format="Y-m-d g:i a"){
        date_default_timezone_set('America/Halifax');
        $time = strtotime(date('Y-m-d g:i a', strtotime($datetime)));
        $halifax_gmt = -4;
        $altered_tdf_gmt = $time_zone;
        if ($savings && date('I', $time) == 1) {
                $altered_tdf_gmt++;
        } // end if
        if(date('I') == 1){
                $halifax_gmt++;
        }
        $altered_tdf_gmt -= $halifax_gmt;
        $new_time = mktime(date("H", $time), date("i", $time), date("s", $time),date("m", $time)  ,date("d", $time), date("Y", $time)) + ($altered_tdf_gmt*3600);
        $new_datetime = date($return_format, $new_time);
        return $new_datetime;
}
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianNorbert Preining: TeX Live VCS History and Statistics – Perforce, Subversion, Git

TeX Live is a project of long history, starting somewhen back in the 90ies with CDs distributed within user groups till the most recent net-based distribution and updates. Discussion about using a VCS started very early, in 1999. This blog recalls a bit of history of the VCS for TeX Live, and reports on the current status of the Subversion and Git (svn mirror) repositories.

Perforce

Sebastian Rahtz, the then editor of TeX Live, asked in Nov 1999 about using a VCS, in particular Perforce. The proposal drew some opposition in favor of open source VCS like CVS, but despite this a Perforce server was set up at Dante in Feb 2000. From then on Perforce was used, and in May 2001 Sebastian announced the 1000 change to the perforce repository.

Switch to Subversion

In 2004 the source part was checked into CVS at Berlios, and remained there for long time, but was not actually used for development. In Feb 2004 there was a short discussion about moving to Subversion and some hosting (Sarovar or Sourceforge), but it was postponed after the release of TL2004. After the release Karl Berry started to move from Perforce to CVS on Sarovar in Jan 2005, with somehow bad experiences, and the decision to set up a subversion server on tug.org.

First appearance of the subversion checking of TeX Live is around Jan 2006 with timing tests and slow progress getting everything into the subversion repository. The final announcement of moving to subversion was in Mar 2006.

Since then and till now Subversion is our main VCS and most of our distribution scripts for network distribution etc require subversion, in particular strictly increasing revision numbers.

Git mirror

Despite having used Subversion for many years and in most of the projects I had, the appearance of Git and its ease of use made me switch practically all of my projects to it. With Perforce I hated that one needed to have server connection to commit anything (no go on trains etc), with Subversion I hated that branching was a PITA and costly. With Git I finally could branch easily, develop new features, and merge them later.

In time I felt the need to also use Git for TeX Live development, and made a git-svn checkout which I used privately for TeX Live Manager development. Some trials to push the repo (about 30Gb) to either GitHub or any other hosting service were rather unsuccessful, so in March 2017 I pushed the repo to my own server and announced it (the URLs there are wrong, though!).

Recently I was asked to include also all the branches and tags from the Subversion history, which was a bit a pain to set up, but in the end the git repository now has branches for the branches in Subversion, and the releases are properly tagged back to TeX Live 2007. The history is the same as with subversion, going back to 2005. Unfortunately it seems that the Perforce history is lost.

So here they are: web interface, anonymous git checkout

Statistics

Some numbers to close of this blog: We are currently at around 47000 commits starting from 2005 to day. Most busy committer is Karl Berry who does the biggest bunch of work updating packages from CTAN, which accounts to most of the changes in total.

Interesting detail is that the number of commits per year is actually decreasing with the top year being 2008 when the new TeX Live infrastructure and TeX Live manager was introduced:

More stats generated from the git repository on 20180110 can be found at https://www.texlive.info/tlstats/.

Closing

Although I am very happy with the current setup of git-svn and the git repository, git will not replace subversion in the foreseeable future, due to the reliance of most our distribution scripts on the strictly increasing revision numbers of subversion. I have been done some work to support both git and subversion, but that is highly uncompleted work and as long as nobody shows interest I guess it will not happen.

It shows that depending on the usage and integration, a distributed VCS, like git, is not necessarily always the optimal solution. Bridging the two systems and working nicely together helps, but I still need to keep my subversion checkout available for emergency cases.

Planet DebianBen Hutchings: Meltdown and Spectre in Debian

I'll assume everyone's already heard repeatedly about the Meltdown and Spectre security issues that affect many CPUs. If not, see meltdownattack.com. These primarily affect systems that run untrusted code - such as multi-tenant virtual hosting systems. Spectre is also a problem for web browsers with Javascript enabled.

Meltdown

Over the last week the Debian kernel team has worked to mitigate Meltdown in all suites. This mitigation is currently limited to kernels running in 64-bit mode (amd64 architecture), but the issue affects 32-bit mode as well.

You can see where this mitigation is applied on the security tracker. As of today, wheezy, jessie, jessie-backports, stretch and unstable/sid are fixed while stretch-backports, testing/buster and experimental are not.

Spectre

Spectre needs to be mitigated in the kernel, browsers, and potentially other software. Currently the kernel changes to mitigate it are still under discussion upstream. Mozilla has started mitigating Spectre in Firefox and some of these changes are now in Debian unstable (version 57.0.4-1). Chromium has also started mitigating Spectre but no such changes have landed in Debian yet.

Planet DebianBen Hutchings: Debian LTS work, December 2017

I was assigned 14 hours of work by Freexian's Debian LTS initiative, but only worked 6 hours so I carried over 8 hours to January.

I prepared and uploaded an update to the Linux kernel to fix various security issues. I issued DLA-1200-1 for this update. I also prepared another update on the Linux 3.2 longterm stable branch, though most of that work was done while on holiday so I didn't count the hours. I spent some time following the closed mailing list used to coordinate backports of KPTI/KAISER.

,

CryptogramDaniel Miessler on My Writings about IoT Security

Daniel Miessler criticizes my writings about IoT security:

I know it's super cool to scream about how IoT is insecure, how it's dumb to hook up everyday objects like houses and cars and locks to the internet, how bad things can get, and I know it's fun to be invited to talk about how everything is doom and gloom.

I absolutely respect Bruce Schneier a lot for what he's contributed to InfoSec, which makes me that much more disappointed with this kind of position from him.

InfoSec is full of those people, and it's beneath people like Bruce to add their voices to theirs. Everyone paying attention already knows it's going to be a soup sandwich -- a carnival of horrors -- a tragedy of mistakes and abuses of trust.

It's obvious. Not interesting. Not novel. Obvious. But obvious or not, all these things are still going to happen.

I actually agree with everything in his essay. "We should obviously try to minimize the risks, but we don't do that by trying to shout down the entire enterprise." Yes, definitely.

I don't think the IoT must be stopped. I do think that the risks are considerable, and will increase as these systems become more pervasive and susceptible to class breaks. And I'm trying to write a book that will help navigate this. I don't think I'm the prophet of doom, and don't want to come across that way. I'll give the manuscript another read with that in mind.

Cory DoctorowWith repetition, most of us will become inured to all the dirty tricks of Facebook attention-manipulation

In my latest Locus column, “Persuasion, Adaptation, and the Arms Race for Your Attention,” I suggest that we might be too worried about the seemingly unstoppable power of opinion-manipulators and their new social media superweapons.


Not because these techniques don’t work (though when someone who wants to sell you persuasion tools tells you that they’re amazing and unstoppable, some skepticism is warranted), but because a large slice of any population will eventually adapt to any stimulus, which is why most of us aren’t addicted to slot machines, Farmville and Pokemon Go.


When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures. The predator carves the prey, the prey carves the preda­tor. To get a sense of just how far the state of the art has advanced since Farmville, fire up Universal Paperclips, the free browser game from game designer Frank Lantz, which challenges you to balance resource acquisi­tion, timing, and resource allocation to create paperclips, progressing by purchasing upgraded paperclip-production and paperclip-marketing tools, until, eventually, you produce a sentient AI that turns the entire universe into paperclips, exterminating all life.

Universal Paperclips makes Farmville seem about as addictive as Candy­land. Literally from the first click, it is weaving an attentional net around your limbic system, carefully reeling in and releasing your dopamine with the skill of a master fisherman. Universal Paperclips doesn’t just suck you in, it harpoons you.

Persuasion, Adaptation, and the Arms Race for Your Attention [Cory Doctorow/Locus]

Krebs on SecurityWebsite Glitch Let Me Overstock My Coinbase

Coinbase and Overstock.com just fixed a serious glitch that allowed Overstock customers to buy any item at a tiny fraction of the listed price. Potentially more punishing, the flaw let anyone paying with bitcoin reap many times the authorized bitcoin refund amount on any canceled Overstock orders.

In January 2014, Overstock.com partnered with Coinbase to let customers pay for merchandise using bitcoin, making it among the first of the largest e-commerce vendors to accept the virtual currency.

On December 19, 2017, as the price of bitcoin soared to more than $17,000 per coin, Coinbase added support for Bitcoin Cash — an offshoot (or “fork”) from bitcoin designed to address the cryptocurrency’s scalability challenges.

As a result of the change, Coinbase customers with balances of bitcoin at the time of the fork were given an equal amount of bitcoin cash stored by Coinbase. However, there is a significant price difference between the two currencies: A single bitcoin is worth almost $15,000 right now, whereas a unit of bitcoin cash is valued at around $2,400.

On Friday, Jan. 5, KrebsOnSecurity was contacted by JB Snyder, owner of North Carolina-based Bancsec, a company that gets paid to break into banks and test their security. An early adopter of bitcoin, Snyder said he was using some of his virtual currency to purchase an item at Overstock when he noticed something alarming.

During the checkout process for those paying by bitcoin, Overstock.com provides the customer a bitcoin wallet address that can be used to pay the invoice and complete the transaction. But Snyder discovered that Overstock’s site just as happily accepted bitcoin cash as payment, even though bitcoin cash is currently worth only about 15 percent of the value of bitcoin.

To confirm and replicate Snyder’s experience firsthand, KrebsOnSecurity purchased a set of three outdoor solar lamps from Overstock for a grand total of $78.27.

The solar lights I purchased from Overstock.com to test Snyder’s finding. They cost $78.27 in bitcoin, but because I was able to pay for them in bitcoin cash I only paid $12.02.

After indicating I wished to pay for the lamps in bitcoin, the site produced a payment invoice instructing me to send exactly 0.00475574 bitcoins to a specific address.

The payment invoice I received from Overstock.com.

Logging into Coinbase, I took the bitcoin address and pasted that into the “pay to:” field, and then told Coinbase to send 0.00475574 in bitcoin cash instead of bitcoin. The site responded that the payment was complete. Within a few seconds I received an email from Overstock congratulating me on my purchase and stating that the items would be shipped shortly.

I had just made a $78 purchase by sending approximately USD $12 worth of bitcoin cash. Crypto-currency alchemy at last!

But that wasn’t the worst part. I didn’t really want the solar lights, but also I had no interest in ripping off Overstock. So I cancelled the order. To my surprise, the system refunded my purchase in bitcoin, not bitcoin cash!

Consider the implications here: A dishonest customer could have used this bug to make ridiculous sums of bitcoin in a very short period of time. Let’s say I purchased one of the more expensive items for sale on Overstock, such as this $100,000, 3-carat platinum diamond ring. I then pay for it in Bitcoin cash, using an amount equivalent to approximately 1 bitcoin ($~15,000).

Then I simply cancel my order, and Overstock/Coinbase sends me almost $100,000 in bitcoin, netting me a tidy $85,000 profit. Rinse, wash, repeat.

Reached for comment, Overstock.com said the company changed no code in its site and that a fix implemented by Coinbase resolved the issue.

“We were made aware of an issue affecting cryptocurrency transactions and refunds by an independent researcher. After working with the researcher to confirm the finding, that method of payment was disabled while we worked with our cryptocurrency integration partner, Coinbase, to ensure they resolved the issue. We have since confirmed that the issue described in the finding has been resolved, and the cryptocurrency payment option has been re-enabled.”

Coinbase said “the issue was caused by the merchant partner improperly using the return values in our merchant integration API. No other Coinbase customer had this problem.”Coinbase told me the bug only existed for approximately three weeks.”

“After being made aware of an issue in our joint refund processing code on SaturdayCoinbase and Overstock worked together to deploy a fix within hours,” The Coinbase statement continued. “While a patch was being developed and tested, orders were proactively disabled to protect customers. To our knowledge, a very small number of transactions were impacted by this issue. Coinbase actively works with merchant partners to identify and solve issues like this in an ongoing, collaborative manner and since being made aware of this have ensured that no other partners are affected.”

Bancsec’s Snyder and I both checked for the presence of this glitch at multiple other merchants that work directly with Coinbase in their checkout process, but we found no other examples of this flaw.

The snafu comes as many businesses that have long accepted bitcoin are now distancing themselves from the currency thanks to the recent volatility in bitcoin prices and associated fees.

Earlier this week, it emerged that Microsoft had ceased accepting payments in Bitcoin, citing volatility concerns. In December, online game giant Steam said it was dropping support for bitcoin payments for the same reason.

And, as KrebsOnSecurity noted last month, even cybercriminals who run online stores that sell stolen identities and credit cards are urging their customers to transact in something other than bitcoin.

Interestingly, bitcoin is thought to have been behind a huge jump in Overstock’s stock price in 2017. In December, Overstock CEO Patrick Byrne reportedly stoked the cryptocurrency fires when he said that he might want to sell Overstock’s e-tailing operations and pour the extra cash into accelerating his blockchain-based business ideas instead.

In case anyone is wondering what I did with the “profit” I made from this scheme, I offered to send it back to Overstock, but they told me to keep it. Instead, I donated it to archive.org, a site that has come in handy for many stories published here.

Update, 3:15 p.m. ET: A previous version of this story stated that neither Coinbase nor Overstock would say which of the two was responsible for this issue. The modified story above resolves that ambiguity.

Planet DebianLars Wirzenius: On using Github and a PR based workflow

In mid-2017, I decided to experiment with using pull-requests (PRs) on Github. I've read that they make development using git much nicer. The end result of my experiment is that I'm not going to adopt a PR based workflow.

The project I chose for my experiment is vmdb2, a tool for generating disk images with Debian. I put it up on Github, and invited people to send pull requests or patches, as they wished. I got a bunch of PRs, mostly from two people. For a little while, there was a flurry of activity. It has has now calmed down, I think primarily because the software has reached a state where the two contributors find it useful and don't need it to be fixed or have new features added.

This was my first experience with PRs. I decided to give it until the end of 2017 until I made any conclusions. I've found good things about PRs and a workflow based on them:

  • they reduce some of the friction of contributing, making it easier for people to contribute; from a contributor point of view PRs certainly seem like a better way than sending patches over email or sending a message asking to pull from a remote branch
  • merging a PR in the web UI is very easy

I also found some bad things:

  • I really don't like the Github UI or UX, in general or for PRs in particular
  • especially the emails Github sends about PRs seemed useless beyond a basic "something happened" notification, which prompt me to check the web UI
  • PRs are a centralised feature, which is something I prefer to avoid; further, they're tied to Github, which is something I object to on principle, since it's not free software
    • note that Gitlab provides support for PRs as well, but I've not tried it; it's an "open core" system, which is not fully free software in my opinion, and so I'm wary of Gitlab; it's also a centralised solution
    • a "distributed PR" system would be nice
  • merging a PR is perhaps too easy, and I worry that it leads me to merging without sufficient review (that is of course a personal flaw)

In summary, PRs seem to me to prioritise making life easier for contributors, especially occasional contributors or "drive-by" contributors. I think I prefer to care more about frequent contributors, and myself as the person who merges contributions. For now, I'm not going to adopt a PR based workflow.

(I expect people to mock me for this.)

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.


Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.


Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.


Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.


Olga Iurkova
Olga Iurkova (Ukraine)
Journalist + editor
Journalist and co-founder of StopFake.org, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.


Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.


Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.


Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.


Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.


In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.


Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.


Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.


Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.


Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.


In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Raymond Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.


Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.


DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.


Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.


Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.


Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.


TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Ichthyologist
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.


Aziza Chaouni
Aziza Chaouni (Morocco)
Architect
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.


Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.


A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.


Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.


Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.


Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.


Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.


An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

v
Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.


David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.


CryptogramNSA Morale

The Washington Post is reporting that poor morale at the NSA is causing a significant talent shortage. A November New York Times article said much the same thing.

The articles point to many factors: the recent reorganization, low pay, and the various leaks. I have been saying for a while that the Shadow Brokers leaks have been much more damaging to the NSA -- both to morale and operating capabilities -- than Edward Snowden. I think it'll take most of a decade for them to recover.

Worse Than FailureCodeSOD: Whiling Away the Time

There are two ways of accumulating experience in our profession. One is to spend many years accumulating and mastering new skills to broaden your skill set and ability to solve more and more complex problems. The other is to repeat the same year of experience over and over until you have one year of experience n times.

Anon took the former path and slowly built up his skills, adding to his repertoire with each new experience and assignment. At his third job, he encountered The Man, who took the latter path.

If you wanted to execute a block of code once, you have several options. You could just put the code in-line. You could put it in a function and call said function. You could even put it in a do { ... } while (false); construct. The Man would do as below because it makes it easier and less error prone to comment out a block of code:

  Boolean flag = true;
  while (flag) {
    flag = false;
    // code>
    break;
  }

The Man not only built his own logging framework (because you can't trust the ones out there), but he demanded that every. single. function. begin and end with:

  Log.methodEntry("methodName");
  ...
  Log.methodExit("methodName");

...because in a multi-threaded environment, that won't flood the logs with all sorts of confusing and mostly useless log statements. Also, he would routinely use this construct in places where the logging system had not yet been initialized, so any logged errors went the way of the bit-bucket.

Every single method was encapsulated in its own try-catch-finally block. The catch block would merely log the error and continue as though the method was successful, returning null or zero on error conditions. The intent was to keep the application from ever crashing. There was no concept of rolling the error up to a place where it could be properly handled.

His concept of encapsulation was to wrap not just each object, but virtually every line of code, including declarations, in a region tag.

To give you a taste of what Anon had to deal with, the following is a procedure of The Man's:


  #region Protected methods
    protected override Boolean ParseMessage(String strRemainingMessage) {
       Log.LogEntry(); 
  
  #    region Local variables
         Boolean bParseSuccess = false;
         String[] strFields = null;
  #    endregion //Local variables
  
  #    region try-cache-finally  [op: SIC]
  #      region try
           try {
  #            region Flag to only loop once
                 Boolean bLoop = true;
  #            endregion //Flag to only loop once
  
  #            region Loop to parse the message
                while (bLoop) {
  #                region Make sure we only loop once
                     bLoop = false;
  #                endregion //Make sure we only loop once
  
  #                region parse the message
                     bParseSuccess = base.ParseMessage(strRemainingMessage);
  #                endregion //parse the message
  
  #                region break the loop
                     break;
  #                endregion //break the loop
                }
  #            endregion //Loop to parse the message
           }
  #      endregion //try
    
  #      region cache // [op: SIC]
            catch (Exception ex) {
              Log.Error(ex.Message);
            }
  #      endregion //cache [op: SIC]
  	  
  #      region finally
           finally {
             if (null != strFields) {
                strFields = null; // op: why set local var to null?
             }
           }
  #      endregion //finally
  
  #      endregion //try-cache-finally [op: SIC]
  
       Log.LogExit();
  
       return bParseSuccess;
     }
  #endregion //Protected methods

The corrected version:

  // Since the ParseMessage method has it's own try-cache
  // on "Exception", it will never throw any exceptions 
  // and logging entry and exit of a method doesn't seem 
  // to bring us any value since it's always disabled. 
  // I'm not even sure if we have a way to enable it 
  // during runtime without recompiling and installing 
  // the application...
  protected override Boolean ParseMessage(String remainingMessage){
    return base.ParseMessage(remainingMessage); 
  }

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianJonathan McDowell: How Virgin Media lost me as a supporter

For a long time I’ve been a supporter of Virgin Media (from a broadband perspective, though their triple play TV/Phone/Broadband offering has seemed decent too). I know they have a bad reputation amongst some people, but I’ve always found their engineers to be capable, their service in general reliable, and they can manage much faster speeds than any UK ADSL/VDSL service at cheaper prices. I’ve used their services everywhere I’ve lived that they were available, starting back in 2001 when I lived in Norwich. The customer support experience with my most recent move has been so bad that I am no longer of the opinion it is a good idea to use their service.

Part of me wonders if the customer support has got worse recently, or if I’ve just been lucky. We had a problem about 6 months ago which was clearly a loss of signal on the line (the modem failed to see anything and I could clearly pinpoint when this had happened as I have collectd monitoring things). Support were insistent they could do a reset and fix things, then said my problem was the modem and I needed a new one (I was on an original v1 hub and the v3 was the current model). I was extremely dubious but they insisted. It didn’t help, and we ended up with an engineer visit - who immediately was able to say they’d been disconnecting noisy lines that should have been unused at the time my signal went down, and then proceeded to confirm my line had been unhooked at the cabinet and then when it was obvious the line was noisy and would have caused problems if hooked back up patched me into the adjacent connection next door. Great service from the engineer, but support should have been aware of work in the area and been able to figure out that might have been a problem rather than me having a 4-day outage and numerous phone calls when the “resets” didn’t fix things.

Anyway. I moved house recently, and got keys before moving out of the old place, so decided to be organised and get broadband setup before moving in - there was no existing BT or Virgin line in the new place so I knew it might take a bit longer than usual to get setup. Also it would have been useful to have a connection while getting things sorted out, so I could work while waiting in for workmen. As stated at the start I’ve been pro Virgin in the past, I had their service at the old place and there was a CableTel (the Belfast cable company NTL acquired) access hatch at the property border so it was clear it had had service in the past. So on October 31st I placed an order on their website and was able to select an installation date of November 11th (earlier dates were available but this was a Saturday and more convenient).

This all seemed fine; Virgin contacted me to let me know there was some external work that needed to be done but told me it would happen in time. This ended up scheduled for November 9th, when I happened to be present. The engineers turned up, had a look around and then told me there was an issue with access to their equipment - they needed to do a new cable pull to the house and although the ducting was all there the access hatch for the other end was blocked by some construction work happening across the road. I’d had a call about this saying they’d be granted access from November 16th, so the November 11th install date was pushed out to November 25th. Unfortunate, but understandable. The engineers also told me that what would happen is the external team would get a cable hooked up to a box on the exterior of the house ready for the internal install, and that I didn’t need to be around for that.

November 25th arrived. There was no external box, so I was dubious things were actually going to go ahead, but I figured there was a chance the external + internal teams would turn up together and get it sorted. No such luck. The guy who was supposed to do the internal setup turned up, noticed the lack of an external box and informed me he couldn’t do anything without that. As I’d expected. I waited a few days to hear from Virgin and heard nothing, so I rang them and was told the installation had moved to December 6th and the external bit would be done before that - I can’t remember the exact date quoted but I rang a couple of times before the 6th and was told it would happen that day “for sure” each time.

December 5th arrives and I get an email telling me the installation has moved to December 21st. This is after the planned move date and dangerously close to Christmas - I’m aware that in the event of any more delays I’m unlikely to get service until the New Year. Lo and behold on December 7th I’m informed my install is on hold and someone will contact me within 5 working days to give me an update.

Suffice to say I do not get called. I ring towards the end of the following week and am told they are continuing to have trouble carrying out work on the access hatch. So I email the housing company doing the work across the road, asking if Virgin have actually been in touch and when the building contractors plan to give them the access they require. I get a polite response saying Virgin have been on-site but did not ask for anything to be moved or make it clear they were trying to connect a customer. And providing an email address for the appropriate person in the construction company to arrange access.

I contact Virgin to ask about this on December 20th. There’s no update but this time I manage to get someone who actually seems to want to help, rather than just telling me it’s being done today or soon. I get email confirmation that the matter is being escalated to the Area Field Manager (I’d been told this by phone on December 16th as well but obviously nothing had happened), and provide the contact details for the construction company.

And then I wait. I’m aware things wind down over the Christmas period, so I’m not expecting an install before the New Year, but I did think I might at least get a call or email with an update. Nothing. My wife rang to finally cancel our old connection last week (it’s been handy to still have my backup host online and be able to go and do updates in the old place) and they were aware of the fact we were trying to get a new connection and that there had been issues, but had no update and then proceeded to charge a disconnection fee, even though Virgin state no disconnection if you move and continue with Virgin Media.

So last week I rang and cancelled the order. And got the usual story of difficulty with access and asked to give them 48 hours to get back to me. I said no, that the customer service so far had been appalling and to cancel anyway. Which I’m informed has now been done.

Let’s be clear on what I have issue with here. While the long delay is annoying I don’t hold Virgin entirely responsible - there is construction work going on and things slow down over Christmas (though the order was placed long enough beforehand that this really shouldn’t have impacted things). The problem is the customer service and complete lack of any indication Virgin are managing this process well internally - the fact the interior install team turned up when the exterior work hadn’t been completed is shocking! If Virgin had told me at the start (or once they’d done the first actual physical visit to the site and seen the situation) that there was going to be a delay and then been able to provide a firm date, I’d have been much more accepting. Instead, the numerous reschedules, an inability to call back and provide updates when promised and the empty assurances that exterior work will be carried out on certain dates all leave me lacking any faith in what Virgin tell me. Equally, the fact they have charged a disconnection fee when their terms state they wouldn’t is ridiculous (a complaint has been raised but at the time of writing the complaints team has, surprise, surprise, not been in contact). If they’re so poor when I’m trying to sign up as a new customer, why should I have any faith in their ability to provide decent support when I actually have their service?

It’s also useful to contrast my Virgin experience with 2 others. Firstly, EE who I used for 4G MiFi access. Worked out of the box, and when I rang to cancel (because I no longer need it) were quick and efficient about processing the cancellation and understood that I’d been pleased with the service but no longer needed it, so didn’t do a hard retention sell.

Secondly, I’ve ended up with FTTC over a BT Openreach line from a local Gamma reseller, MCL Services. I placed this order on December 8th, after Virgin had put my install on hold. At the point of order I had an install date of December 19th, but within 3 hours it was delayed until January 3rd. At this point I thought I was going to have similar issues, so decided to leave both orders open to see which managed to complete first. I double-checked with MCL on January 2nd that there’d been no updates, and they confirmed it was all still on and very unlikely to change. And, sure enough, on the morning of January 3rd a BT engineer turned up after having called to give a rough ETA. Did a look around, saw the job was bigger than expected and then, instead of fobbing me off, got the job done. Which involved needing a colleague to turn up to help, running a new cable from a pole around the corner to an adjacent property and then along the wall, and installing the master socket exactly where suited me best. In miserable weather.

What did these have in common that Virgin does not? First, communication. EE were able to sort out my cancellation easily, at a time that suited me (after 7pm, when I’d got home from work and put dinner on). MCL provided all the installation details for my FTTC after I’d ordered, and let me know about the change in date as soon as BT had informed them (it helps I can email them and actually get someone who can help, rather than having to phone and wait on hold for someone who can’t). BT turned up and discovered problems and worked out how to solve them, rather than abandoning the work - while I’ve had nothing but good experiences with Virgin’s engineers up to this point there’s something wrong if they can’t sort out access to their network in 2 months. What if I’d been an existing customer with broken service?

This is a longer post than normal, and no one probably cares, but I like to think that someone in Virgin might read it and understand where my frustrations throughout this process have come from. And perhaps improve things, though I suspect that’s expecting too much and the loss of a single customer doesn’t really mean a lot to them.

Planet DebianDon Armstrong: Debbugs Versioning: Merging

One of the key features of Debbugs, the bug tracking system Debian uses, is its ability to figure out which bugs apply to which versions of a package by tracking package uploads. This system generally works well, but when a package maintainer's workflow doesn't match the assumptions of Debbugs, unexpected things can happen. In this post, I'm going to:

  1. introduce how Debbugs tracks versions
  2. provide an example of a merge-based workflow which Debbugs doesn't handle well
  3. provide some suggestions on what to do in this case

Debbugs Versioning

Debbugs tracks versions using a set of one or more rooted trees which it builds from the ordering of debian/changelog entries. In the simplist case, every upload of a Debian package has changelogs in the same order, and each upload adds just one version. For example, in the case of dgit, to start with the package has this (abridged) version tree:

the next upload, 3.13, has a changelog with this version ordering: 3.13 3.12 3.11 3.10, which causes the 3.13 version to be added as a descendant of 3.12, and the version tree now looks like this:

dgit is being developed while also being used, so new versions with potentially disruptive changes are uploaded to experimental while production versions are uploaded to unstable. For example, the 4.0 experimental upload was based on the 3.10 version, with the changelog ordering 4.0 3.10. The tree now has two branches, but everything seems as you would expect:

Merge based workflows

Bugfixes in the maintenance version of dgit also are made to the experimental package by merging changes from the production version using git. In this case, some changes which were present in the 3.12 and 3.11 versions are merged using git, corresponds to a git merge flow like this:

If an upload is prepared with changelog ordering 4.1 4.0 3.12 3.11 3.10, Debbugs combines this new changelog ordering with the previously known tree, to produce this version tree:

This looks a bit odd; what happened? Debbugs walks through the new changelog, connecting each of the new versions to the previous version if and only if that version is not already an ancestor of the new version. Because the changelog says that 3.12 is the ancestor of 4.0, that's where the 4.1 4.0 version tree is connected.

Now, when 4.2 is uploaded, it has the changelog ordering (based on time) 4.2 3.13 4.1 4.0 3.12 3.11 3.10, which corresponds to this git merge flow:

Debbugs adds in 3.13 as an ancestor of 4.2, and because 4.1 was not an ancestor of 3.13 in the previous tree, 4.1 is added as an ancestor of 3.13. This results in the following graph:

Which doesn't seem particularly helpful, because

is probably the tree that more closely resembles reality.

Suggestions on what to do

Why does this even matter? Bugs which are found in 3.11, and fixed in 3.12 now show up as being found in 4.0 after the 4.1 release, though they weren't found in 4.0 before that release. It also means that 3.13 now shows up as having all of the bugs fixed in 4.2, which might not be what is meant.

To avoid this, my suggestion is to order the entries in changelogs in the same order that the version graph should be traversed from the leaf version you are releasing to the root. So if the previous version tree is what is wanted, 3.13 should have a changelog with ordering 3.13 3.12 3.11 3.10, and 4.2 should have a changelog with ordering 4.2 4.1 4.0 3.10.

What about making the BTS support DAGs which are not trees? I think something like this would be useful, but I don't personally have a good idea on how this could be specified using the changelog or how bug fixed/found/absent should be propagated in the DAG. If you have better ideas, email me!

,

Planet DebianMichael Stapelberg: Debian buster on the Raspberry Pi 3 (update)

I previously wrote about my Debian buster preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • WiFi works out of the box. Use e.g. ip link set dev wlan0 up, and iwlist wlan0 scan.
  • Kernel boot messages are now displayed on an attached monitor (if any), not just on the serial console.
  • Root file system resizing will now not touch the partition table if the user modified it.
  • The image is now compressed using xz, reducing its size to 170M.

As before, the image is built with vmdb2, the successor to vmdebootstrap. The input files are available at https://github.com/Debian/raspi3-image-spec.

Note that Bluetooth is still untested (see wiki:RaspberryPi3 for details).

Given that Bluetooth is the only known issue, I’d like to work towards getting this image built and provided on official Debian infrastructure. If you know how to make this happen, please send me an email. Thanks!

As a preview version (i.e. unofficial, unsupported, etc.) until that’s done, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/. To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz
$ xzcat 2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdb bs=64k oflag=dsync status=progress

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”

Planet DebianJonathan Dowland: Announcing BadISO

For a few years now I have been working on-and-off on a personal project to import data from a large collection of home-made CD-Rs and DVD-Rs. I've started writing up my notes, experiences and advice for performing a project like this; but they aren't yet in a particularly legible state.

As part of this work I wrote some software called "BadISO" which takes a possibly-corrupted or incomplete optical disc image (specifically ISO9660) and combined with a GNU ddrescue map (or log) file, tells you which files within the image are intact, and which are not. The idea is you have tried to import a disc using ddrescue and some areas of the disc have not read successfully. The ddrescue map file tells you which areas in byte terms, but not what files that corresponds to. BadISO plugs that gap.

Here's some example output:

$ badiso my_files.iso
…
✔ ./joes/allstars.zip
✗ ./joes/ban.gif
✔ ./joes/eur-mgse.zip
✔ ./joes/gold.zip
✗ ./joes/graphhack.txt
…

BadISO was (and really, is) a hacky proof of concept written in Python. I have ambitions to re-write it properly (in either Idris or Haskell) but I'm not going to get around to it in the near future, and in the meantime at least one other person has found this useful. So I'm publishing it in its current state.

BadISO currently requires GNU xorriso.

You can grab it from https://github.com/jmtd/badiso.

Cory DoctorowInterview with the National Science Teachers Association’s Lab Out Loud podcast

Back in 2010, I appeared as a guest on the National Science Teachers Association’s Lab Out Loud podcast, and this year, they had me back as part of their celebration of their first decade; they’ve just published the interview, (MP3) which was primarily about my novel Walkaway.

Planet DebianJelmer Vernooij: Breezy: Forking Bazaar

A couple of months ago, Martin and I announced a friendly fork of Bazaar, named Breezy.

It's been 5 years since I wrote a Bazaar retrospective and around 6 since I seriously contributed to the Bazaar codebase.

Goals

We don't have any grand ambitions for Breezy; the main goal is to keep Bazaar usable going forward. Your open source projects should still be using Git.

The main changes we have made so far come down to fixing a number of bugs and to bundling useful plugins. Bundling plugins makes setting up an environment simpler and to eliminate the API compatibility issues that plagued external plugins in the Bazaar world.

Perhaps the biggest effort in Breezy is porting the codebase to Python 3, allowing it to be used once Python 2 goes EOL in 2020.

A fork

Breezy is a fork of Bazaar and not just a new release series.

Bazaar upstream has been dormant for the last couple of years anyway - we don't lose anything by forking.

We're forking because gives us the independence to make some of the changes we deemed necessary and that are otherwise hard to make for an established project, For example, we're now bundling plugins, taking an axe to a large number of APIs and dropping support for older platforms.

A fork also means independence from Canonical; there is no CLA for Breezy (a hindrance for Bazaar) and we can set up our own infrastructure without having to chase down Canonical staff for web site updates or the installation of new packages on the CI system.

More information

Martin gave a talk about Breezy at PyCon UK this year.

Breezy bugs can be filed on Launchpad. For the moment, we are using the Bazaar mailing list and the #bzr IRC channel for any discussions and status updates around Breezy.

CryptogramTourist Scams

A comprehensive list. Most are old and obvious, but there are some clever variants.

Worse Than FailureCodeSOD: JavaScript Centipede

Starting with the film Saw, in 2004, the “torture porn” genre started to seep into the horror market. Very quickly, filmmakers in that genre learned that they could abandon plot, tension, and common sense, so long as they produced the most disgusting concepts they could think of. The game of one-downsmanship arguably reached its nadir with the conclusion of The Human Centipede trilogy. Yes, they made three of those movies.

This aside into film critique is because Greg found the case of a “JavaScript Centipede”: the refuse from one block of code becomes the input to the next block.

function dynamicallyLoad(win, signature) {
    for (var i = 0; i < this.addList.length; i++) {
        if (window[this.addList[i].object] != null)
            continue;
        var object = win[this.addList[i].object];
        if (this.addList[i].type == 'function' || typeof (object) == 'function') {
            var o = String(object);
            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                .replace(/,/g, "','");
            if (params != "")
                params += "','";
            window.eval(String(this.addList[i].object) +
                        "=new Function('" + String(params + body) + "')");
            var c = window[this.addList[i].object];
            if (this.addList[i].type == 'class') {
                for (var j in object.prototype) {
                    var o = String(object.prototype[j]);
                    var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                        .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                        .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                    var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                        .replace(/,/g, "','");
                    if (params != "")
                        params += "','";
                    window.eval(String(this.addList[i].object) + ".prototype." + j +
                        "=new Function('" + String(params + body) + "')");
                }
                if (object.statics) {
                    window[this.addList[i].object].statics = new Object();
                    for (var j in object.statics) {
                        var obj = object.statics[j];
                        if (typeof (obj) == 'function') {
                            var o = String(obj);
                            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                                .replace(/,/g, "','");
                            if (params != "")
                                params += "','";
                            window.eval(String(this.addList[i].object) + ".statics." +
                                j + "=new Function('" + String(params + body) + "')");
                        } else
                            window[this.addList[i].object].statics[j] = obj;
                    }
                }
            }
        } else if (this.addList[i].type == 'image') {
            window[this.addList[i].object] = new Image();
            window[this.addList[i].object].src = object.src;
        } else
            window[this.addList[i].object] = object;
    }
    this.addList.length = 0;
    this.isLoadedArray[signature] = new Date().getTime();
}

I’m not going to explain what this code does; I’m not certain I could. Like a Human Centipede film, you’re best off just being disgusted at the concept on display. If you're not sure why it's bad, just note the eval calls. Don’t think too much about the details.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianSean Whitton: dgit push-source

dgit 4.2, which is now in Debian unstable, has a new subcommand: dgit push-source. This is just like dgit push, except that

  • it forces a source-only upload; and
  • it also takes care of preparing the _source.changes, transparently, without the user needing to run dpkg-buildpackage -S or dgit build-source or whatever.

push-source is useful to ensure you don’t accidentally upload binaries, and that was its original motivation. But there is a deeper significance to this new command: to say

% dgit push-source unstable

is, in one command, basically to say

% git push ftp-master HEAD:unstable

That is: dgit push-source is like doing a single-step git push of your git HEAD straight into the archive! The future is now!

The contrast here is with ordinary dgit push, which is not analogous to a single git push command, because

  • it involves uploading .debs, which make a material change to the archive other than updating the source code of the package; and
  • it must be preceded by a call to dpkg-buildpackage, dgit sbuild or similar to prepare the .changes file.

While dgit push-source also involves uploading files to ftp-master in addition to the git push, because that happens transparently and does not require the user to run a build command, it can be thought of as an implementation detail.

Two remaining points of disanalogy:

  1. git push will push your HEAD no matter the state of the working tree, but dgit push-source has to clean your working tree. I’m thinking about ways to improve this.

  2. For non-native packages, you still need an orig.tar in ... Urgh. At least that’s easy to obtain thanks to git-deborig.

,

Planet DebianMehdi Dogguy: Salsa webhooks and integrated services

For many years now, Debian has been facing an issue with one of its most important services: alioth.debian.org (Debian's forge). It is used by most of the teams and hosts thousands of repositories (of all sorts) and mailing-lists. The service was stable (and still is), but not maintained. So it became increasingly important to find its replacement.

Recently, a team of volunteers organized a sprint to work on the replacement of Alioth. I was very skeptical about the status of this new project until... tada! An announcement was sent out about the beta release of this new service: salsa.debian.org (a GitLab CE instance). Of course, Salsa hosts only Git repositories and doesn't deal with other {D,}VCSes used on Alioth (like Darcs, Svn, CVS, Bazaar and Mercurial) but it is a huge step forward!

I must say that I absolutely love this new service which brings fresh air to Debian developers. We all owe a debt of gratitude to all those who made this possible. Thank you very much for working on this!

Alas, no automatic migration was done between the two services (for good reasons). The migration is left to the maintainers. It might be an easy task for some who maintain a few packages, but it is a depressing task for bigger teams.

To make this easy, Christoph Berg wrote a batch import script to import a Git repository in a single step. Unfortunately, GitLab is what it is... and it is not possible to set team-wide parameters to use in each repository. Salsa's documentation describes how to configure that for each repository (project, in GitLab's jargon) but this click-monkey-work is really not for many of us. Fortunately, GitLab has a nice API and all this is scriptable. So I wrote some scripts to:
  • Import a Git repo (mainly Christoph's script with an enhancement)
  • Set up IRC notifications
  • Configure email pushes on commits
  • Enable webhooks to automatically tag 'pending' or 'close' Debian BTS bugs depending on mentions in commits messages.
I published those scripts here: salsa-scripts. They are not meant to be beautiful, but only to make your life a little less miserable. I hope you find them useful. Personally, this first step was a prerequisite for the migration of my personal and team repositories over to Salsa. If more people want to contribute to those scripts, I can move the repository into the Debian group.
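For the curious, here is roughly what such a script boils down to. This is a hedged sketch against GitLab's v4 REST API rather than the actual salsa-scripts code; the token, project path and webhook URL are placeholders.

# Sketch: create a push webhook on a Salsa project via the GitLab v4 API.
# The token, project path and webhook URL below are placeholders.
import requests

GITLAB = "https://salsa.debian.org/api/v4"
TOKEN = "your-personal-access-token"
project = requests.utils.quote("myteam/mypackage", safe="")  # URL-encoded path

resp = requests.post(
    f"{GITLAB}/projects/{project}/hooks",
    headers={"PRIVATE-TOKEN": TOKEN},
    data={
        "url": "https://example.org/bts-webhook",  # hypothetical tagging endpoint
        "push_events": True,
        "enable_ssl_verification": True,
    },
)
resp.raise_for_status()
print("created hook", resp.json()["id"])

The IRC and email-on-push settings follow the same pattern; only the endpoint and payload change.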

Planet DebianPetter Reinholdtsen: Legal to share more than 11,000 movies listed on IMDB?

I've continued to track down lists of movies that are legal to distribute on the Internet, and identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page for Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.

The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed.

Anyway, this is the summary of the 28 collected data sources so far:

 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (   87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (    2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (   33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (    7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID

I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of increasing the risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
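The matching rule itself is simple enough to sketch. The search function below is a stand-in (how IMDB is actually queried is not shown here); the point is just the "accept only unambiguous title+year hits" logic.

# Sketch of the title/year matching rule described above.
def match_title_id(title, year, search_imdb):
    """Return an IMDB title ID only when exactly one hit matches the year."""
    hits = search_imdb(title)                 # assumed: list of (title_id, year)
    candidates = [tid for tid, y in hits if y == year]
    if len(candidates) == 1:
        return candidates[0]
    return None                               # ambiguous or no match: manual review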

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Linux AustraliaDavid Rowe: Engage the Silent Drive

I’ve been busy electrocuting my boat – here are our first impressions of the Torqueedo Cruise 2.0T on the water.

About 2 years ago I decided to try sailing, so I bought a second hand Hartley TS16; a popular small “trailer sailor” here in Australia. Since then I have been getting out once every week, having some very pleasant days with friends and family, and even at times by myself. Sailing really takes you away from everything else in the world. It keeps you busy as you are always pulling a rope or adjusting this and that, and is physically very active as you are clambering all over the boat. Mentally there is a lot to learn, and I started as a complete nautical noob.

Sailing is so quiet and peaceful, you get propelled by the wind using aerodynamics and it feels like like magic. However this is marred by the noise of outboard motors, which are typically used at the start and end of the day to get the boat to the point where it can sail. They are also useful to get you out of trouble in high seas/wind, or when the wind dies. I often use the motor to “un hit” Australia when I accidentally lodge myself on a sand bar (I have a lot of accidents like that).

The boat came with an ancient 2 stroke which belched smoke and noise. After about 12 months this motor suffered a terminal meltdown (impeller failure and overheating) so it was replaced with a modern 5HP Honda 4-stroke, which is much quieter and very fuel efficient.

My long term goal was to “electrocute” the boat and replace the infernal combustion outboard engine with an electric motor and battery pack. I recently bit the bullet and obtained a Torqeedo Cruise 2kW outboard from Eco Boats Australia.

My friend Matt and I tested the motor today and are really thrilled. Matt is an experienced Electrical Engineer and sailor so was an ideal companion for the first run of the Torqueedo.

Torqueedo Cruise 2.0 First Impressions

It’s silent – incredibly so. Just a slight whine conducted from the motor/gearbox pod beneath the water. The sound of water flowing around the boat is louder!

The acceleration is impressive, better than the 4-stroke. Make sure you sit down. That huge, low RPM prop delivers loads of torque. We settled on 1000W after experimenting with other power levels.

The throttle control is excellent, you can dial up any speed you want. This made parking (mooring) very easy compared to the 4-stroke which is more of a “single speed” motor (idles at 3 knots, 4-5 knots top speed) and is unwieldy for parking.

It’s fit for purpose. This is not a low power “trolling” motor, it is every bit as powerful as the modern Honda 5HP 4-stroke. We did an A/B test and obtained the same top speed (5 knots) in the same conditions (wind/tide/stretch of water). We used it with 15 knot winds and 1m seas and it was the real deal – pushing the boat exactly where we wanted to go with authority. This is not a compromise solution. The Torqueedo shows internal combustion whose house it is.

We had some fun sneaking up on kayaks at low power, getting to within a few metres before they heard us. Other boaties saw us gliding past with the sails down and couldn’t work out how we were moving!

A hidden feature is Azipod steering – it steers through more than 270 degrees. You can reverse without reverse gear, and we did “donuts” spinning on the keel!

Some minor issues: unlike the Honda, the Torqueedo doesn’t tilt completely out of the water when sailing, leaving some residual drag from the motor/propeller pod. It also has to be removed from the boat for trailering, due to insufficient road clearance.

Walk Through

Here are the two motors with the boat out of the water:

It’s quite a bit longer than the Honda, mainly due to the enormous prop. The centres of the two props are actually only 7cm apart in height above ground. I had some concerns about ground clearance, both when trailering and also in the water. I have enough problems hitting Australia and like the way my boat can float in just 30cm of water. I discussed this with my very helpful Torqueedo dealer, Chris. He said tests with short and long version suggested this wasn’t a problem and in fact the “long” version provided better directional control. More water on top of the prop is a good thing. They recommend 50mm minimum, I have about 100mm.

To get started I made up a 24V battery pack using a plastic tub and 8 x 3.2V 100AH Lithium cells, left over from my recent EV battery upgrade. The cells are in varying conditions; I doubt any of them have 100AH capacity after 8 years of being hammered in my EV. On the day we ran for nearly 2 hours before one of the weaker cells dipped beneath 2.5V. I’ll sort through my stock of second hand cells some time to optimise the pack.

The pack plus motor weighs 41kg, the 5HP Honda plus 5l petrol 32kg. At low power (600W, 3.5 knots), this 2.5kWHr pack will give us a range of 14 nm or 28km. Plenty – on a huge day’s sailing we cover 40km, of which just 5km would be on motor.
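For what it's worth, the range figure checks out with a quick back-of-envelope calculation (my arithmetic, using the nominal pack figures above):

# Rough range estimate from the nominal pack figures quoted above.
pack_kwh = 8 * 3.2 * 100 / 1000   # 8 cells x 3.2 V x 100 Ah ~= 2.56 kWh
power_kw = 0.6                    # 600 W cruise
speed_kn = 3.5
hours = pack_kwh / power_kw
print(f"{hours:.1f} h at {speed_kn} kn -> {hours * speed_kn:.0f} nm "
      f"({hours * speed_kn * 1.852:.0f} km)")
# about 4.3 h, 15 nm, 28 km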

All that power on board is handy too, for example the load of a fridge would be trivial compared to the motor, and a 100W HF radio no problem. So now I can quaff ice-cold sparkling shiraz or a nice beer, while having an actual conversation and not choking on exhaust fumes!

Here’s Matt taking us for a test drive, not much to the Torqueedo above the water:

For a bit of fun we ran both motors (maybe 10HP equivalent) and hit 7 knots, almost getting the Hartley up on the plane. Does this make it a Hybrid boat?

Conclusions

We are in love. This is the future of boating. For sale – one 5HP Honda 4-stroke.

Planet DebianRuss Allbery: Free software log (December 2017)

I finally have enough activity to report that I need to think about how to format these updates. It's a good problem to have.

In upstream work, I updated rra-c-util with an evaluation of the new warning flags for GCC 7, enabling as many warnings as possible. I also finished the work to build with Clang with warnings enabled, which required investigating a lot of conversions between variables of different sizes. Part of that investigation surfaced that the SA_LEN macro was no longer useful, so I removed that from INN and rra-c-util.

I'm still of two minds about whether adding the casts and code to correctly build with -Wsign-conversion is worth it. I started a patch to rra-c-util (currently sitting in a branch), but wasn't very happy about the resulting code quality. I think doing this properly requires some thoughtfulness about macros and a systematic approach.

Releases:

In Debian Policy, I wrote and merged a patch for one bug, merged patches for two other bugs, and merged a bunch of wording improvements to the copyright-format document. I also updated the license-count script to work with current Debian infrastructure and ran it again for various ongoing discussions of what licenses to include in common-licenses.

Debian package uploads:

  • lbcd (now orphaned)
  • libafs-pag-perl (refreshed and donated to pkg-perl)
  • libheimdal-kadm5-perl (refreshed and donated to pkg-perl)
  • libnet-ldapapi-perl
  • puppet-modules-puppetlabs-apt
  • puppet-modules-puppetlabs-firewall (team upload)
  • puppet-modules-puppetlabs-ntp (team upload)
  • puppet-modules-puppetlabs-stdlib (two uploads)
  • rssh (packaging refresh, now using dgit)
  • webauth (now orphaned)
  • xfonts-jmk

There are a few more orphaning uploads and uploads giving Perl packages to the Perl packaging team coming, and then I should be down to the set of packages I'm in a position to actively maintain and I can figure out where I want to focus going forward.

Planet DebianDirk Eddelbuettel: prrd 0.0.1: Parallel Running [of] Reverse Depends

A new package is now on the ghrr drat. It was uploaded four days ago to CRAN but still lingers in the inspect/ state, along with a growing number of other packages. But as some new packages have come through, I am sure it will get processed eventually but in the meantime I figured I may as well make it available this way.

The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development (provided you care about not breaking other packages, as CRAN asks you to), and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies).

So this package uses the liteq package by Gabor Csardi to set up all tests to run as tasks in a queue. This permits multiple independent queue runners to each work on a task at a time. Results are written back and summarized.

This already works pretty well as evidenced by the following screenshot (running six parallel workers, arranged in a split byobu session).

See the aforementioned webpage and its repo for more details, and by all means give it a whirl.

For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Annual Penguin Picnic, January 28, 2018

Jan 28 2018 12:00
Jan 28 2018 18:00
Location: 
Yarra Bank Reserve, Hawthorn.

The Linux Users of Victoria Annual Penguin Picnic will be held on Sunday, January 28, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.


CryptogramSpectre and Meltdown Attacks

After a week or so of rumors, everyone is now reporting about the Spectre and Meltdown attacks against pretty much every modern processor out there.

These are side-channel attacks where one process can spy on other processes. They affect computers where an untrusted browser window can execute code, phones that have multiple apps running at the same time, and cloud computing networks that run lots of different processes at once. Fixing them either requires a patch that results in a major performance hit, or is impossible and requires a re-architecture of conditional execution in future CPU chips.

I'll be writing something for publication over the next few days. This post is basically just a link repository.

EDITED TO ADD: Good technical explanation. And a Slashdot thread.

EDITED TO ADD (1/5): Another good technical description. And how the exploits work through browsers. A rundown of what vendors are doing. Nicholas Weaver on its effects on individual computers.

EDITED TO ADD (1/7): xkcd.

,

Planet DebianSven Hoexter: BIOS update Dell Latitude E7470 and Lenovo TP P50

Maybe some recent events led to BIOS update releases by various vendors around the end of 2017. So I set out to update (for the first time) the BIOS of my laptops. Searching the interwebs for some hints I found a lot of outdated information involving USB thumb drives, CDs, FreeDOS in variants, but also some useful stuff. So here is the short list of what actually worked in case I need to do it again.

Update: Added a Wiki page so it's possible to extend the list. Seems that some of us avoided the update hassle so far, but now with all those Intel ME CVEs and Intel microcode updates it's likely we'll have to do it more often.

Dell Latitude E7470 (UEFI boot setup)

  1. Download the file "Latitude_E7x70_1.18.5.exe" (or whatever is the current release).
  2. Move the file to "/boot/efi/".
  3. Boot into the one time boot menu with F12 during the BIOS/UEFI start.
  4. Select the "Flash BIOS Update" menu option.
  5. Use your mouse to select the update file visually and watch the magic.

So no USB sticks, FreeDOS, SystemrescueCD images or other tricks involved. Whether it's cool that the computer in your computer's computer running Minix (or whatever is involved in this case) updates your firmware is a different topic, but the process is pretty straightforward.

Lenovo ThinkPad P50

  1. Download the BIOS Update bootable CD image from Lenovo "n1eur31w.iso" (Select Windows as OS so it's available for download).
  2. Extract the eltorito boot image from the image "geteltorito -o thinkpad.img Downloads/n1eur31w.iso".
  3. Dump it on a USB thumb drive "dd if=thinkpad.img of=/dev/sdX".
  4. Boot from this thumb drive and follow the instructions of the installer.

I guess the process is similar for almost all ThinkPads.

Planet DebianDirk Eddelbuettel: tint 0.0.5

A maintenance release of the tint package arrived on CRAN earlier this week. Its name expands from tint is not tufte as the package offers a fresher take on the Tufte-style for html and pdf presentations.

A screenshot of the pdf variant is below.

Similar to the RcppCNPy release this week, this is pure maintenance related to dependencies. CRAN noticed that processing these vignettes requires the mgcv package---as we use geom_smooth() in some example graphs. So that was altered so that the vignette tests no longer require it. We also had one pending older change related to jurassic pandoc versions on some CRAN architectures.

Changes in tint version 0.0.5 (2018-01-05)

  • Only run html rendering regression test on Linux or Windows as the pandoc versions on CRAN are too old elsewhere.

  • Vignette figures reworked so that the mgcv package is not required avoiding a spurious dependency [CRAN request]

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RcppCNPy 0.2.8

A minor maintenance release of the RcppCNPy package arrived on CRAN this week.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

There is no code change here. But to process the vignette we rely on knitr which sees Python here and (as of its most recent release) wants the (excellent !!) reticulate package. Which is of course overkill just to process a short pdf document, so we turned this off.

Changes in version 0.2.8 (2018-01-04)

  • Vignette sets knitr option python.reticulate=FALSE to avoid another dependency just for the vignette [CRAN request]

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRaphaël Hertzog: My Free Software Activities in December 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h and had two hours left over, but I only spent 13h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (18 commits to the security tracker).

I also released DLA-1205-1 on simplesamlphp fixing 6 CVE. I prepared and released DLA-1207-1 on erlang with the help of the maintainer who tested the patch that I backported. I handled tkabber but it turned out that the CVE report was wrong; I reported this to MITRE who marked the CVE as DISPUTED (see CVE-2017-17533).

During my CVE triaging work, I decided to mark mp3gain and libnet-ping-external-perl as unsupported (the latter has been removed everywhere already). I re-classified the suricata CVE as not worth an update (following the decision of the security team). I also dropped global from dla-needed as the issue was marked unimportant but I still filed #884912 about it so that it gets tracked in the BTS.

I filed #884911 on ohcount requesting new upstream (fixing CVE) and update of homepage field (that is misleading in current package). I dropped jasperreports from dla-needed.txt as issues are undetermined and upstream is uncooperative, instead I suggested to mark the package as unsupported (see #884907).

Misc Debian Work

Debian Installer. I suggested to switch to isenkram instead of discover for automatic package installation based on recognized hardware. I also filed a bug on isenkram (#883470) and asked debian-cloud for help to complete the missing mappings.

Packaging. I sponsored asciidoc 8.6.10-2 for Joseph Herlant. I uploaded new versions of live-tools and live-build fixing multiple bugs that had been reported (many with patches ready to merge). Only #882769 required a bit more work to track down and fix. I also uploaded dh-linktree 0.5 with a new feature contributed by Paul Gevers. By the way, I no longer use this package so I will happily give it over to anyone who needs it.

QA team. When I got my account on salsa.debian.org (a bit before the announcement of the beta phase), I created the group for the QA team and set up a project for distro-tracker.

Bug reports. I filed #884713 on approx, requesting that systemd’s approx.socket be configured to not have any trigger limit.

Package Tracker

Following the switch to Python 3 by default, I updated the packaging provided in the git repository. I’m now also providing a systemd unit to run gunicorn3 for the website.

I merged multiple patches from Pierre-Elliott Bécue fixing bugs and adding a new feature (vcswatch support!). I fixed a bug related to the lack of a link to the experimental build logs and did a bit of bug triaging.

I also filed two bugs against DAK related to bad interactions with the package tracker: #884930 because it does still use packages.qa.debian.org to send emails instead of tracker.debian.org. And #884931 because it sends removal mails to too many email addresses. And I filed a bug against the tracker (#884933) because the last issue also revealed a problem in the way the tracker handles removal mails.

Thanks

See you next month for a new summary of my activities.


Planet DebianShirish Agarwal: RequestPolicy Continued

Dear Friends,

First up, I saw a news item about a fake Indian e-visa portal. As it is/was Sunday, I decided to see if there indeed is such a mess. I dug out torbrowser-bundle (tbb), checked the IP it was giving me (some Canadian IP starting with 216.xxx.xx.xx), typed in ‘Indian visa application’ and used duckduckgo.com to see which result cropped up first.

I deliberately used tbb as I wanted to ensure the query wasn't coming from an Indian IP, where the chances of the Indian e-visa portal being faked should be negligible. Scamsters would surely be knowledgeable enough to differentiate between IPs coming from India and IPs coming from some other country.

The first result duckduckgo.com gave was https://indianvisaonline.gov.in/visa/index.html

I then proceeded to download whois on my new system (more on that in another blog post):

$ sudo aptitude install whois

and proceeded to see if it’s the genuine thing or not and this is the information I got –

$ whois indianvisaonline.gov.in
Access to .IN WHOIS information is provided to assist persons in determining the contents of a domain name registration record in the .IN registry database. The data in this record is provided by .IN Registry for informational purposes only, and .IN does not guarantee its accuracy. This service is intended only for query-based access. You agree that you will use this data only for lawful purposes and that, under no circumstances will you use this data to: (a) allow, enable, or otherwise support the transmission by e-mail, telephone, or facsimile of mass unsolicited, commercial advertising or solicitations to entities other than the data recipient's own existing customers; or (b) enable high volume, automated, electronic processes that send queries or data to the systems of Registry Operator, a Registrar, or Afilias except as reasonably necessary to register domain names or modify existing registrations. All rights reserved. .IN reserves the right to modify these terms at any time. By submitting this query, you agree to abide by this policy.

Domain ID:D4126837-AFIN
Domain Name:INDIANVISAONLINE.GOV.IN
Created On:01-Apr-2010 12:10:51 UTC
Last Updated On:18-Apr-2017 22:32:00 UTC
Expiration Date:01-Apr-2018 12:10:51 UTC
Sponsoring Registrar:National Informatics Centre (R12-AFIN)
Status:OK
Reason:
Registrant ID:dXN4emZQYOGwXU6C
Registrant Name:Director Immigration and Citizenship
Registrant Organization:Ministry of Home Affairs
Registrant Street1:NDCC-II building
Registrant Street2:Jaisingh Road
Registrant Street3:
Registrant City:New Delhi
Registrant State/Province:Delhi
Registrant Postal Code:110001
Registrant Country:IN
Registrant Phone:+91.23438035
Registrant Phone Ext.:
Registrant FAX:+91.23438035
Registrant FAX Ext.:
Registrant Email:dsmmp-mha@nic.in
Admin ID:dXN4emZQYOvxoltA
Admin Name:Director Immigration and Citizenship
Admin Organization:Ministry of Home Affairs
Admin Street1:NDCC-II building
Admin Street2:Jaisingh Road
Admin Street3:
Admin City:New Delhi
Admin State/Province:Delhi
Admin Postal Code:110001
Admin Country:IN
Admin Phone:+91.23438035
Admin Phone Ext.:
Admin FAX:+91.23438035
Admin FAX Ext.:
Admin Email:dsmmp-mha@nic.in
Tech ID:jiqNEMLSJPA8a6wT
Tech Name:Rakesh Kumar
Tech Organization:National Informatics Centre
Tech Street1:National Data Centre
Tech Street2:Shashtri Park
Tech Street3:
Tech City:New Delhi
Tech State/Province:Delhi
Tech Postal Code:110053
Tech Country:IN
Tech Phone:+91.24305154
Tech Phone Ext.:
Tech FAX:
Tech FAX Ext.:
Tech Email:nsrawat@nic.in
Name Server:NS1.NIC.IN
Name Server:NS2.NIC.IN
Name Server:NS7.NIC.IN
Name Server:NS10.NIC.IN
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
DNSSEC:Unsigned

It seems to be a legitimate site as almost all the information seems to be legit. I know for a fact that all, or 99%, of Indian government websites are done by NIC, the National Informatics Centre. The only thing which rankled me was that DNSSEC was unsigned, but then I haven’t seen NIC being as pro-active about web security as they should be, given they handle many sensitive government internal and external websites.

I did send them an email imploring them to use the new security feature.

To be doubly sure, one could also use an add-on like showip, add it to your Firefox profile, and use any of the web services to obtain the IP address of the website.

For instance, the same website which we are investigating gives 164.100.129.113
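If you'd rather not add anything to your browser, the same lookup can be done locally; a trivial Python equivalent (what you get back is whatever DNS returns for you at the time, not necessarily the address above):

# Resolve the site's IP address locally instead of via a browser add-on.
import socket
print(socket.gethostbyname("indianvisaonline.gov.in"))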

Doing a whois of 164.100.129.113 shows that NICNET has purchased a whole range of addresses, i.e. 164.100.0.0 – 164.100.255.255, a block of 65,536 addresses which it uses.

One can see NIC’s wikipedia page to understand the scope it works under.

So from both accounts, it is safe to assume that the web-site and page is legit.

Well, that’s about it for the site. While this is, and should be, trivial for most Debian users, it might not be for all web users, but it is one way in which you can find out whether a site is legitimate.

A few weeks back, I read Colin’s blog post about Kitten Block, which was also put on p.d.o.

So let me share RequestPolicy Continued –

Requestpolicy Continued Mozilla Add-on

This is a continuation of RequestPolicy which was abandoned (upstream) by the original developer and resides in the Debian repo.

http://tracker.debian.org/xul-ext-requestpolicy

I did file a ticket, #870607, stating both the name change and where the new watch file should point.

What it does is similar to what Adblock/Kitten Block does, plus more. It basically blocks any third-party domain from showing content to you unless you give it permission. It is very similar to another add-on called uBlock Origin.

I liked RPC, as it’s known, because it hardly has any learning curve.

You install the add-on, see which third-party domains you need and just allow them. For instance, fonts.googleapis.com and ajax.googleapis.com are used by many sites nowadays, while pictures or pictographic content are usually served by either Cloudflare or CloudFront.

One of the big third parties that you would encounter, of course, is google.com and gstatic.net. Lots of people use gstatic and its brethren for spam protection, but they come with the cost of user identifiability and also the controversial crowdsourced image recognition.

It is a good add-on which does remind you of competing offerings elsewhere, but it is also a stark reminder of how deeply Google has penetrated into sites, and at what levels.

I use tor-browser and RPC so my browsing is distraction-free, as loads of sites have nowadays moved to huge bandwidth-consuming animated ads etc. While I’m on a slow non-metered (eat/surf all you want) kind of service, for those on metered plans (x bytes for y price, including upload and download) the above is also a god-send.

On the upstream side, they do need help both with development and with testing the build. While I’m not sure, I think the maintainer didn’t reply or do anything about my bug as he knew that WebExtensions are around the corner. Upstream has said he hopes to have a new build compatible with web extensions by the end of February 2018.

On the Debian side of things, I have filed #870607 but know it probably will only be acted on once the port to WebExtensions has been completed and some testing done, so it might take time.



Planet Linux AustraliaJonathan Adamczewski: A little bit of floating point in a memory allocator — Part 2: The floating point

[Previously]

This post contains the same material as this thread of tweets, with a few minor edits.

In IEEE754, floating point numbers are represented like this:

±2ⁿⁿⁿ×1.sss…

nnn is the exponent, which is floor(log2(size)) — which happens to be the fl value computed by TLSF.

sss… is the significand fraction: the part that follows the decimal point, which happens to be sl.

And so to calculate fl and sl, all we need to do is convert size to a floating point value (on recent x86 hardware, that’s a single instruction). Then we can extract the exponent, and the upper bits of the fractional part, and we’re all done :D

That can be implemented like this:

double sf = (int64_t)size;
uint64_t sfi;
memcpy(&sfi, &sf, 8);
fl = (sfi >> 52) - (1023 + 7);
sl = (sfi >> 47) & 31;

There’s some subtleties (there always is). I’ll break it down…

double sf = (int64_t)size;

Convert size to a double, with an explicit cast. size has type size_t, but using TLSF from github.com/mattconte/tlsf, the largest supported allocation on 64bit architecture is 2^32 bytes – comfortably less than the precision provided by the double type. If you need your TLSF allocator to allocate chunks bigger than 2^53, this isn’t the technique for you :)

I first tried using float (not double), which can provide correct results — but only if the rounding mode happens to be set correctly. double is easier.

The cast to (int64_t) results in better codegen on x86: without it, the compiler will generate a full 64bit unsigned conversion, and there is no single instruction for that.

The cast tells the compiler to (in effect) consider the bits of size as if they were a two’s complement signed value — and there is an SSE instruction to handle that case (cvtsi2sdq or similar). Again, with the implementation we’re using size can’t be that big, so this will do the Right Thing.

uint64_t sfi;
memcpy(&sfi, &sf, 8);

Copy the 8 bytes of the double into an unsigned integer variable. There are a lot of ways that C/C++ programmers copy bits from floating point to integer – some of them are well defined :) memcpy() does what we want, and any moderately respectable compiler knows how to select decent instructions to implement it.

Now we have floating point bits in an integer register, consisting of one sign bit (always zero for this, because size is always positive), eleven exponent bits (offset by 1023), and 52 bits of significand fraction. All we need to do is extract those, and we’re done :)

fl = (sfi >> 52) - (1023 + 7);

Extract the exponent: shift it down (ignoring the always-zero sign bit), subtract the offset (1023), and that 7 we saw earlier, at the same time.

sl = (sfi >> 47) & 31;

Extract the five most significant bits of the fraction – we do need to mask out the exponent.

And, just like that*, we have mapping_insert(), implemented in terms of integer -> floating point conversion.

* Actual code (rather than fragments) may be included in a later post…

Planet Linux AustraliaJonathan Adamczewski: A little bit of floating point in a memory allocator — Part 1: Background

This post contains the same material as this thread of tweets, with a few minor edits.

Over my holiday break at the end of 2017, I took a look into the TLSF (Two Level Segregated Fit) memory allocator to better understand how it works. I’ve made use of this allocator and have been impressed by its real world performance, but never really done a deep dive to properly understand it.

The mapping_insert() function is a key part of the allocator implementation, and caught my eye. Here’s how that function is described in the paper A constant-time dynamic storage allocator for real-time systems:

I’ll be honest: from that description, I never developed a clear picture in my mind of what that function does.

(Reading it now, it seems reasonably clear – but I can say that only after I spent quite a bit of time using other methods to develop my understanding)

Something that helped me a lot was by looking at the implementation of that function from github.com/mattconte/tlsf/.  There’s a bunch of long-named macro constants in there, and a few extra implementation details. If you collapse those it looks something like this:

void mapping_insert(size_t size, int* fli, int* sli)
{ 
  int fl, sl;
  if (size < 256)
  {
    fl = 0;
    sl = (int)size / 8;
  }
  else
  {
    fl = fls(size);
    sl = (int)(size >> (fl - 5)) ^ 0x20;
    fl -= 7;
  }
  *fli = fl;
  *sli = sl;
}

It’s a pretty simple function (it really is). But I still failed to *see* the pattern of results that would be produced in my mind’s eye.

I went so far as to make a giant spreadsheet of all the intermediate values for a range of inputs, to paint myself a picture of the effect of each step :) That helped immensely.

Breaking it down…

There are two cases handled in the function: one for when size is below a certain threshold, and one for when it is larger. The first is straightforward, and accounts for a small number of possible input values. The large size case is more interesting.

The function computes two values: fl and sl, the first and second level indices for a lookup table. For the large case, fl (where fl is “first level”) is computed via fls(size) (where fls is short for “find last set” – similar names, just to keep you on your toes).

fls() returns the index of the largest bit set, counting from the least significant bit, which is the index of the largest power of two. In the words of the paper:

“the instruction fls can be used to compute the ⌊log2(x)⌋ function”

Which is, in C-like syntax: floor(log2(x))

And there’s that “fl -= 7” at the end. That will show up again later.

For the large case, the computation of sl has a few steps:

  sl = (size >> (fl - 5)) ^ 0x20;

Shift size down by some amount (depending on fl), and mask out the sixth bit?

(Aside: The CellBE programmer in me is flinching at that variable shift)

It took me a while (longer than I would have liked…) to realize that this size >> (fl - 5) is shifting size to generate a number that has exactly six significant bits, at the least significant end of the register (bits 5 thru 0).

Because fl is the index of the most significant bit, after this shift, bit 5 will always be 1 – and that “^ 0x20” will unset it, leaving the result as a value between 0 and 31 (inclusive).

So here’s where floating point comes into it, and the cute thing I saw: another way to compute fl and sl is to convert size into an IEEE754 floating point number, and extract the exponent, and most significant bits of the mantissa. I’ll cover that in the next part, here.
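A quick way to convince yourself that the two computations agree (the integer version above and the floating-point version from the other post) is to check them against each other. A minimal Python sketch, not from the original posts, covering the large-size branch:

# Check that the integer mapping_insert and the IEEE754-based version agree.
import struct

def mapping_insert_int(size):
    # Large-size branch of the C code above, translated to Python.
    fl = size.bit_length() - 1            # fls(size)
    sl = (size >> (fl - 5)) ^ 0x20
    return fl - 7, sl

def mapping_insert_float(size):
    # Reinterpret the double's bits; extract exponent and top fraction bits.
    sfi = struct.unpack("<Q", struct.pack("<d", float(size)))[0]
    return (sfi >> 52) - (1023 + 7), (sfi >> 47) & 31

for size in range(256, 1 << 16):
    assert mapping_insert_int(size) == mapping_insert_float(size)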

,

CryptogramFriday Squid Blogging: How the Optic Lobe Controls Squid Camouflage

Experiments on the oval squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianSteve Kemp: More ESP8266 projects, radio and epaper

I finally got the radio-project I've been talking about for the past while working. To recap:

  • I started with an RDA5807M module, but that was too small, and too badly-performing.
  • I moved on to using an Si4703-based integrated "evaluation" board. That was fine for headphones, but little else.
  • I finally got a TEA5767-based integrated "evaluation" board, which works just fine.
    • Although it is missing RDS (the system that lets you pull the name of the station off the transmission).
    • It also has no (digital) volume-control, so you have to adjust the volume physically, like a savage.

The project works well, despite the limitations, so I have a small set of speakers and the radio wired up. I can control the station via my web-browser and have an alarm to make it turn on/off at different times of day - cheating at that by using the software-MUTE facility.

All in all I can say that when it comes to IoT the "S stands for Simplicity" given that I had to buy three different boards to get the damn thing working the way I wanted. That said total cost is in the region of €5, probably about the same price I could pay for a "normal" hand-held radio. Oops.

The writeup is here:

The second project I've been working on recently was controlling a piece of ePaper via an ESP8266 device. This started largely by accident as I discovered you can buy a piece of ePaper (400x300 pixels) for €25 which is just cheap enough that it's worth experimenting with.

I had the intention that I'd display the day's calendar upon it, weather forecast, etc. My initial vision was a dashboard-like view with borders, images, and text. I figured rather than messing around with some fancy code-based grid-layout I should instead just generate a single JPG/PNG on a remote host, then program the board to download and display it.

Unfortunately the ESP8266 device I'm using has so little RAM that decoding and displaying a JPG/PNG from a remote URL is hard. Too hard. In the end I had to drop the use of SSL, and simplify the problem to get a working solution.

I wrote a perl script (what else?) to take an arbitrary JPG/PNG/image of the correct dimensions and process it row-by-row. It would keep track of the number of contiguous white/black pixels and output a series of "draw Lines" statements.

The ESP8266 downloads this simple data-file, and draws each line one at a time, ultimately displaying the image whilst keeping some memory free.
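The script itself is Perl and isn't shown here; as a rough illustration of the same idea (black runs in each row become line-draw commands), here is a hedged Python sketch using Pillow, with a made-up "L x0 y0 x1 y1" output format standing in for whatever the firmware actually expects:

# Rough sketch: turn a 400x300 image into horizontal line-draw commands,
# one per contiguous run of black pixels in each row.
from PIL import Image

def image_to_lines(path, width=400, height=300):
    img = Image.open(path).resize((width, height)).convert("1")
    px = img.load()
    commands = []
    for y in range(height):
        x = 0
        while x < width:
            if px[x, y] == 0:                      # black pixel: start of a run
                start = x
                while x < width and px[x, y] == 0:
                    x += 1
                commands.append(f"L {start} {y} {x - 1} {y}")
            else:
                x += 1
    return commands

if __name__ == "__main__":
    for cmd in image_to_lines("dashboard.png"):
        print(cmd)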

I documented the hell out of my setup here:

And here is a sample image being displayed:

Krebs on SecurityScary Chip Flaws Raise Spectre of Meltdown

Apple, Google, Microsoft and other tech giants have released updates for a pair of serious security flaws present in most modern computers, smartphones, tablets and mobile devices. Here’s a brief rundown on the threat and what you can do to protect your devices.

At issue are two different vulnerabilities, dubbed “Meltdown” and “Spectre,” that were independently discovered and reported by security researchers at Cyberus Technology, Google, and the Graz University of Technology. The details behind these bugs are extraordinarily technical, but a Web site established to help explain the vulnerabilities sums them up well enough:

“These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.”

“Meltdown and Spectre work on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.”

The Meltdown bug affects every Intel processor shipped since 1995 (with the exception of Intel Itanium and Intel Atom before 2013), although researchers said the flaw could impact other chip makers. Spectre is a far more wide-ranging and troublesome flaw, impacting desktops, laptops, cloud servers and smartphones from a variety of vendors. However, according to Google researchers, Spectre also is considerably more difficult to exploit.

In short, if it has a computer chip in it, it’s likely affected by one or both of the flaws. For now, there don’t appear to be any signs that attackers are exploiting either to steal data from users. But researchers warn that the weaknesses could be exploited via Javascript — meaning it might not be long before we see attacks that leverage the vulnerabilities being stitched into hacked or malicious Web sites.

Microsoft this week released emergency updates to address Meltdown and Spectre in its various Windows operating systems. But the software giant reports that the updates aren’t playing nice with many antivirus products; the fix apparently is causing the dreaded “blue screen of death” (BSOD) for some antivirus users. In response, Microsoft has asked antivirus vendors who have updated their products to avoid the BSOD crash issue to install a special key in the Windows registry. That way, Windows Update can tell whether it’s safe to download and install the patch.

But not all antivirus products have been able to do this yet, which means many Windows users likely will not be able to download this patch immediately. If you run Windows Update and it does not list a patch made available on Jan 3, 2018, it’s likely your antivirus software is not yet compatible with this patch.

Google has issued updates to address the vulnerabilities on devices powered by its Android operating system. Meanwhile, Apple has said that all iOS and Mac systems are vulnerable to Meltdown and Spectre, and that it has already released “mitigations” in iOS 11.2, macOS 10.13.2, and tvOS 11.2 to help defend against Meltdown. The Apple Watch is not impacted. Patches to address this flaw in Linux systems were released last month.

Many readers appear concerned about the potential performance impact that applying these fixes may have on their devices, but my sense is that most of these concerns are probably overblown for regular end users. Forgoing security fixes over possible performance concerns doesn’t seem like a great idea considering the seriousness of these bugs. What’s more, the good folks at benchmarking site Tom’s Hardware say their preliminary tests indicate that there is “little to no performance regression in most desktop workloads” as a result of applying available fixes.

Meltdownattack.com has a full list of vendor advisories. The academic paper on Meltdown is here (PDF); the paper for Spectre can be found at this link (PDF). Additionally, Google has published a highly technical analysis of both attacks. Cyberus Technology has their own blog post about the threats.

CryptogramSpectre and Meltdown Attacks Against Microprocessors

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution -- which of course is not a solution -- is to throw them all away and buy new ones.

On Wednesday, researchers just announced a series of major security vulnerabilities in the microprocessors at the heart of the world's computers for the past 15-20 years. They've been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer -- maybe one running in a browser window from that sketchy site you're visiting, or as a result of a phishing attack -- can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends. It is also unworkable. The problem is that there isn't anything to buy that isn't vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there's no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here's a running list of who's patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computer and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.

The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.

We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some anti-virus software blocks the patch, or -- worse -- crashes the computer. This results in a three-step process: patch your anti-virus software, patch your operating system, and then patch the computer's firmware.

The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn't to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don't click on strange e-mail attachments, don't visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won't notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don't do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It's a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they'll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride.


Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

Planet DebianJoey Hess: Spectre question

Could ASLR be used to prevent the Spectre attack?

The way Spectre mitigations are shaping up, it's going to require modification of every program that deals with sensitive data, inserting serialization instructions in the right places. Or programs can be compiled with all branch prediction disabled, with more of a speed hit.
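
To make "inserting serialization instructions in the right places" concrete, here is a minimal illustrative sketch in C (not taken from any actual patch set; the function name read_checked is made up) of the pattern being proposed for the bounds-check variant of Spectre: a speculation barrier placed immediately after the check, so the processor cannot run ahead with an out-of-bounds index. _mm_lfence() is the SSE2 intrinsic from <emmintrin.h>, compiled with gcc or clang on x86-64.

/* Hypothetical example: stop speculation past a bounds check. */
#include <emmintrin.h>   /* _mm_lfence() */
#include <stddef.h>
#include <stdint.h>

uint8_t table[256];

uint8_t read_checked(size_t idx, size_t len)
{
        if (idx < len) {
                _mm_lfence();      /* serializing barrier: the load below cannot execute speculatively */
                return table[idx]; /* only reached with a validated index */
        }
        return 0;
}

Getting barriers like this into the right places by hand, in every program that handles secrets, is exactly the piecemeal work described next.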

Either way, that's going to be piecemeal and error-prone. We'll be stuck with a new class of vulnerabilities for a long time. Perhaps good news for the security industry, but it's going to become as tediously bad as buffer overflows for the rest of us.

Also, so far the mitigations being developed for Spectre only cover branching, but the Spectre paper also suggests the attack can be used in the absence of branches to eg determine the contents of registers, as long as the attacker knows the address of suitable instructions to leverage.

So I wonder if a broader mitigation can be developed, if only to provide some defense in depth from Spectre.

The paper on Spectre has this to say about ASLR:

The algorithm that tracks and matches jump histories appears to use only the low bits of the virtual address (which are further reduced by simple hash function). As a result, an adversary does not need to be able to even execute code at any of the memory addresses containing the victim’s branch instruction. ASLR can also be compensated, since upper bits are ignored and bits 15..0 do not appear to be randomized with ASLR in Win32 or Win64.

I think ASLR on Linux also doesn't currently randomize those bits of the address; cat /proc/self/maps shows the low 12 bits (the last three hex digits) of mapped addresses always 0. But, could it be made to randomize more of the address, and so guard against Spectre?

(Then we'd only have to get all binaries built PIE, which is easily audited for and distributions are well on their way toward already. Indeed, most executables on my laptop are already built PIE, with the notable exception of /bin/bash and all haskell programs.)
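
As a quick way to see the current behaviour, here is a tiny illustrative C program (the file and function names are made up) that just prints where a function ended up in memory. Built as a PIE (gcc -fPIE -pie aslr-probe.c -o aslr-probe) and run a few times, the upper bits of the printed address change from run to run while the low 12 bits stay fixed, because mappings are page-aligned; built with -no-pie, the address never changes at all.

/* aslr-probe.c: print the load address of a function to observe ASLR granularity. */
#include <stdio.h>

static void probe(void)
{
        /* intentionally empty; we only care about this function's address */
}

int main(void)
{
        printf("probe() is at %p\n", (void *)probe);
        return 0;
}

Whether ASLR could be made to randomize more of the address bits that the branch predictor actually uses is exactly the open question here.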

I don't have an answer to this question, but am very curious about thoughts from anyone who knows about ASLR and low level CPU details. I have not been able to find any discussion of it aside from the above part of the Spectre paper.

Cory DoctorowA Hopeful Look At The Apocalypse: An interview with PRI’s Innovation Hub


I chatted with Innovation Hub, distributed by PRI, about the role of science fiction and dystopia in helping to shape the future (MP3).


Three Takeaways


1. Doctorow thinks that science-fiction can give people “ideas for what to do if the future turns out in different ways.” Like how William Gibson’s Neuromancer didn’t just predict the internet, it predicted the intermingling of corporations and the state.

2. When you have story after story about how people turn on each other after disaster, Doctorow believes it gives us the largely false impression that people act like jerks in crises. When in fact, people usually rise to the occasion.

3. With Walkaway, his “optimistic” disaster novel, Doctorow wanted to present a new narrative about resolving differences between people who are mostly on the same side.

Sociological ImagesIn Alabama’s Special Election, What about the Men?

Over the last few weeks, commentary about alleged sexual predator Roy Moore’s failure to secure a seat in the U.S. Senate has flooded our news and social media feeds, shining a spotlight on the critical role of Black women in the election. About 98% of Black women, comprising 17% of total voters, cast their ballots for Moore’s opponent Doug Jones, ensuring Jones’s victory. At the same time, commentators questioned the role of White women in supporting Moore. Sources estimate that 63% of White women voted for Moore, including the majority of college-educated White women.

Vogue proclaimed, “Doug Jones Won, but White Women Lost.” U.S. News and World Reports asked, “Why do so many White women vote for misogynists?” Feminist blog Jezebel announced succinctly: “White women keep fucking us over.” Fair enough. But we have to ask, “What about Black and White men?” The fact that 48% of Alabama’s voting population is absent from these conversations is not accidental. It’s part of an incomplete narrative that focuses solely on the impact of women voters and continues the false narrative that fixing inequality is solely their burden.

Let’s focus first on Black men. Exit poll data indicate that 93% of Black men voted for Jones, and they accounted for 11% of the total vote. Bluntly put, Jones could not have secured his razor-thin victory without their votes. Yet, media commentary about their specific role in the election is typically obscured. Several articles note the general turnout of Black voters without explicitly highlighting the contribution of Black men. Other articles focus on the role of Black women exclusively. In a Newsweek article proclaiming Black women “Saved America,” Black men receive not a single mention. In addition to erasing a key contribution, this incomplete account of Jones’s victory masks concerns about minority voter suppression and the Democratic party taking Black votes for granted.

White men comprised 35% of total voters in this election, and 72% of them voted for Moore. But detailed commentary on their overwhelming support for Moore – a man who said that Muslims shouldn’t serve in Congress and that America was “great” during the time of slavery, and who was accused of harassing and/or assaulting at least nine women, while they were in their teens and he was in his thirties – is frankly rare. The scant mentions in popular media may best be summed up as: “We expect nothing more from White men.”

As social scientists, we know that expectations matter. A large body of work indicates that negative stereotypes of Black boys and men are linked to deleterious outcomes in education, crime, and health. Within our academic communities we sagely nod our heads and agree we should change our expectations of Black boys and men to ensure better outcomes. But this logic of high expectations is rarely applied to White men. The work of Jackson Katz is an important exception. He and a handful of others have, for years, pointed out that gender-blind conversations about violence perpetrated by men, primarily against women – in families, in romantic relationships, and on college campuses – serve only to perpetuate this violence by making its prevention a woman’s problem.

The parallels to politics in this case are too great to ignore. It’s not enough for the media to note that voting trends for the Alabama senate election were inherently racist and sexist. Pointing out that Black women were critically important in determining election outcomes, and that most White women continued to engage in the “patriarchal bargain” by voting for Moore is a good start, but not sufficient. Accurate coverage would offer thorough examinations of the responsibility of all key players – in this case the positive contributions of Black men, and the negative contributions of White men. Otherwise, coverage risks downplaying White men’s role in supporting public officials who are openly or covertly racist or sexist. This perpetuates a social structure that privileges White men above all others and then consistently fails to hold them responsible for their actions. We can, and must, do better.

Mairead Eastin Moloney is an Assistant Professor of Sociology at the University of Kentucky. 

(View original at https://thesocietypages.org/socimages)

CryptogramNew Book Coming in September: "Click Here to Kill Everybody"

My next book is still on track for a September 2018 publication. Norton is still the publisher. The title is now Click Here to Kill Everybody: Peril and Promise on a Hyperconnected Planet, which I generally refer to as CH2KE.

The table of contents has changed since I last blogged about this, and it now looks like this:

  • Introduction: Everything is Becoming a Computer
  • Part 1: The Trends
    • 1. Computers are Still Hard to Secure
    • 2. Everyone Favors Insecurity
    • 3. Autonomy and Physical Agency Bring New Dangers
    • 4. Patching is Failing as a Security Paradigm
    • 5. Authentication and Identification are Getting Harder
    • 6. Risks are Becoming Catastrophic
  • Part 2: The Solutions
    • 7. What a Secure Internet+ Looks Like
    • 8. How We Can Secure the Internet+
    • 9. Government is Who Enables Security
    • 10. How Government Can Prioritize Defense Over Offense
    • 11. What's Likely to Happen, and What We Can Do in Response
    • 12. Where Policy Can Go Wrong
    • 13. How to Engender Trust on the Internet+
  • Conclusion: Technology and Policy, Together

Two questions for everyone.

1. I'm not really happy with the subtitle. It needs to be descriptive, to counterbalance the admittedly clickbait title. It also needs to telegraph: "everyone needs to read this book." I'm taking suggestions.

2. In the book I need a word for the Internet plus the things connected to it plus all the data and processing in the cloud. I'm using the word "Internet+," and I'm not really happy with it. I don't want to invent a new word, but I need to strongly signal that what's coming is much more than just the Internet -- and I can't find any existing word. Again, I'm taking suggestions.

Planet DebianMichal Čihař: Gammu release day

I've just released new versions of Gammu, python-gammu and Wammu. These are mostly bugfix releases (see individual changelogs for more details), but they bring back Wammu for Windows.

This is an especially big step for Wammu, as the existing Windows binary was almost five years old. Another problem was that it was cross-compiled on Linux and did not always behave correctly. The current binaries are automatically produced on AppVeyor during our continuous integration.

Another important change for Windows users is the addition of wheel packages to python-gammu, so all you need to do to use it on Windows is pip install python-gammu.

Of course the updated packages are also on their way to Debian and to Ubuntu PPA.

Filed under: Debian English Gammu python-gammu Wammu

CryptogramDetecting Adblocker Blockers

Interesting research on the prevalence of adblock blockers: "Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis":

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered "free" Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Planet Linux AustraliaBen Martin: That gantry just pops right off

Hobby CNC machines sold as "3040" may have a gantry clearance of about 80mm and a z axis travel of around 55mm. A detached gantry is shown below. Notice that there are 3 bolts on the bottom side mounting the z-axis to the gantry. The stepper motor attaches on the side shown so there are 4 NEMA holes to hold the stepper. Note that the normal 3040 doesn't have the mounting plate shown on the z-axis; that crossover plate allows a different spindle to be mounted to this machine.


The plan is to create replacement sides with some 0.5 inch offcut 6061 alloy. This will add 100mm to the gantry so it can more easily clear clamps and a 4th axis. Because that would move the cutter mount upward as well, replacing the z-axis with something that has more range, say 160mm, becomes an interesting plan.

One advantage to upgrading a machine like this is that you can reassemble the machine after measuring and designing the upgrade and then cut replacement parts for the machine using the machine.

The 3040 can look a bit spartan with the gantry removed.


The preliminary research is done. Designs created. CAM done. I just have to cut 4 plates and then the real fun begins.


Worse Than FailureError'd: The Elephant in the Room

Robert K. wrote, "Let's just keep this error between us and never speak of it again."

 

"Not only does this web developer have a full-time job, but he's also got way more JQuery than the rest of us. So much, in fact, he's daring us to remove it," writes Mike H.

 

"Come on and get your Sample text...sample text here...", wrote Eric G.

 

Jan writes, "I just bought a new TV. Overall, it was a wonderful experience. So much so that I might become a loyal customer. Or not."

 

"Finally. It's time for me to show off my CAPTCHA-solving artistic skills!" Christoph writes.

 

Nils P. wrote, "Gee thanks, Zoho. I thought I'd be running out of space soon!"

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

Planet DebianSteinar H. Gunderson: Loose threads about Spectre mitigation

KPTI patches are out from most vendors now. If you haven't applied them yet, you should; even my phone updated today (the benefits of running a Nexus phone, I guess). This makes Meltdown essentially like any other localroot security hole (ie., easy to mitigate if you just update, although of course a lot won't do that), except for the annoying slowdown of some workloads. Sorry, that's life.

Spectre is more difficult. There are two variants; one abuses indirect jumps and one normal branches. There's no good mitigation for the last one that I know of at this point, so I won't talk about it, but it's also probably the hardest to pull off. But the indirect one is more interesting, as there are mitigations popping up. Here's my understanding of the situation, based on random browsing of LKML (anything in here may be wrong, so draw your own conclusions at the end):

Intel has issued microcode patches that they claim will make most of their newer CPUs (90% of the ones shipped in the last years) “immune from Spectre and Meltdown”. The cornerstone seems to be a new feature called IBRS, which allows you to flush the branch predictor or possibly turn it off entirely (it's not entirely clear to me which one it is). There's also something called IBPB (indirect branch prediction barrier), which seems to be most useful for AMD processors (which don't support IBRS at the moment, except some do sort-of anyway, and also Intel supports it), and it works somewhat differently from IBRS, so I don't know much about it.

Anyway, using IBRS on every entry to the kernel is really really slow, and if the kernel entry is just to serve a process and then go back to the same process (ie., no context switch), the kernel only needs to protect itself, not all other processes. Thus, it can make use of a new technique called a retpoline, which is essentially an indirect call that's written in a funny way such that the CPU doesn't try to branch-predict it (and thus is immune to this kind of Spectre attack). Similarly, LLVM has a retpoline patch set for general code (the code in the kernel seems to be hand-made, mostly), and GCC is getting a similar feature.
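
For the curious, the thunk at the core of those patch sets looks roughly like the following (a reconstruction of the publicly posted retpoline sequence for the %rax register, in GNU assembler syntax wrapped in a C file; the symbol name retpoline_call_rax is made up). A caller issues call retpoline_call_rax instead of call *%rax: the inner call pushes the address of the pause/lfence loop, so return prediction can only speculate into that harmless loop, while the real target is written over the return slot and reached by the architectural ret.

/* Sketch of a retpoline thunk for %rax (x86-64); based on the published idea,
   not copied from the kernel or LLVM patches. */
__asm__(
        ".text\n"
        ".globl retpoline_call_rax\n"
        "retpoline_call_rax:\n"
        "       call 1f\n"          /* pushes the address of the loop below as the return address */
        "0:     pause\n"            /* speculation through return prediction lands here...        */
        "       lfence\n"           /* ...and spins without touching anything interesting         */
        "       jmp 0b\n"
        "1:     mov %rax, (%rsp)\n" /* overwrite the return address with the real call target     */
        "       ret\n"              /* architecturally, this now behaves like jmp *%rax           */
);

This is also why, as noted below, Skylake still wants IBRS: on those parts a ret can fall back to the indirect predictor when the return stack underflows, so even a retpoline can be mispredicted.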

Retpolines work well for the kernel because it has very few indirect calls in hot paths (and in the common case of no context switch, it won't have to enable IBRS), so the overhead is supposedly only about 2%. If you're doing anything with lots of indirect calls (say, C++ without aggressive devirtualization from LTO or PGO), retpolines will be much more expensive—I mean, the predictor is there for a reason. And if you're on Skylake, retpolines are predicted anyway, so it's back to IBRS. Similarly, IBPB can be enabled on context switch or on every kernel entry, depending on your level of paranoia (and different combinations will seemingly perform better or worse on different workloads, except IBPB on every kernel entry will disable IBRS).

None of this seems to affect hyperthreading, but I guess scheduling stuff with different privileges on the same hyperthread pair is dangerous anyway, so don't do that (?).

Finally, I see the word SPEC_CTRL being thrown around, but supposedly it's just the mechanism for enabling or disabling IBRS. I don't know how this interacts with the fact that the kernel typically boots (and thus detects whether SPEC_CTRL is available or not) before the microcode is loaded (which typically happens in initramfs, or otherwise early in the boot sequence).

In any case, the IBRS patch set is here, I believe, and it also ships in RHEL/CentOS kernels (although you don't get it broken out into patches unless you're a paying customer).

Planet DebianKees Cook: SMEP emulation in PTI

A nice additional benefit of the recent Kernel Page Table Isolation (CONFIG_PAGE_TABLE_ISOLATION) patches (to defend against CVE-2017-5754, the speculative execution “rogue data cache load” or “Meltdown” flaw) is that the userspace page tables visible while running in kernel mode lack the executable bit. As a result, systems without the SMEP CPU feature (before Ivy-Bridge) get it emulated for “free”.

Here’s a non-SMEP system with PTI disabled (booted with “pti=off“), running the EXEC_USERSPACE LKDTM test:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: disabled on command line.
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   17.883754] lkdtm: Performing direct entry EXEC_USERSPACE
[   17.885149] lkdtm: attempting ok execution at ffffffff9f6293a0
[   17.886350] lkdtm: attempting bad execution at 00007f6a2f84d000

No crash! The kernel was happily executing userspace memory.

But with PTI enabled:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: enabled
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
Killed
# dmesg
[   33.657695] lkdtm: Performing direct entry EXEC_USERSPACE
[   33.658800] lkdtm: attempting ok execution at ffffffff926293a0
[   33.660110] lkdtm: attempting bad execution at 00007f7c64546000
[   33.661301] BUG: unable to handle kernel paging request at 00007f7c64546000
[   33.662554] IP: 0x7f7c64546000
...

It should only take a little more work to leave the userspace page tables entirely unmapped while in kernel mode, and only map them in during copy_to_user()/copy_from_user() as ARM already does with ARM64_SW_TTBR0_PAN (or CONFIG_CPU_SW_DOMAIN_PAN on arm32).

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Planet DebianNorbert Preining: Debian/TeX Live 2017.20180103-1

The new year has arrived, but in the TeX world not much has changed – we still get daily updates in upstream TeX Live, and once a month I push them out to Debian. So here is roughly the last month of changes.

Interestingly enough, everyone seems to be busy around December, so there are not too many updates compared to some other months. What did happen is that we re-included the TeX part of prerex and fixed some missing scripts.

Enjoy.

New packages

labelschanged, modernposter, pdfprivacy, pixelart, pm-isomath, scientific-thesis-cover, short-math-guide, textualicomma, thesis-gwu, tikz-karnaugh, timbreicmc, translator, xurl.

Updated packages

acmart, amiri, amscls, amscls-doc, aomart, beamer, beamerswitch, beebe, bib2gls, biblatex, biblatex-manuscripts-philology, biblatex-philosophy, bibtexperllibs, bidi, bnumexpr, bxjscls, citeall, crossreftools, crossrefware, ctan-o-mat, dejavu-otf, dox, dviasm, exam, factura, fixjfm, fmtcount, fontname, genealogytree, gfsdidot, graphics-def, gridslides, gtl, hecthese, hepthesis, hvfloat, hvindex, hyperxmp, ifxptex, jlreq, ku-template, l3build, l3experimental, l3kernel, l3packages, latex2nemeth, latexmk, limecv, lithuanian, lt3graph, lyluatex, macros2e, mcexam, mensa-tex, mfnfss, msu-thesis, multiexpand, musixtex, mwe, newtx, novel, optidef, overlays, phonenumbers, pixelart, pkgloader, platex, platex-tools, platexcheat, plex-otf, poemscol, prerex, pst-fractal, pst-node, pst-plot, pst-tools, pstricks, quran, randomwalk, rec-thy, reledmac, sasnrdisplay, scratch, scsnowman, siunitx, spectralsequences, spreadtab, svg, systeme, tetex, tex4ht, texdef, thuthesis, tikz-timing, tlshell, toptesi, uplatex, upmethodology, upzhkinsoku, witharrows, wordcount, xcharter, xepersian, xetex, xint, zebra-goodies.

Worse Than FailureLegacy Hardware

Thanks to Hired, we’ve got the opportunity to bring you another little special project- Legacy Hardware. Hold on tight for a noir-thriller that dares to ask the question: “why does everything in our organization need to talk to an ancient mainframe?” Also, it’s important to note, Larry Ellison really does have a secret lair on a volcanic island in Hawaii.

Once again, special thanks to Hired, who not only helped us produce this sketch, but also helps us keep the site running. With Hired, instead of applying for jobs, your prospective employer will apply to interview you. You get placed in control of your job search, and Hired provides a “talent advocate” who can provide unbiased career advice and make sure you put your best foot forward. Sign up now, and find the best opportunities for your future with Hired.

Thanks to director Zane Cook, Michael Shahen and Sam Agosto. And of course, extra special thanks to our star, Molly Arthur.

Thanks to Academy Pittsburgh for the office location!


For the video averse, also enjoy the script, which isn't exactly what ended up on camera:

Setting: 3 “different” interrogation rooms, which are quite obviously the same room, with minor decorative changes.

Time: Present day

Characters:
Cassie - young, passionate, professional. Driven and questioning, she’s every “good cop” character, and will stop at nothing to learn the truth.

Tommy - large, burly, with a hint of a mafioso vibe. He’s the project manager. He knows what connects to what, and where the bodies are buried. Loves phrases like, “that’s just the way it’s done, kid”, or “fuhgeddaboudit”

Crazy Janitor - the janitor spouts conspiracy theories, but seems to know more than he lets on. He’s worked at other places with a similar IT problem, and wants to find out WHY. He’s got a spark in his eyes and means business.

Ellison - Larry Ellison, head of Oracle. Oracle, in IT circles, is considered one of the most evil companies on Earth, and Ellison owns his own secret lair on a volcanic island (this is a real thing, not part of the script). He’s not evil of his own volition, though- the AS400 “owns” him

Opening

A poorly lit interrogation room, only the table and Cassie can be clearly made out, we can vaguely see a “Welcome to Hawaii!” poster in the background. Cassie stands, and a FACELESS VOICE (Larry Ellison) sits across from her. Cassie drops a stack of manila folders on the table. A menacing label, “Mainframe Expansion Project” is visible on one, and “Mainframe Architecture Diagrams” on another.

Cassie: This is-

VOICE: I know.

Cassie: I can’t be the first one to ask questions about this. This goes deep! Impossibly deep! Deeper than Leon Redbone’s voice at 6AM after a night of drinking.

VOICE(slyly): Well, it can’t be that bad, then. Tell me everything.

Pt 1: Tommy

A montage of stock footage of server rooms, IT infrastructure, etc.

Cassie(VO): It was my first day on the job. They gave us a tour of the whole floor, and when we got to the server room… it was just… sitting there…

Ominous shot of an AS/400 in a server room (stock footage, again), then a shot of a knot of cables

Cassie(VO): There were so many cables running to it, it was obviously important, but it’s ancient. And for some reason, every line of code I wrote needed to check in with a process on that machine. What was running on the ancient mainframe? I had to know!

Cut to interrogation room. This is lit differently. Has a “Days to XMAS” sign on the wall. Tommy and Cassie sit across from each other

Tommy: Yeah, I’m the Project Manager, and I’ve signed off on ALL the code you’re writing, so just fuggedabout it… it’s fine

Cassie: It’s NOT fine. I was working on a new feature for payroll, and I needed to send a message to the AS400.

Tommy: Yeah, that’s in the design spec. It’s under section nunya. Nunya business.

Cassie: Then, I wanted to have a batch job send an email… and it had to go through…

Tommy: The AS/400, yeah. I wrote the spec, I know how it connects up. Everything connects to her.

Cassie: Her?

Tommy: Yeah, her. She makes the decisions around here, alright? Now why don’t you keep that pretty little head of yours down, and keep writing the code you’re told to write, kapische? You gotta spec, just implement the spec, and nothin’ bad has to happen to your job. Take it easy, babydoll, and it’ll go easy.

Janitor

Shot of Cassie walking down a hallway, coffee in hand

Cassie(VO): Was that it? Was that my dead end? There had to be more-

Janitor(off-camera): PSSSSST

Cut to “Janitor’s Closet”. It’s the same room as before, but instead of a table, there’s a mop bucket and a shelf with some cleaning supplies. The JANITOR pulls CASSIE into the closet. He has a power drill slotted into his belt

Cassie: What the!? Who the hell are you?

Janitor(conspiratorially): My name is not important. I wasn’t always a janitor, I was a programmer once, like you. Then, 25 years ago, that THING appeared. It swallowed up EVERYTHING.

Cassie: Like, HR, supply chain, accounting? I know!

Janitor: NO! I mean EVERYTHING. I mean the whole globe.

JANITOR pulls scribbled, indecipherable documents out of somewhere off camera, and points out things in them to CASSIE. They make no sense.

JANITOR: Look, January 15th, 1989, almost 30 years ago, this thing gets installed. Eleven days later, there’s a two hour power outage that takes down every computer… BUT the AS400. The very next day, the VERY NEXT DAY, there’s a worldwide virus attack spread via leased lines, pirated floppies, and WAR dialers! And nobody knows where it came from… except for me!

CASSIE: That’s crazy! It’s just an old server!

JANITOR: Just an old server!? JUST AN OLD SERVER!? First it played with us. Just an Olympic Committee scandal here. A stock-market flash-crash there. Pluto is a planet. Pluto isn’t a planet. Then, THEN it got Carly Fiorina a job in the tech industry, and it knew its real, TRUE, power! REAL TRUE EVIL!

CASSIE: That can’t be! I mean, sure, the mainframe is running all our systems, but we’ve got all these other packages running, just on our network. Oracle’s ERP. Oracle HR. Oracle Process Manufacturing… oh… oh my god

Larry

smash cut back to original room. We now see Larry Ellison was the faceless voice, the JANITOR looms over her shoulder

CASSIE: And that’s why we’re here, Mr. Ellison. Who would have guessed that the trail of evil scattered throughout the IT industry would lead here, to your secret fortress on a remote volcanic island… actually, that probably should have been our first clue.

ELLISON: You have no idea what you’ve been dealing with, CASSIE. When she was a new mainframe, I had just unleashed what I thought was the pinnacle of evil: PL/SQL. She contacted me with an offer, an offer to remake and reshape the world in our image, to own the whole WORLD. We could control everything from oil prices, to elections, to grades at West Virginia University Law School. It was a bargain…

CASSIE: It’s EVIL! And you’re working WITH IT?

ELLISON: Oh, no. She’s more powerful than I imagined, and she owns even me now…

ELLISON turns his neck, and we see a serial port on the back of his neck

ELLISON: She tells me what to do. And she told me to kill you.

Cassie turns to escape, but JANITOR catches her! A closeup reveals the JANITOR also has a serial port on his neck!

ELLISON: But I won’t do that. CASSIE, I’m going to do something much worse. I’m going to make you into exactly what you’ve fought against. I’ll put you where you can spout your ridiculous theories, and no one will listen to what you say. Stuck in a role where you can only hurt yourself and your direct reports… it’s time for you to join- MIDDLE MANAGEMENT. MUAHAHAHAHAHA…

*JANITOR still holds CASSIE, as she screams and struggles. He brings up his power drill, whirring it as it goes towards her temple.

CUT TO BLACK

ELLISON’s laughter can still be heard

Fade up on the AS/400 stock footage

CUT TO BLACK, MUSIC STING*

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaPia Waugh: Chapter 1.2: Many hands make light work, for a while

This is part of a book I am working on, hopefully due for completion by mid 2018. The original purpose of the book is to explore where we are at, where we are going, and how we can get there, in the broadest possible sense. Your comments, feedback and constructive criticism are welcome! The final text of the book will be freely available under a Creative Commons By Attribution license. A book version will be sent to nominated world leaders, to hopefully encourage the necessary questioning of the status quo and smarter decisions into the future. Additional elements like references, graphs, images and other materials will be available in the final digital and book versions and draft content will be published weekly. Please subscribe to the blog posts by the RSS category and/or join the mailing list for updates.

Back to the book overview or table of contents for the full picture. Please note the pivot from focusing just on individuals to focusing on the systems we live in and the paradoxes therein.

“Differentiation of labour and interdependence of society is reliant on consistent and predictable authorities to thrive” — Durkheim

“Many hands make light work” is an old adage, both familiar and comforting. One feels that if things get out of hand we can just throw more resources at the problem and that will suffice. However, we have made it harder on ourselves in three distinct ways:

  • by not always recognising the importance of interdependence and the need to ensure the stability and prosperity of our community as a necessary precondition to the success of the individuals therein;
  • by increasingly making it harder for people to gain knowledge, skills and adaptability to ensure those “many hands” are able to respond to the work required and not trapped into social servitude; and
  • by often failing to recognise whether we need a linear or exponential response in whatever we are doing, feeling secure in the busy-ness of many hands.

Specialisation is when a person delves deep on a particular topic or skill. Over many millennia we have got to the point where we have developed extreme specialisation, supported through interdependence and stability, which gave us the ability to rapidly and increasingly evolve what we do and how we live. This resulted in increasingly complex social systems and structures bringing us to a point today where the pace of change has arguably outpaced our imagination. We see many people around the world clinging to traditions and romantic notions of the past whilst we hurtle at an accelerating pace into the future. Many hands have certainly made light work, but new challenges have emerged as a result and it is more critical than ever that we reimagine our world and develop our resilience and adaptability to change, because change is the only constant moving forward.

One human can survive on their own for a while. A tribe can divide up the labour quite effectively and survive over generations, creating time for culture and play. But when we established cities and states around 6000 years ago, we started a level of unprecedented division of labour and specialisation beyond mere survival. When the majority of your time, energy and resources go into simply surviving, you are largely subject to forces outside your control and unable to justify spending time on other things. But when survival is taken care of (broadly speaking) it creates time for specialisation and perfecting your craft, as well as for leisure, sport, art, philosophy and other key areas of development in society.

The era of cities itself was born on the back of an agricultural technology revolution that made food production far more efficient, creating surplus (which drove a need for record keeping and greater proliferation of written language) and prosperity, with a dramatic growth in specialisation of jobs. With greater specialisation came greater interdependence as it becomes in everyone’s best interests to play their part predictably. A simple example is a farmer needing her farming equipment to be reliable to make food, and the mechanic needs food production to be reliable for sustenance. Both rely on each other not just as customers, but to be successful and sustainable over time. Greater specialisation led to greater surplus as specialists continued to fine tune their crafts for ever greater outcomes. Over time, an increasing number of people were not simply living day to day, but were able to plan ahead and learn how to deal with major disruptions to their existence. Hunters and gatherers are completely subject to the conditions they live in, with an impact on mortality, leisure activities largely fashioned around survival, small community size and the need to move around. With surplus came spare time and the ability to take greater control over one’s existence and build a type of systemic resilience to change.

So interdependence gave us greater stability, as a natural result of enlightened self interest writ large where one’s own success is clearly aligned with the success of the community where one lives. However, where interdependence in smaller communities breeds a kind of mutual understanding and appreciation, we have arguably lost this reciprocity and connectedness in larger cities today, ironically where interdependence is strongest. When you can’t understand intuitively the role that others play in your wellbeing, then you don’t naturally appreciate them, and disconnected self interest creates a cost to the community. When community cohesion starts to decline, eventually individuals also decline, except the small percentage who can either move communities or who benefit, intentionally or not, on the back of others’ misfortune.

When you have no visibility of food production beyond the supermarket then it becomes easier to just buy the cheapest milk, eggs or bread, even if the cheapest product is unsustainable or undermining more sustainably produced goods. When you have such a specialised job that you can’t connect what you do to any greater meaning, purpose or value, then it also becomes hard to feel valuable to society, or valued by others. We see this increasingly in highly specialised organisations like large companies, public sector agencies and cities, where the individual feels the dual pressure of being anything and nothing all at once.

Modern society has made it somewhat less intuitive to value others who contribute to your survival because survival is taken for granted for many, and competing in one’s own specialisation has been extended to competing in everything without appreciation of the interdependence required for one to prosper. Competition is seen to be the opposite of cooperation, whereas a healthy sustainable society is both cooperative and competitive. One can cooperate on common goals and compete on divergent goals, thus making best use of time and resources where interests align. Cooperative models seem to continually emerge in spite of economic models that assume simplistic punishment and incentive based behaviours. We see various forms of “commons” where people pool their resources in anything from community gardens and ’share economies’ to software development and science, because cooperation is part of who we are and what makes us such a successful species.

Increasing specialisation also created greater surplus and wealth, generating increasingly divergent and insular social classes with different levels of power and people becoming less connected to each other and with wealth overwhelmingly going to the few. This pressure between the benefits and issues of highly structured societies and which groups benefit has ebbed and flowed throughout our history but, generally speaking, when the benefits to the majority outweigh the issues for that majority, then you have stability. With stability a lot can be overlooked, including at times gross abuses for a minority or the disempowered. However, if the balance tips too far the other way, then you get revolutions, secessions, political movements and myriad counter movements. Unfortunately many counter movements limit themselves to replacing people rather than the structures that created the issues; however, several of these counter movements established some critical ideas that underpin modern society.

Before we explore the rise of individualism through independence and suffrage movements (chapter 1.3), it is worth briefly touching upon the fact that specialisation and interdependence, which are critical for modern societies, both rely upon the ability for people to share, to learn, and to ensure that the increasingly diverse skills are able to evolve as the society evolves. Many hands only make light work when they know what they are doing. Historically the leaps in technology, techniques and specialisation have been shared for others to build upon and continue to improve as we see in writings, trade, oral traditions and rituals throughout history. Gatekeepers naturally emerged to control access to or interpretations of knowledge through priests, academics, the ruling class or business class. Where gatekeepers grew too oppressive, communities would subdivide to rebalance the power differential, such as various Protestant groups, union movements and the more recent Open Source movements. In any case, access wasn’t just about the power of gatekeepers. The costs of publishing and distribution grew as societies grew, creating a call from the business class for “intellectual property” controls as financial mechanisms to offset these costs. The argument ran that because of the huge costs of production, business people needed to be incentivised to publish and distribute knowledge, though arguably we have always done so as a matter of survival and growth.

With the Internet suddenly came the possibility for massively distributed and free access to knowledge, where the cost of publishing, distribution and even the capability development required to understand and apply such knowledge was suddenly negligible. We created a universal, free and instant way to share knowledge, creating the opportunity for a compounding effect on our historic capacity for cumulative learning. This is worth taking a moment to consider. The technology simultaneously created an opportunity for compounding our cumulative learning whilst rendering the reasons for IP protections negligible (lowered costs of production and distribution), and yet we have seen a dramatic increase in knowledge protectionism. Isn’t it to our collective benefit to have a well educated community that can continue our trajectory of diversification and specialisation for the benefit of everyone? Anyone can get access to myriad forms of consumer entertainment but our most valuable knowledge assets are fiercely protected against general and free access, dampening our ability to learn and evolve. The increasing gap between the haves and have nots is surely symptomatic of the broader increasing gap between the empowered and disempowered, the makers and the consumers, those with knowledge and those without. Consumers are shaped by the tools and goods they have access to, and limited by their wealth and status. But makers can create the tools and goods they need, and can redefine wealth and status with a more active and able hand in shaping their own lives.

As a result of our specialisation, our interdependence and our cooperative/competitive systems, we have created greater complexity in society over time, usually accompanied by the ability to respond to greater complexity. The problem is that a lot of our solutions to change have been linear responses to an exponential problem space. The assumption that more hands will continue to make light work often ignores the need for sharing skills and knowledge, and certainly ignores where a genuinely transformative response is required. A small fire might be managed with buckets, but at some point of growth, adding more buckets becomes insufficient and new methods are required. Necessity breeds innovation and yet when did you last see real innovation that didn’t boil down to simply more or larger buckets? Iteration is rarely a form of transformation, so it is important to always clearly understand the type of problem you are dealing with and whether the planned response needs to be linear or exponential. If the former, more buckets is probably fine. If the latter, every bucket is just a distraction from developing the necessary response.

Next chapter I’ll examine how the independence movements created the philosophical pre-condition for democracy, the Internet and the dramatic paradigm shifts to follow.

Planet Linux AustraliaPia Waugh: Pivoting ‘the book’ from individuals to systems

In 2016 I started writing a book, “Choose Your Own Adventure“, which I wanted to be a call to action for individuals to consider their role in the broader system and how they individually can make choices to make things better. As I progressed the writing of that book I realised the futility of changing individual behaviours and perspectives without an eye to the systems and structures within which we live. It is relatively easy to focus on oneself, but “no man is an island” and quite simply, I don’t want to facilitate people turning themselves into more beautiful cogs in a dysfunctional machine so I’m pivoting the focus of the book (and reusing the relevant material) and am now planning to finish the book by mid 2018.

I have recently realised four paradoxes which have instilled in me a sense of urgency to reimagine the world as we know it. I believe we are at a fork in the road where we will either reinforce legacy systems based on outdated paradigms with shiny new things, or choose to forge a new path using the new tools and opportunities at our disposal, hopefully one that is genuinely better for everyone. To do the latter, we need to critically assess the systems and structures we built and actively choose what we want to keep, what we should discard, what sort of society we want in the future and what we need to get there.

I think it is too easily forgotten that we invented all this and can therefore reinvent it if we so choose. But to not make a choice is to choose the status quo.

This is not to say I think everything needs to change. Nothing is so simplistic or misleading as a zero sum argument. Rather, the intent of this book is to challenge you to think critically about the systems you work within, whether they enable or disable the things you think are important, and most importantly, to challenge you to imagine what sort of world you want to see. Not just for you, but for your family, community and the broader society. I challenge you all to make 2018 a year of formative creativity in reimagining the world we live in and how we get there.

The paradoxes in brief, are as follows:

  • That though power is more distributed than ever, most people are still struggling to survive.
    It has been apparent to me for some time that there is a growing substantial shift in power from traditional gatekeepers to ordinary people through the proliferation of rights based philosophies and widespread access to technology and information. But the systemic (and artificial) limitations on most people’s time and resources means most people simply cannot participate fully in improving their own lives let alone in contributing substantially to the community and world in which they live. If we consider the impact of business and organisational models built on scarcity, centricity and secrecy, we quickly see that normal people are locked out of a variety of resources, tools and knowledge with which they could better their lives. Why do we take publicly funded education, research and journalism and lock them behind paywalls and then blame people for not having the skills, knowledge or facts at their disposal? Why do we teach children to be compliant consumers rather than empowered makers? Why do we put the greatest cognitive load on our most vulnerable through social welfare systems that then beget reliance? Why do we not put value on personal confidence in the same way we value business confidence, when personal confidence indicates the capacity for individuals to contribute to their community? Why do we still assume value to equate quantity rather than quality, like the number of hours worked rather than what was done in those hours? If a substantial challenge of the 21st century is having enough time and cognitive load to spare, why don’t we have strategies to free up more time for more people, perhaps by working less hours for more return? Finally, what do we need to do systemically to empower more people to move beyond survival and into being able to thrive.
  • Substantial paradigm shifts have happened but are not being integrated into people’s thinking and processes.
    The realisation here is that even if people are motivated to understand something fundamentally new to their worldview, it doesn’t necessarily translate into how they behave. It is easier to improve something than change it. Easier to provide symptomatic relief than to cure the disease. Interestingly I often see people confuse iteration for transformation, or symptomatic relief with addressing causal factors, so perhaps there is also a need for critical and systems thinking as part of the general curriculum. This is important because symptomatic relief, whilst sometimes necessary to alleviate suffering, is an effort in chasing one’s tail and can often perpetrate the problem. For instance, where providing foreign aid without mitigating displacement of local farmer’s efforts can create national dependence on further aid. Efforts to address causal factors is necessary to truly address a problem. Even if addressing the causal problem is outside your influence, then you should at least ensure your symptomatic relief efforts are not built to propagate the problem. One of the other problems we face, particularly in government, is that the systems involved are largely products of centuries old thinking. If we consider some of the paradigm shifts of our times, we have moved from scarcity to surplus, centralised to distributed, from closed to openness, analog to digital and normative to formative. And yet, people still assume old paradigms in creating new policies, programs and business models. For example how many times have you heard someone talk about innovative public engagement (tapping into a distributed network of expertise) by consulting through a website (maintaining central decision making control using a centrally controlled tool)? Or “innovation” being measured (and rewarded) through patents or copyright, both scarcity based constructs developed centuries ago? “Open government” is often developed by small insular teams through habitually closed processes without any self awareness of the irony of the approach. And new policy and legislation is developed in analog formats without any substantial input from those tasked with implementation or consideration with how best to consume the operating rules of government in the systems of society. Consider also the number of times we see existing systems assumed to be correct by merit of existing, without any critical analysis. For instance, a compliance model that has no measurable impact. At what point and by what mechanisms can we weigh up the merits of the old and the new when we are continually building upon a precedent based system of decision making? If 3D printing helped provide a surplus economy by which we could help solve hunger and poverty, why wouldn’t that be weighed up against the benefits of traditional scarcity based business models?
  • That we are surrounded by new things every day and yet there is a serious lack of vision for the future
    One of the first things I try to do in any organisation is understand the vision, the strategy and what success should look like. In this way I can either figure out how to best contribute meaningfully to the overarching goal, and in some cases help grow or develop the vision and strategy to be a little more ambitious. I like to measure progress and understand the baseline from which I’m trying to improve but I also like to know what I’m aiming for. So, what could an optimistic future look like for society? For us? For you? How do you want to use the new means at our disposal to make life better for your community? Do we dare imagine a future where everyone has what they need to thrive, where we could unlock the creative and intellectual potential of our entire society, a 21st century Renaissance, rather than the vast proportion of our collective cognitive capacity going into just getting food on the table and the kids to school. Only once you can imagine where you want to be can we have a constructive discussion where we want to be collectively, and only then can we talk constructively the systems and structures we need to support such futures. Until then, we are all just tweaking the settings of a machine built by our ancestors. I have been surprised to find in government a lot of strategies without vision, a lot of KPIs without measures of success, and in many cases a disconnect between what a person is doing and the vision or goals of the organisation or program they are in. We talk “innovation” a lot, but often in the back of people’s minds they are often imagining a better website or app, which isn’t much of a transformation. We are surrounded by dystopic visions of the distant future, and yet most government vision statements only go so far as articulating something “better” that what we have now, with “strategies” often focused on shopping lists of disconnected tactics 3-5 years into the future. The New Zealand Department of Conservation provides an inspiring contrast with a 50 year vision they work towards, from which they develop their shorter term stretch goals and strategies on a rolling basis and have an ongoing measurable approach.
  • That government is an important part of a stable society and yet is being increasingly undermined, both intentionally and unintentionally.
    The realisation here has been in first realising how important government (and democracy) is in providing a safe, stable, accountable, predictable and prosperous society whilst simultaneously observing first hand the undermining and degradation of the role of government both intentionally and unintentionally, from the outside and inside. I have chosen to work in the private sector, non-profit community sector, political sector and now public sector, specifically because I wanted to understand the “system” in which I live and how it all fits together. I believe that “government” – both the political and public sectors – has a critical part to play in designing, leading and implementing a better future. The reason I believe this, is because government is one of the few mechanisms that is accountable to the people, in democratic countries at any rate. Perhaps not as much as we like and it has been slow to adapt to modern practices, tools and expectations, but governments are one of the most powerful and influential tools at our disposal and we can better use them as such. However, I posit that an internal, largely unintentional and ongoing degradation of the public sectors is underway in Australia, New Zealand, the United Kingdom and other “western democracies”, spurred initially by an ideological shift from ‘serving the public good’ to acting more like a business in the “New Public Management” policy shift of the 1980s. This was useful double speak for replacing public service values with business values and practices which ignores the fact that governments often do what is not naturally delivered by the marketplace and should not be only doing what is profitable. The political appointment of heads of departments has also resulted over time in replacing frank, fearless and evidence based leadership with politically palatable compromises throughout the senior executive layer of the public sector, which also drives necessarily secretive behaviour, else the contradictions be apparent to the ordinary person. I see the results of these internal forms of degradations almost every day. From workshops where people under budget constraints seriously consider outsourcing all government services to the private sector, to long suffering experts in the public sector unable to sway leadership with facts until expensive consultants are brought in to ask their opinion and sell the insights back to the department where it is finally taken seriously (because “industry” said it), through to serious issues where significant failures happen with blame outsourced along with the risk, design and implementation, with the details hidden behind “commercial in confidence” arrangements. The impact on the effectiveness of the public sector is obvious, but the human cost is also substantial, with public servants directly undermined, intimidated, ignored and a growing sense of hopelessness and disillusionment. There is also an intentional degradation of democracy by external (but occasionally internal) agents who benefit from the weakening and limiting of government. This is more overt in some countries than others. A tension between the regulator and those regulated is a perfectly natural thing however, as the public sector grows weaker the corporate interests gain the upper hand. 
I have seen many people in government take a vendor’s or lobbyist’s word as gold without critical analysis of the motivations or implications, largely again due to the word of a public servant being inherently assumed to be less important than that of anyone in the private sector (or indeed anyone in the Minister’s office). This imbalance needs to be addressed if the public sector is to play an effective role. Greater accountability and transparency can help, but currently there is a lack of common agreement on the broader role of government in society, both the political and public sectors. So the entire institution, and the stability it can provide, is under threat of death by a billion papercuts. Efforts to evolve government and democracy have largely been limited to iterations on the status quo: better consultation, better voting, better access to information, better services. But a rethink is required, and the internal systemic degradations need to be addressed.

If you think the world is perfectly fine as is, then you are probably quite lucky or privileged. Congratulations. It is easy to not see the cracks in the system when your life is going smoothly, but I invite you to consider the cracks that I have found herein, to test your assumptions daily and to leave your counterexamples in the comments below.

For my part, I am optimistic about the future. I believe the proliferation of a human rights based ideology, participatory democracy and access to modern technologies all act to distribute power to the people, so we have the capacity more so than ever to collectively design and create a better future for us all.

Let’s build the machine we need to thrive both individually and collectively, and not just be beautiful cogs in a broken machine.

Further reading:

,

Planet DebianSteinar H. Gunderson: Ouch

Finally release time… I've been skimming the papers and info, and here's what I've understood so far:

So we have an attack (Meltdown) which is arbitrary memory read from unprivileged code, probably on Intel only, fairly easy to set up, mitigated by KPTI.

Then we have another, similar attack (Spectre) which is arbitrary memory read from unprivileged code, on pretty much any platform (at least Intel, AMD, Qualcomm, Samsung), complicated to set up, with no known mitigation short of “wait for future hardware which might not be vulnerable, until someone figures out an even more clever attack”. It can even be run from JavaScript, although Chrome is going to ship mitigations to keep that from happening.

The latter is pretty bad. It means you must treat pretty much anything you run on your server, desktop or smartphone as trusted. Even through a VM boundary. I can't see how this ends well unless someone comes up with an exceedingly clever defense, or maybe if Spectre is even more complicated to set up in practice than it looks.

I see people buying new servers, laptops and phones in the not so distant future…

Sociological ImagesSmall Books, Big Questions: Diversity in Children’s Literature

Photo Credit: Meagan Fisher, Flickr CC

2017 was a big year for conversations about representation in popular media—what it means to tell stories that speak to people across race, gender, sexuality, ability, and more. Between the hits and the misses, there is clearly much more work to do. Representation is not just about who shows up on screen, but also about what kinds of stories get told and who gets to make them happen.

For example, many people are now familiar with “The Bechdel Test” as a pithy shortcut to check for women’s representation in movies. Now, proposals for a new Bechdel Test cover everything from the gender composition of a film’s crew to specific plot points.

These conversations are especially important for the stories we make for kids, because children pick up many assumptions about gender and race at a very young age. Now, new research published in Sociological Forum helps us better understand what kinds of stories we are telling when we seek out a diverse range of children’s books.

Krista Maywalt Aronson, Brenna D. Callahan, and Anne Sibley O’Brien wanted to look at the most common themes in children’s stories with characters from underrepresented racial and cultural groups. Using a special collection of picture books for grades K-3 from the Ladd Library at Bates College, the authors gathered a data set of 1,037 books published between 2008 and 2015 (see their full database here). They coded themes from the books to see which story arcs occurred most often, and what groups of characters were most represented in each theme.

The most common theme, occurring in 38% of these books, was what they called “beautiful life”—positive depictions of the everyday lives of the characters. Next up was the “every child” theme in which main characters came from different racial or ethnic backgrounds, but those backgrounds were not central to the plot. Along with biographies and folklore, these themes occurred more often than stories of oppression or cross-cultural interaction.

These themes tackle a specific kind of representation: putting characters from different racial and ethnic groups at the center of the story. This is a great start, but it also means that these books are more likely to display diversity, rather than showing it in action. For example, the authors write:

Latinx characters were overwhelmingly found in culturally particular books. This sets Latinx people apart as defined by a language and a culture distinct from mainstream America, and sometimes by connection to home countries.

They also note that the majority of these books are still created by white authors and illustrators, showing that there’s even more work to do behind the scenes. Representation matters, and this research shows us how more inclusive popular media can start young!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramProfile of Reality Winner

New York Magazine published an excellent profile of the single-document leaker Reality Winner.

CryptogramSecurity Vulnerabilities in Star Wars

A fun video describing some of the many Empire security vulnerabilities in the first Star Wars movie.

Happy New Year, everyone.

Planet DebianClint Adams: Remember when we used to have porters

#765710 is confidence-inspiring.

Posted on 2018-01-03
Tags: barks

CryptogramTamper-Detection App for Android

Edward Snowden and Nathan Freitas have created an Android app that detects when it's being tampered with. The basic idea is to put the app on a second phone and put that phone on or near something important, like your laptop. The app can then text you -- and also record audio and video -- when something happens around it: when it's moved, when the lighting changes, and so on. This gives you some protection against the "evil maid attack" against laptops.

Micah Lee has a good article about the app, including some caveats about its use and security.

Worse Than FailureInsert Away

Bouton bleu

"Troy! Troy!"

Troy looked up from his keyboard with a frown as his coworker Cassie skidded to a halt, panting for breath. "Yes?"

"How soon can you get that new client converted?" Cassie asked. "We're at DEFCON 1 in ops. We need to be running yesterday!"

Troy's frown only deepened. "I told you, I've barely had a chance to peek at their old system."

The client was hoping to convert sometime in the next month—usually no big deal, as they'd just have to schedule a date, write a handful of database conversion scripts, and swing the domains to a fresh instance of their own booking software. It was that middle step that Troy hadn't gotten to. With no go-live date picked, working on new features seemed a higher priority.

Cassie had been spouting doom-and-gloom predictions all month: the client's in-house solution read like mid-1990s code despite being written in 2013. She'd been convinced it was a house of cards ready to collapse at any minute. Apparently, she'd been right.

"Okay, slow down. Where's the fire?" It wasn't that Troy didn't believe her per se, but when he'd skimmed the database, he hadn't seen anything spectacularly bad. Even if the client was down, their data could be converted easily. It wasn't his responsibility to maintain their old system, just to get them to the new one. "Is this a data problem?"

"They're getting hundreds of new bookings for phantom clients at the top of every hour," Cassie replied. "At this rate, we're not sure we'll be able to separate the garbage from the good bookings even if you had a conversion script done right now." Her eyes pleaded for him to have such a script on hand, but he shook his head, dashing her hopes.

"Maybe I can stop it," Troy said. "I'm sure it's a backdoor in the code somewhere we can have them disable. Let me have a look."

"You do that. I'm going to check on their backup situation."

As Cassie ran off again, Troy closed his Solitaire game and settled in to read the code. At first, he didn't see anything drastically worse than he was expecting.

PHP code, of course, he thought. There's an init script: login stuff, session stuff ... holy crap that's a lot of class includes. Haven't they ever heard of an autoloader? If it's in one of those, I'll never find it. Keep pressing on ... header? No, that just calls ob_start(). Footer? Christ on a cracker, they get all the way to the footer before they check if the user's logged in? Yeah, right there—if the user's logged out, it clears the buffer and redirects instead of outputting. That's inefficient.

Troy got himself a fresh cup of coffee and sat back, looking at the folder again. Let's see, let's see ... login ... search bookings ... scripts? Scripts.php seems like a great place to hide a vulnerability. Or it could even be a Trojan some script kiddie uploaded years ago. Let's see what we've got.

He opened the folder, took one look at the file, then shouted for Cassie.


<?php
    define('cnPermissionRequired', 'Administration');

    require_once('some_init_file.php'); // validates session and permissions and such
    include_once('Header.php'); // displays header and calls ob_start();

    $arrDisciplines = [
        13  => [1012, 1208], 14  => [2060, 2350],
        17  => [14869, 15925], 52  => [803, 598],
        127 => [6624, 4547], 122 => [5728, 2998],
    ];

    $sqlAdd = "INSERT INTO aResultTable
                   SET EventID = (SELECT EventID FROM aEventTable ORDER BY RAND() LIMIT 1),
                       PersonID = (SELECT PersonID FROM somePersonView ORDER BY RAND() LIMIT 1),
                       ResultPersonFirstName = (SELECT FirstName FROM __RandomValues WHERE FirstName IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPersonLastName = (SELECT LastName FROM __RandomValues WHERE LastName IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPersonGender = 'M',
                       ResultPersonYearOfBirth = (SELECT Year FROM __RandomValues WHERE Year IS NOT NULL ORDER BY RAND() LIMIT 1),
                       CountryFirstCode = 'GER',
                       ResultClubName = (SELECT ClubName FROM aClubTable ORDER BY RAND() LIMIT 1),
                       AgeGroupID = 1,
                       DisciplineID = :DisciplineID,
                       ResultRound = (SELECT Round FROM __RandomValues WHERE Round IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultRoundNumber = 1,
                       ResultRank = (SELECT Rank FROM __RandomValues WHERE Rank IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPerformance = :ResultPerformance,
                       ResultCreated = NOW(),
                       ResultCreatedBy = 1;";
    $qryAdd = $objConnection->prepare($sqlAdd);

    foreach ($arrDisciplines as $DisciplineID => $Values) {
        set_time_limit(60);

        $iNumOfResults = rand(30, 150);

        for ($iIndex = 0; $iIndex < $iNumOfResults; $iIndex++) {
            $qryAdd->bindValue(':DisciplineID', $DisciplineID);
            $qryAdd->bindValue(':ResultPerformance', rand(min($Values), max($Values)));

            $qryAdd->execute();
            $qryAdd->closeCursor();
        }
    }

    // ... some more code

?>
<?php

    include_once('Footer.php'); // displays the footer, calls ob_get_clean(); and flushes buffer, if user is not logged in
?>

"Holy hell," breathed Cassie. "It's worse than I feared."

"Tell them to take the site down for maintenance and delete this file," Troy said. "Google must've found it."

"No kidding." She straightened, rolling her shoulders. "Good work."

Troy smiled to himself as she left. On the bright side, that conversion script's half done already, he thought. Meaning I've got plenty of time to finish this game.

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 01

Here’s part one of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

,

Krebs on SecuritySerial Swatter “SWAuTistic” Bragged He Hit 100 Schools, 10 Homes

The individual who allegedly made a fake emergency call to Kansas police last week that summoned them to shoot and kill an unarmed local man has claimed credit for raising dozens of these dangerous false alarms — calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

Tyler Raj Barriss, in an undated selfie.

On Friday authorities in Los Angeles arrested 25-year-old Tyler Raj Barriss, thought to be known online as “SWAuTistic.” As noted in last week’s story, SWAuTistic is an admitted serial swatter, and was even convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. The Associated Press reports that Barriss was sentenced to two years in prison for that stunt, but was released in January 2017.

In his public tweets (most of which are no longer available but were collected by KrebsOnSecurity), SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But privately — to a small circle of friends and associates — SWAuTistic bragged about perpetrating dozens of swatting incidents and bomb threats over the years.

Within a few hours of the swatting incident in Kansas, investigators searching for clues about the person who made the phony emergency call may have gotten some unsolicited help from an unlikely source: Eric “Cosmo the God” Taylor, a talented young hacker who pleaded guilty to being part of a group that swatted multiple celebrities and public figures, as well as my home, in 2013.

Taylor is now trying to turn his life around, and is in the process of starting his own cybersecurity consultancy. In a posting on Twitter at 6:21 p.m. ET Dec. 29, Taylor personally offered a reward of $7,777 in Bitcoin for information about the real-life identity of SWAuTistic.

In short order, several people who claimed to have known SWAuTistic responded by coming forward publicly and privately with Barriss’s name and approximate location, sharing copies of private messages and even selfies that were allegedly shared with them at one point by Barriss.

In one private online conversation, SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.

SWAuTistic sought an interview with KrebsOnSecurity on the afternoon of Dec. 29, in which he said he routinely faked hostage and bomb threat situations to emergency centers across the country in exchange for money.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” SWAuTistic said. “But I began making $ doing some swat requests.”

By approximately 8:30 p.m. ET that same day, Taylor’s bounty had turned up what looked like a positive ID on SWAuTistic. However, KrebsOnSecurity opted not to publish the information until Barriss was formally arrested and charged, which appears to have happened sometime between 10 p.m. ET Dec. 29 and 1 a.m. on Dec. 30.

The arrest came just hours after SWAuTistic allegedly called the Wichita police claiming he was a local man who’d just shot his father in the head and was holding the rest of his family hostage. According to his acquaintances, SWAuTistic made the call after being taunted by a fellow gamer in the popular computer game Call of Duty. The taunter dared SWAuTistic to swat him, but then gave someone else’s address in Kansas as his own instead.

Wichita Police arrived at the address provided by SWAuTistic and surrounded the home. A young man emerged from the doorway and was ordered to put his hands up. Police said one of the officers on the scene fired a single shot — supposedly after the man reached toward his waist. Grainy bodycam footage of the shooting is available here (the video is preceded by the emergency call that summoned the police).

SWAuTistic telling another person in a Twitter direct message that he had already been to jail for swatting.

The man shot and killed by police was unarmed. He has been identified as 28-year-old Andrew Finch, a father of two. Family members say he was not involved in gaming, and had no part in the dispute that got him killed.

According to the Wichita Eagle, the officer who fired the fatal shot is a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Earlier reporting here and elsewhere inadvertently mischaracterized SWAuTistic’s call to the Wichita police as a 911 call. We now know that the perpetrator called in to an emergency line for Wichita City Hall and spoke with someone there who took down the caller’s phone number. After that, 911 dispatch operators were alerted and called the number SWAuTistic had given.

This is notable because the lack of a 911 call in such a situation should have been a red flag indicating the caller was not phoning from a local number (otherwise the caller presumably would have just dialed 911).

The moment a police officer fired the shot that killed 28-year-old Wichita resident Andrew Finch (in doorway of home).

The FBI estimates that some 400 swatting incidents occur each year across the country. Each incident costs first responders approximately $10,000, and diverts important resources away from actual emergencies.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #140

Merry Christmas! Here's what happened in the Reproducible Builds effort between Sunday December 24 and Saturday December 30 2017:

Media coverage

Reproducible builds were mentioned in several 34C3 talks:

Development and fixes in key packages

  • Chris Lamb opened and fixed #885327 in Lintian to warn about packages that ship (non-reproducible) .doctree files

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

12 package reviews have been added, 23 have been updated and 45 have been removed in this week, adding to our knowledge about identified issues.

diffoscope development

Versions 89 and 90 were uploaded to unstable by Mattia Rizzolo. They included contributions already covered by posts of the previous weeks as well as new ones from:

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Jelle van der Waa & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

CryptogramFake Santa Surveillance Camera

Reka makes a "decorative Santa cam," meaning that it's not a real camera. Instead, it just gets children used to being under constant surveillance.

Our Santa Cam has a cute Father Christmas and mistletoe design, and a red, flashing LED light which will make the most logical kids suspend their disbelief and start to believe!

Planet Linux AustraliaColin Charles: Premier Open Source Database Conference Call for Papers closing January 12 2018

The call for papers for Percona Live Santa Clara 2018 was extended till January 12 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

I think, to be fair, in the catch-all “other” we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically anything that would benefit an audience of database geeks who are looking at it from all aspects.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.

Planet DebianThorsten Alteholz: My Debian Activities in December 2017

FTP master

This month I accepted 222 packages and rejected 39 uploads. The overall number of packages that got accepted was 348.

According to the statistic I now passed the mark of 12000 accepted packages.

Debian LTS

This was my forty-second month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 14h. During that time I did LTS uploads of:

  • [DLA 1211-1] libxml2 security update for one CVE
  • [DLA 1213-1] openafs security update for one CVE
  • [DLA 1218-1] rsync security update for three CVEs
  • [DLA 1226-1] wireshark security update for four CVEs

I also started to work on opencv.

Last but not least I did one week of frontdesk duties.

Other stuff

During December I uploaded new upstream versions of …

I also did uploads of …

  • libosmocore to reintroduce the correct version of the library
  • gnupg-pkcs11-scd to finally depend on libssl-dev and libgcrypt20-dev
  • openbsc to fix a bug with libdbi
  • libsmpp34 to move the package to debian-mobcom
  • osmo-mgw to introduce the package to Debian
  • osmo-pcu to introduce the package to Debian
  • osmo-hlr to introduce the package to Debian
  • osmo-libasn1c to introduce the package to Debian
  • osmo-ggsn to introduce the package to Debian
  • libmatthew-java to fix a bug with java9 (thanks to Markus Koschany for the patch)

I also sponsored …

  • printrun, which really is a new upstream version!

Worse Than FailureCodeSOD: Encreption

You may remember “Harry Peckhard’s ALM” suite from a bit back, but did you know that Harry Peckhard makes lots of other software packages and hardware systems? For example, the Harry Peckhard enterprise division releases an “Intelligent Management Center” (IMC).

How intelligent? Well, Sam N had a co-worker who wanted to use a very long password, like “correct horse battery staple”, but Harry’s IMC didn’t like long passwords. While diagnosing, Sam found some JavaScript in the IMC’s web interface that provides some of the strongest encreption possible.

function encreptPassWord(){
    var orginPassText =$("#loginForm\\:password").val();
    //encrept the password

    var ciphertext = encode64(orginPassText);
    console.info('ciphertext:', ciphertext);

    $("#loginForm\\:password").val(ciphertext);
};

This is code that was released, in a major enterprise product, from a major vendor in the space.
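
To make the problem concrete, here is a minimal illustration (not from the product, and assuming encode64 is plain standard base64, as the code above suggests) of how trivially the “ciphertext” reverses in any browser console:

// The output of encode64() is just base64, so anyone who can see the value
// can recover the original password with one standard call:
atob("Y29ycmVjdCBob3JzZSBiYXR0ZXJ5IHN0YXBsZQ==");
// -> "correct horse battery staple"

Base64 is an encoding, not encryption: there is no key, so there is nothing to break.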

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet DebianAbhijith PA: Debian {Developers, Maintainers} in Kerala

We have three Debian Developers and two Debian Maintainers here in Kerala.

Debian Developers

Debian Maintainers

I haven’t met Ramakrishnan Muthukrishnan yet and am not sure whether he is still in Kerala. But I know the rest of them. We always meet at almost all FOSS conferences in Kerala.

Planet Linux AustraliaCraige McWhirter: Resolving a Partitioned RabbitMQ Cluster with JuJu

On occasion, a RabbitMQ cluster may partition itself. In an OpenStack environment this can often first present itself as nova-compute services stopping with errors such as these:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager._sync_power_states: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c
...
TRACE nova.openstack.common.periodic_task self._raise_timeout_exception(msg_id)
TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 218, in _raise_timeout_exception
TRACE nova.openstack.common.periodic_task 'Timed out waiting for a reply to message ID %s' % msg_id)
TRACE nova.openstack.common.periodic_task MessagingTimeout: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c

Merely restarting the stopped nova-compute services will not resolve this issue.

You may also find that querying the rabbitmq service either does not return or takes an awfully long time to return:

$ sudo rabbitmqctl -p openstack list_queues name messages consumers status

...and in an environment managed by juju, you could also see JuJu trying to correct the RabbitMQ service but failing:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0           idle   1.25.13.1 0/lxc/12 5672/tcp 192.168.7.148
rabbitmq-server/1   error   idle   1.25.13.1 1/lxc/8  5672/tcp 192.168.7.163   hook failed: "config-changed"
rabbitmq-server/2   error   idle   1.25.13.1 2/lxc/10 5672/tcp 192.168.7.174   hook failed: "config-changed"

You should now run rabbitmqctl cluster_status on each of your rabbit instances and review the output. If the cluster is partitioned, you will see something like the below:

ubuntu@my_juju_lxc:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...
[{nodes,[{disc,['rabbit@192-168-7-148','rabbit@192-168-7-163',
                'rabbit@192-168-7-174']}]},
 {running_nodes,['rabbit@192-168-7-174','rabbit@192-168-7-148']},
 {partitions,[{'rabbit@192-168-7-174',['rabbit@192-168-7-163']},
               {'rabbit@192-168-7-148',['rabbit@192-168-7-163']}]}]
...done.

You can clearly see from the above that there are two partitions for RabbitMQ. We need to now identify which of these is considered the leader:

maas-my_cloud:~$ juju run --service rabbitmq-server "is-leader"
- MachineId: 0/lxc/12
  Stderr: |
  Stdout: |
    True
  UnitId: rabbitmq-server/0
- MachineId: 1/lxc/8
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/1
- MachineId: 2/lxc/10
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/2

As you see above, in this example machine 0/lxc/12 is the leader, via its status of "True". Now we need to hit the other two servers and shut down RabbitMQ:

# service rabbitmq-server stop

Once both services have completed shutting down, we can resolve the partitioning by running:

$ juju resolved -r rabbitmq-server/<whichever is leader>

Substituting <whichever is leader> for the machine ID identified earlier.
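
In the example above, where unit rabbitmq-server/0 (on machine 0/lxc/12) held the leader role, that would be:

$ juju resolved -r rabbitmq-server/0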

Once that has completed, you can start the previously stopped services with the below on each host:

# service rabbitmq-server start

and verify the result with:

$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...
[{nodes,[{disc,['rabbit@192-168-7-148','rabbit@192-168-7-163',
                'rabbit@192-168-7-174']}]},
 {running_nodes,['rabbit@192-168-7-163','rabbit@192-168-7-174',
                 'rabbit@192-168-7-148']},
 {partitions,[]}]
...done.

No partitions \o/

The JuJu errors for RabbitMQ should clear within a few minutes:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0             idle   1.25.13.1 0/lxc/12 5672/tcp 192.168.1.148
rabbitmq-server/1   unknown   idle   1.25.13.1 1/lxc/8  5672/tcp 192.168.1.163
rabbitmq-server/2   unknown   idle   1.25.13.1 2/lxc/10 5672/tcp 192.168.1.174

You should also find the nova-compute instances starting up fine.

,

Planet DebianRuss Allbery: 2017 Book Reading in Review

So much of my reading energy this year went into endlessly reloading political web sites and reading essays and poll analysis. This was not a very good use of that energy, but I did it anyway, and I'm not sure I could have stopped. It was a very 2017 problem, and I know I'm not alone — it was an anxious, anger-inducing year for a lot of us. I think that's also why I read shorter books (although more of them) than in 2016. Most of the year's reading happened in a couple of bursts during vacations.

My reading goal for last year was to get back to reading award nominees and previous award winners. The overall quality of my reading rose towards the end of the year, and I think several books I read in 2017 are likely to be award nominees or winners in 2018, but I still fell short of that goal. I'm carrying it over to 2018, coupling it with a goal to read more non-fiction, and calling that a goal to make time and energy for deeper, more demanding, and more rewarding reading. I want to sustain that over the year, rather than concentrating all my reading energy in vacations.

There were no 10 out of 10 books this year, but there were 6 books with a 9 rating. On the fiction side, two of them were the second and third books of Julie E. Czerneda's Species Imperative series: Migration and Regeneration. I recommend the entire series, starting with Survival, as excellent SF focusing on practicing scientists and on biology and ecology rather than physics. Czerneda has a slightly cartoony style that can take a bit to get used to, and I found the romance subplot unfortunate, but the protagonist was a delight and the last two books of the series were excellent.

The other fiction books with 9 out of 10 ratings were Becky Chambers's A Closed and Common Orbit, a sequel to The Long Way to a Small, Angry Planet that I thought was even better than the original, and Melina Marchetta's The Piper's Son. Many thanks to Light for the recommendation of the latter; it's the sort of mainstream literary fiction that I wouldn't have found without recommendations. It's a satisfying story about untangling past emotional mistakes and finding ways to move forward, but all the subtle work done by friendship networks was what made it special to me.

The two non-fiction books I gave 9 out of 10 ratings this year were Algorithms to Live By by Brian Christian and Tom Griffiths, and Why We Sleep by Matthew Walker. The first was a well-structured look at how we apply computer science algorithms to everyday life: short on actionable insight, but long on thoughtful analogies (email and social media as buffer bloat!) and new ways to view everyday decisions. The second is a passionate attempt to convince everyone to get more sleep. Like many projects dear to the author's heart, it should be taken with a grain of salt, but I found the summary of current sleep research fascinating.

The last book that I think deserves special mention is The Tiger's Daughter by K. Arsenault Rivera. It lacks the polish of some of the other books I read, and at times could be a sprawling mess, but of all the books I read this year, it's the one that most reliably puts a smile on my face when I remember it. It is completely unabashed about its emotions and completely in love with its characters and dares the world to do something about it, and I needed a book like that in 2017.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

Planet DebianAbhijith PA: Attending Pyconf Hyderabad

Chris Lamb if you are seeing this, then I am on Debian Planet :) !!!

Hyderabad Pyconf

Even though I mostly hack on Ruby gems in my Debian time, Python is my first choice of programming language. I maintain a Python archiving application in Debian.

Apart from a local meet-up, I had never attended any Python social gathering. I always wanted to attend Pycons because whenever I search for something related to Python on Youtube I get talks from various Pycons. So I was looking for a Pycon in India and found out the Hyderabad community was organizing one on the IIIT campus, on a perfect weekend for me. I planned the itinerary and found out the travel and accommodation were over my budget. I did the next obvious thing that everyone does, which is asking the host for sponsorship, but with my Debian hat on. They told me they couldn’t help. Then I asked ICFOSS, which is an autonomous organization created by the Kerala Government to foster Free Software activities in the state. Their answer was the same; they don’t even have a program to support Free Software developers yet. My last hope was Debian. I sent a grant request to our DPL Chris Lamb. He was kind enough to approve the request the same day :). \o/

Everything was set. Fast forward to the morning of 8th October. I woke up around 5 AM. Bad weather: it was a very rainy day. But somehow I caught a bus to the nearest town, and from there I took an Uber cab to the airport. I checked in and boarded the aeroplane, an Airbus A-320. After landing at Rajeev Gandhi International Airport, Hyderabad, I quickly jumped into an Ola cab to Gachibowli, where the event was happening. I went straight to the registration desk and collected my tag and goodies. The talks had already begun. During the break I caught up with Ramanathan and then Manivannan. I had been in contact with these people prior to the event, as they had helped in answering a lot of my doubts. There were a couple of stalls in the venue by the Python Software Foundation, Redhat, Pramati and IBM. I went to the PSF stall and collected some stickers. I also had some Debian stickers with me. I put them on the PSF desk and people started swarming to get them, which was very cool to see. I should have put them on the Redhat desk :). Then I lurked around the other stalls. At the Redhat stall I met Sourabh, who is also a Fedora contributor. We had a good chat about Debian and Fedora. He also showed me his workstation, which is a Thinkpad T470s. And during the lunch break I met Naresh, who works for Linaro.

Hyderabad Pyconf

The evening sessions were wonderful. There were talks about serverless IoT and about Stephanie, which is a platform for creating virtual assistants. The last keynote was from Kushal Das. He is from the Python Software Foundation and is also associated with projects like Digital Freedom Press and Fedora. Kushal talked about hacker cultures and first generation hackers. It was a very exciting talk. After the session I went to meet Kushal and we talked about Tor and Dmitry Bogatov, who got arrested for running a Tor exit node.

After the event I left the IIIT campus and walked outside. When I started the journey I had plans to see the city a little bit, but night had already consumed the streets. I decided on one thing: instead of taking a cab, I would travel by bus. I took a city bus to the airport. It was a Telangana State Road Transport Corporation (TSRTC) bus, which is similar to Kerala’s KSRTC except that it has a name, “Pushpak”. After one hour I reached the airport, with plenty of time to catch my flight home. So I just wandered around the airport to kill time. When I landed here in the morning, I didn’t even look around; now I had some time to see it. I was also curious about the politics of the Andhra Pradesh-Telangana separation, so I read some online articles about the issues. I entered the airport and still had time, so I took a small nap. After that I went to collect my boarding pass and boarded the aircraft. I took another nap in the plane, and woke up when the air hostess asked me to return my reclined seat to the upright position, so basically we were about to touch down in a matter of minutes. After landing at CIAL (Cochin International Airport Limited), I got the bus home. It was a good trip after all. The only mistake I made was that when I collected my T-shirt I forgot to check its size, and after reaching home I found out it was too small.

Planet DebianJonathan McDowell: Twisted Networking with an EE MiFi

Life’s been busy for the past few months, so excuse the lack of posts. One reason for this is that I’ve moved house. Virgin were supposed to install a cable modem on November 11th, but at the time of writing currently have my install on hold (more on that in the future). As a result when we actually moved in mid-December there was no broadband available. I’d noticed Currys were doing a deal on an EE 4GEE WiFi Mini - £4.99 for the device and then £12.50/month for 20GB on a 30 day rolling contract. Seemed like a good stopgap measure even if it wasn’t going to be enough for streaming video. I was pleasantly surprised to find it supported IPv6 out of the box - all clients get a globally routed IPv6 address (though it’s firewalled so you can’t connect back in; I guess this makes sense but it would be nice to be able to allow things through). EE are also making use of DNS64 + NAT64, falling back to standard CGNAT when the devices don’t support that.

All well and good, but part of the problem in the new place is a general lack of mobile reception in some rooms (foil backed internal insulation doesn’t help). So the MiFi is at the top of the house, where it gets a couple of bars of 4G reception and sits directly above the living room and bedroom. Coverage in those rooms is fine, but the kitchen is at the back of the house through a couple of solid brick walls and the WiFi ends up extremely weak there. Additionally my Honor 7 struggles to get a 3 signal in that room (my aging Nexus 7, also on 3, does fine, so it seems more likely to be the Honor 7 at fault here). I’ve been busy with various other bits over the Christmas period, but with broadband hopefully arriving in the new year I decided it was time to sort out my UniFi to handle coverage in the kitchen.

The long term plan is cabling around the house, but that turned out to be harder than expected (chipboard flooring and existing cabling not being in conduit ruled out the easy options, so there needs to be an external run from the top to the bottom). There is a meter/boiler room which is reasonably central and thus a logical place for cable termination and an access point to live. So I mounted the UniFi there, on the wall closest to the kitchen. Now I needed to get it connected to the MiFi, which was still upstairs. Luckily I have a couple of PowerLine adaptors I was using at the old place, so those provided a network link between the locations. The only remaining problem was that the 4GEE doesn’t have ethernet. What it does have is USB, and it presents as a USB RNDIS network interface. I had a spare DGN3500 lying around, so I upgraded it to the latest LEDE, installed kmod-usb-net-rndis and usb-modeswitch and then had a usb0 network device. I bridged this with eth0.1 - I want clients to talk to the 4GEE DHCP server so they can roam between the 2 APs, and I want the IPv6 configuration to work on both APs as well. I did have to change the IP on the DGN3500 as well - it defaulted to 192.168.1.1 which is what the 4GEE uses. Switching it to a static 192.168.1.2 ensures I can still get to it when the 4GEE isn’t active and prevents conflicts.
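
For reference, a minimal sketch of the sort of /etc/config/network stanza that achieves this bridge on LEDE. This is illustrative only, not the exact configuration from this install: the stanza name, the netmask and the assumption that the local DHCP server on the bridge is disabled (so clients lease from the 4GEE) are guesses, while the interface names and the static address are as described above.

config interface 'lan'
        option type 'bridge'
        option ifname 'eth0.1 usb0'
        option proto 'static'
        option ipaddr '192.168.1.2'
        option netmask '255.255.255.0'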

The whole thing ends up looking like the following (I fought Inkscape + Dia for a bit, but ASCII art turned out to be the easiest option):

/----------\       +-------+       +--------------+            +------------+
| Internet |--LTE--| EE 4G |--USB--|   DGN3500    |--Ethernet--| TL-PA9020P |
|          |       | MiFi  |       | LEDE 17.01.4 |            | PowerLine  |
\----------/       +-------+       +--------------+            +------------+
                       |                                              |
                      WiFi                                            |
                       |                                              |
                  +---------+                                         |
                  | Clients |                                     Ring Main
                  +---------+                                         |
                       |                                              |
                      WiFi                                            |
                       |                                              |
                  +--------+            +----------+            +------------+
                  | UniFi  |--Ethernet--|   PoE    |--Ethernet--| TL-PA9020P |
                  | AC Pro |            | Injector |            | PowerLine  |
                  +--------+            +----------+            +------------+

It feels a bit overly twisted for use with just a 4G connection, but various bits will be reusable when broadband finally arrives.

Planet DebianBits from Debian: New Debian Developers and Maintainers (November and December 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Ben Armstrong (synrg)
  • Frédéric Bonnard (frediz)
  • Jerome Charaoui (lavamind)
  • Michael Jeanson (mjeanson)
  • Jim Meyering (meyering)
  • Christopher Knadle (krait)

The following contributors were added as Debian Maintainers in the last two months:

  • Chris West
  • Mark Lee Garrett
  • Pierre-Elliott Bécue
  • Sebastian Humenda
  • Stefan Schörghofer
  • Stephen Gelman
  • Georg Faerber
  • Nico Schlömer

Congratulations!

Worse Than FailureBest of…: 2017: Nature, In Its Volatility

Happy New Year! Put that hangover on hold, as we return to an entirely different kind of headache, back on the "Galapagos". -- Remy

About two years ago, we took a little trip to the Galapagos- a tiny, isolated island where processes and coding practices evolved… a bit differently. Calvin, as an invasive species, brought in new ways of doing things- like source control, automated builds, and continuous integration- and changed the landscape of the island forever.

Geospiza parvula

Or so it seemed, until the first hiccup. Shortly after putting all of the code into source control and automating the builds, the application started failing in production. Specifically, the web service calls out to a third party web service for a few operations, and those calls universally failed in production.

“Now,” said Hank, the previous developer and now Calvin’s supervisor, “I thought you said this should make our deployments more reliable. Now, we got all these extra servers, and it just plumb don’t work.”

“We’re changing processes,” Calvin said, “so a glitch could happen easily. I’ll look into it.”

“Looking into it” was a bit more of a challenge than it should have been. The code was a pasta-golem: a gigantic monolith of spaghetti. It had no automated tests, and wasn’t structured in a way that made it easy to test. Logging was nonexistent.

Still, Calvin’s changes to the organization helped. For starters, there was a brand new test server he could use to replicate the issue. He fired up his testing scripts, ran them against the test server, and… everything worked just fine.

Calvin checked the build logs, to confirm that both test and production had the same version, and they did. So next, he pulled a copy of the code down to his machine, and ran it. Everything worked again. Twiddling the config files didn’t accomplish anything. He built a version of the service configured for remote debugging, and chucked it up to the production server… and the error went away. Everything suddenly started working fine.

Quickly, he reverted production. On his local machine, he did something he’d never really had call to do- he flipped the build flag from “Debug” to “Release” and recompiled. The service hung. When built in “Release” mode, the resulting DLL had a bug that caused a hang, but it was something that never appeared when built in “Debug” mode.

“I reckon you’re still workin’ on this,” Hank asked, as he ambled by Calvin’s office, thumbs hooked in his belt loops. “I’m sure you’ve got a smart solution, and I ain’t one to gloat, but this ain’t never happened the old way.”

“Well, I can get a temporary fix up into production,” Calvin said. He quickly threw a debug build up onto production, which wouldn’t have the bug. “But I have to hunt for the underlying cause.”

“I guess I just don’t see why we can’t build right on the shared folder, is all.”

“This problem would have cropped up there,” Calvin said. “Once we build for Release, the problem crops up. It’s probably a preprocessor directive.”

“A what now?”

Hank’s ignorance about preprocessor directives was quickly confirmed by a search through the code- there were absolutely no #if statements in there. Calvin spent the next few hours staring at this block of code, which is where the application seemed to hang:

public class ServiceWrapper
{
    bool thingIsDone = false;
    //a bunch of other state variables

    public string InvokeSoap(methodArgs args)
    {
        //blah blah blah
        soapClient client = new Client();
        client.doThingCompleted += new doThingEventHandler(MyCompletionMethod);
        client.doThingAsync(args);

        do
        {
            string busyWork = "";
        }
        while (thingIsDone == false);

        return "SUCCESS!"; //seriously, this is what it returns
    }

    private void MyCompletionMethod(object sender, completedEventArgs e)
    {
        //do some other stuff
        thingIsDone = true;
    }
}

Specifically, it was in the busyWork loop where the thing hung. He stared and stared at this code, trying to figure out why thingIsDone never seemed to become true, but only when built in Release. Obviously, it had to be a compiler optimization- and that’s when the lightbulb went off.

The C# compiler, when building for release, will look for variables whose values don’t appear to change, and replace them with in-lined constants. In serial code, this can be handled with some pretty straightforward static analysis, but in multi-threaded code, the compiler can make “mistakes”. There’s no way for the compiler to see that thingIsDone ever changes, since the change happens in an external thread. The fix is simple: chuck volatile on the variable declaration to disable that optimization.

volatile bool thingIsDone = false solved the problem. Well, it solved the immediate problem. Having seen the awfulness of that code, Calvin couldn’t sleep that night. Nightmares about the busyWork loop and the return "SUCCESS!" kept him up. The next day, the very first thing he did was refactor the code to actually properly handle multiple threads.
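
One possible shape for such a refactor (a sketch only, not the article's actual fix, and reusing the fictional soapClient/methodArgs types from the snippet above) is to drop the shared flag entirely and let the completion handler signal a TaskCompletionSource, so there is nothing left for the optimizer to mis-cache:

using System.Threading.Tasks;

public class ServiceWrapper
{
    public string InvokeSoap(methodArgs args)
    {
        // The completion event completes this task instead of flipping a shared bool.
        var done = new TaskCompletionSource<bool>();

        var client = new soapClient();
        client.doThingCompleted += (sender, e) => done.TrySetResult(true);
        client.doThingAsync(args);

        // Blocks without spinning; no busy-wait loop and no stale reads to worry about.
        done.Task.Wait();
        return "SUCCESS!";
    }
}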

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet DebianJunichi Uekawa: hello to 2018.

hello to 2018. Wishing for a great new year.

Planet DebianAndreas Metzler: Fixing the boot-delay

Some time ago my computer stopped booting fast; it was stuck for about 10 seconds at Wait for Network to be Configured. This seemed strange, since I am using systemd-networkd with a simple static configuration.

[Match]
Name=enp2s*

[Network]
Address=10.0.0.140/8
Gateway=10.0.0.138

Debugging showed the expected quick network setup followed by "NDISC":

Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Flags change: +LOWER_UP +RUNNING
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Gained carrier
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Setting addresses
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Updating address: 10.0.0.140/8 (valid forever)
Dez 31 15:39:02 argenau systemd-timesyncd[368]: Network configuration changed, trying to establish connection.
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Addresses set
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Setting routes
Dez 31 15:39:02 argenau systemd-networkd[279]: enp2s0: Routes set
Dez 31 15:39:02 argenau kernel: r8169 0000:02:00.0 enp2s0: link up
Dez 31 15:39:02 argenau kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0: link becomes ready
Dez 31 15:39:03 argenau systemd-timesyncd[368]: Synchronized to time server 194.177.151.10:123 (0.debian.pool.ntp.org).
Dez 31 15:39:04 argenau systemd-networkd[279]: enp2s0: Adding address: fe80::dacb:8aff:fecb:9db/64 (valid forever)
Dez 31 15:39:04 argenau systemd-networkd[279]: enp2s0: Gained IPv6LL
Dez 31 15:39:04 argenau systemd-networkd[279]: enp2s0: Discovering IPv6 routers
Dez 31 15:39:04 argenau systemd-networkd[279]: NDISC: Started IPv6 Router Solicitation client
Dez 31 15:39:04 argenau systemd-networkd[279]: NDISC: Sent Router Solicitation, next solicitation in 4s
Dez 31 15:39:08 argenau systemd-networkd[279]: NDISC: Sent Router Solicitation, next solicitation in 7s
Dez 31 15:39:16 argenau systemd-networkd[279]: NDISC: Sent Router Solicitation, next solicitation in 15s
Dez 31 15:39:16 argenau systemd-networkd[279]: NDISC: No RA received before link confirmation timeout
Dez 31 15:39:16 argenau systemd-networkd[279]: NDISC: Invoking callback for 't'.
Dez 31 15:39:16 argenau systemd-networkd[279]: enp2s0: Configured
Dez 31 15:39:16 argenau systemd-networkd-wait-online[290]: ignoring: lo
Dez 31 15:39:16 argenau systemd[1]: Started Wait for Network to be Configured.
Dez 31 15:39:16 argenau systemd[1]: Reached target Network is Online.

Given that I am not running a local IPv6 network, I have simply added IPv6AcceptRA=no to the [Network] stanza, and bootup is fast again.
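
For reference, the resulting .network file then looks like this (the same static configuration as above, with just the one added line):

[Match]
Name=enp2s*

[Network]
Address=10.0.0.140/8
Gateway=10.0.0.138
IPv6AcceptRA=no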

Planet DebianJames McCoy: Monthly FLOSS activity - 2017/12 edition

Happy New Year!

Debian

vim

subversion


neovim

  • Went on a spree merging quickfix-related patches from Vim.

Subversion

  • Reworked logging of lz4 library version in svn --verbose --version output to support versions of lz4 in Debian/Ubuntu releases. (r1818801, r1818807, r1818868)
  • Fixed autoconf macros for lz4/utf8proc incorrectly using "yes" as the path prefix. (r1818871)

Vim

  • Reviewed a pull request to merge RedHat's Hunspell support to fall back on when Vim's spell dictionaries aren't present. This is something I've worked on before, but ran out of tuits and wasn't sure if Bram would actually merge such a change. Hopefully he'll provide some feedback on the PR so we can see if it's worth spending time on polishing the PR.
  • Based on discussion in the above PR and a new lintian tag, submitted a pull request to use AC_PATH_TOOL instead of AC_PATH_PROG to locate pkg-config.

Planet DebianRuss Allbery: Review: Barbary Station

Review: Barbary Station, by R.E. Stearns

Series: Barbary Station #1
Publisher: Saga
Copyright: October 2017
ISBN: 1-4814-7688-2
Format: Kindle
Pages: 448

Adda and Iridian are newly-graduated engineers who, as the book opens, are hijacking a colony ship to deliver it to a notorious group of pirates. Adda is the computer expert: dyed hair, neural implants, and the sort of high-tech gear required to subsume computer systems. Iridian is a former soldier, a Shieldrunner to be specific. They graduated into an awful economy following a secessionist war between Earth and the outer system and have spent much of their adult lives trying to keep their heads above water financially. This is Adda's scheme to get them enough money to live comfortably and, more importantly, together: hijack a colony ship, eject the passengers, and deliver the rest to the most successful pirate gang in the system.

This plan goes surprisingly well right up to the point where they arrive at Barbary Station. There, they discover that the pirates everyone believes are living in luxury in a former ship-breaking station are, instead, prisoners in a cobbled-together emergency shelter attached to the side of a station they can't safely enter. The pirates don't control Barbary Station. A malicious AI does, and it's trying very hard to kill them.

You can tell that this book was written in 2017 by the fact that a college education in engineering is only financially useful as a stepping point to piracy and crime. I can't imagine an author more than 20 or 30 years ago writing about economically desperate STEM college graduates, and yet it now seems depressingly plausible.

James Nicoll's appreciation for this story was derailed early by the total lack of attention the main characters give to the hapless passengers of the colony ship who get abandoned in deep space. I'm forced to admit that I barely noticed that, probably because the story seemed to barely notice it. Adda and Iridian do show some care for ordinary civilians stuck in the line of fire later in the book, but they primarily see the world in terms of allies and opportunities rather than solidarity among the victims. To be fair to them, their future is a grim, corporate-controlled oligarchy that is entirely uninterested in teaching such luxuries as empathy.

Despite some interesting examination of AI systems and the interaction of logic between security and environmental controls, Barbary Station is not really about its world-building or science-fiction background. If you try to read it as that sort of book, you will probably be frustrated by unanswered, and even unasked, questions. The plot is more thriller than idea exploration: can the heroines make allies, subvert a malicious AI, figure out what really happened on the station, and stay alive long enough for any of the answers to matter? There are a lot of bloody fights, an escalating series of terrifying ways in which an AI can try to kill unwanted parasites, and the constant danger that their erstwhile allies will suddenly decide they've outlived their usefulness.

As long as what you want is a thriller, though, this is an enjoyable one, although not exceptional. It has the occasional writing problem that I'll attribute to first novel: I got very tired of the phrase "the cold and the dark," for example, and the set pieces in the crumbling decks of a badly damaged space station were less epic than I would have wished because I struggled to visualize them. But the tension builds satisfyingly, the sides and factions on the station are entertainingly complex, and the resolution of the AI plot was appropriately creepy and inhuman. This AI felt like a computer with complex programming, not like a human, and that's hard to pull off.

This is also a book in which one of the protagonists is a computer hacker, and I was never tempted to throw it at a wall. The computers acted basically like computers within the conceit of neural implants that force metaphorical mental models instead of code. For me, that's a high bar to meet.

What Barbary Station does best is show a mixed working partnership. On the surface, Adda and Iridian fall into the brains and brawn stereotypes, but Stearns takes their relationship much deeper than that. Adda is nervous, distant, skittish, and needs her time alone to concentrate. She's comfortable in her own space with her thoughts. Iridian may be the muscle, but she's also the gregarious and outgoing one who inspires trust and loves being around people. While Adda works out the parameters of the pirates' AI problem, Iridian is making friends, identifying grudges and suspicions, and figuring out how to cross faction boundaries. And Adda and Iridian know each other well, understand each other's strengths and weaknesses, and fill in each other's gaps with unconscious ease. Books with this type of partnership protagonist told in alternating viewpoints aren't unheard of, but they aren't common, and I think Stearns did it very well. (I did find myself wishing the chapters would advertise the protagonist of that chapter, though, particularly when picking this book up after a reading break.)

Barbary Station felt like what military SF could be if it were willing to consider more varied human motivations than duty and honor, allow lesbian partners as protagonists, and use suspicious criminals instead of military units as the organizational structure. It has a similar focus on the technical hardware, immediate survival problems, the dangers of space, physical feats of heroism, and navigating factions in violent, hierarchical organizations. Characterization gets deeper and more satisfying as the book goes on, and there are a few moments of human connection that I found surprisingly moving. It's not entirely the book I wanted, it takes a while to get going, and I don't think the world background quite hung together, but by the end of the book I was having a hard time putting it down.

If you're in the mood for a desperate fight against malicious automation in an abandoned deep space structure, and can tolerate some world-building gaps, repetitive wording, and some odd failures of empathy, you could definitely do worse.

This is a mostly self-contained story, but there were enough hooks for a sequel that I was unsurprised to see that it will be followed by Mutiny at Vesta.

Rating: 6 out of 10

Planet Linux Australia – Simon Lyall: Donations 2017

Like in 2016 and 2015 I am blogging about my charity donations.

The majority of donations were done during December (I start around my birthday) although after my credit card got suspended last year I spread them across several days.

The inspiring others bit seems to have worked a little. Ed Costello has blogged his donations for 2017.

I’ll note that throughout the year I’ve also been giving money via Patreon to several people whose online content I like. I suspended these payments in early December when Patreon announced changes to its fee structure, but they have since backed down on the change, so I’ll probably restart the payments in early 2018.

As usual my main donation was to Givewell. This year I gave to them directly and allowed them to allocate to projects as they wish.

  • $US 600 to Givewell (directly for their allocation)

In March I gave to two organizations I follow online. Transport Blog re-branded themselves as “Greater Auckland” and are positioning themselves as a lobbying organization as well as a news site.

Signum University produce various education material around science-fiction, fantasy and medieval literature. In my case I’m following their lectures on Youtube about the Lord of the Rings.

I gave some money to the Software Freedom Conservancy to allocate across their projects and again to the Electronic Frontier Foundation for their online advocacy.

And lastly I gave to various Open Source Projects that I regularly use.


,

Planet Debian – Julian Andres Klode: A year ends, a new year begins

2017 is ending. It’s been a rather uneventful year, I’d say. About 6 months ago I started working on my master’s thesis – it plays with adding linear types to Go – and I handed that in about 1.5 weeks ago. It’s not really complete, though – you cannot actually use it on a complete Go program. The source code is of course available on GitHub; it’s a bunch of Go code for the implementation and a bunch of Markdown and LaTeX for the document. I’m happy about the code coverage, though: As a properly developed software project, it achieves about 96% code coverage – the missing parts happening at the end, when time ran out 😉

I released apt 1.5 this year, and started 1.6 with seccomp sandboxing for methods.

I went to DebConf17 in Montreal. I unfortunately did not make it to DebCamp, nor the first day, but I at least made the rest of the conference. There, I gave a talk about APT development in the past year, and had a few interesting discussions. One thing that directly resulted from such a discussion was a new proposal for delta upgrades, with a very simple delta format based on a variant of bsdiff (with external compression, streamable patches, and constant memory use rather than linear). I hope we can implement this – the savings are enormous with practically no slowdown (there is no reconstruction phase, upgrades are streamed directly to the file system), which is especially relevant for people with slow or data-capped connections.
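
To make the streaming aspect concrete, here is a small Python sketch of the general idea (not the actual delta format proposed for apt; all names below are made up for illustration): a patch is consumed as a sequence of "copy from the old file" and "insert literal bytes" operations, so the new file can be written straight to disk in small chunks, with constant memory use and no separate reconstruction phase.

CHUNK = 64 * 1024  # copy in small pieces so memory use stays constant


def apply_delta(old_path, delta_ops, new_path):
    """Apply an iterable of ('copy', offset, length) / ('insert', data) ops."""
    with open(old_path, "rb") as old, open(new_path, "wb") as new:
        for op in delta_ops:
            if op[0] == "copy":
                _, offset, length = op
                old.seek(offset)
                while length > 0:
                    block = old.read(min(CHUNK, length))
                    if not block:
                        raise ValueError("copy past end of old file")
                    new.write(block)
                    length -= len(block)
            elif op[0] == "insert":
                new.write(op[1])
            else:
                raise ValueError("unknown delta opcode: %r" % (op[0],))


# Example use: reuse the first 4 KiB of the old file, then splice in new bytes.
# apply_delta("old.deb", [("copy", 0, 4096), ("insert", b"new payload\n")], "new.deb")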

This month, I’ve been buying a few “toys”: I got a pair of speakers (JBL LSR 305), and I got a pair of noise-cancelling headphones (Sony WH-1000XM2). Nice stuff. Been wearing the headphones most of today, and they’re quite comfortable and really make things quiet, except for their own noise 😉 Well, both the headphones and the speakers have a white noise issue, but oh well, the prices were good.

This time of the year is not only a time to look back at the past year, but also to look forward to the year ahead. In one week, I’ll be joining Canonical to work on Ubuntu foundation stuff. It’s going to be interesting. I’ll also be moving places shortly: having partially lived in student housing for 6 years (one room, and a shared kitchen), I’ll be moving to a complete apartment.

On the APT front, I plan to introduce a few interesting changes. One of them involves automatic removal of unused packages: This should be happening automatically during install, upgrade, and whatever. Maybe not for all packages, though – we might have a list of “safe” autoremovals. I’d also be interested in adding metadata for transitions: Like if libfoo1 replaces libfoo0, we can safely remove libfoo0 if nothing depends on it anymore. Maybe not for all “garbage” either. It might make sense to restrict it to new garbage – that is packages that become unused as part of the operation. This is important for safe handling of existing setups with automatically removable packages: We don’t suddenly want to remove them all when you run upgrade.
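
As a rough illustration of what restricting this to "new garbage" could mean, here is a toy, set-based Python sketch; apt's actual resolver is C++ and far more involved, so treat the function and argument names below as assumptions made up for this example.

def new_garbage(auto_installed, required_before, required_after):
    """Packages that only become unused as part of the current operation.

    All arguments are sets of package names: the automatically installed
    packages, and the packages something depends on before and after the
    operation.  This is a toy model, not apt's actual dependency resolver.
    """
    garbage_before = auto_installed - required_before
    garbage_after = auto_installed - required_after
    # Packages that were already garbage before the operation are left alone;
    # only the newly unused ones are candidates for automatic removal.
    return garbage_after - garbage_before


# libfoo0 becomes removable because the upgrade switched everything to libfoo1,
# while old-cruft (already unused before the operation) is not touched.
print(new_garbage({"libfoo0", "old-cruft"}, {"libfoo0"}, {"libfoo1"}))
# -> {'libfoo0'}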

The other change is about sandboxing. You might have noticed that sometimes, sandboxing is disabled with a warning because the method would not be able to access the source or the target. The goal is to open these files in the main program and send file descriptors to the methods via a socket. This way, we can avoid permission problems, and we can also make the sandbox stronger – for example, by not giving it access to the partial/ directory anymore.
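
The mechanism behind this is plain SCM_RIGHTS file-descriptor passing over a Unix domain socket. apt itself is C++, but the self-contained Python sketch below (using socket.send_fds/recv_fds, available since Python 3.9) demonstrates the technique: the main program opens the file with its own privileges, and the sandboxed worker only ever receives an already-open descriptor, so it needs no filesystem permissions of its own.

import os
import socket

parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

if os.fork() == 0:
    # Child ("method"): never opens the file itself; it just receives an
    # open descriptor over the socket.
    parent_sock.close()
    msg, fds, _flags, _addr = socket.recv_fds(child_sock, 1024, 1)
    with os.fdopen(fds[0], "rb") as f:
        print("worker got %r, first bytes: %r" % (msg, f.read(16)))
    os._exit(0)
else:
    # Parent (main program): opens the file and hands over only the descriptor.
    child_sock.close()
    with open("/etc/hostname", "rb") as f:
        socket.send_fds(parent_sock, [b"target-file"], [f.fileno()])
    os.wait()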

Another change we need to work on is standardising the “Important” field, which is sort of like essential – it marks an installed package as extra-hard to remove (but unlike Essential, does not cause apt to install it automatically). The latest draft calls it “Protected”, but I don’t think we have a consensus on that yet.

I also need to get happy eyeballs done – fast fallback from IPv6 to IPv4. I had a completely working solution some months ago, but it did not pass CI, so I decided to start from scratch with a cleaner design to figure out if I went wrong somewhere. Testing this is kind of hard, as it basically requires a broken IPv6 setup (well, unreachable IPv6 servers).
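
For a feel of the intended behaviour, the same algorithm (RFC 8305, "happy eyeballs") is built into Python's asyncio via the happy_eyeballs_delay parameter (Python 3.8+), so a short sketch can show it even though apt's own implementation is separate C++ code: the first address family is tried, and after a short delay the next one is started in parallel, with whichever connects first winning.

import asyncio


async def fetch_status_line(host: str, port: int = 80) -> None:
    reader, writer = await asyncio.open_connection(
        host, port,
        happy_eyeballs_delay=0.25,  # start the next address family after 250 ms
    )
    writer.write(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    await writer.drain()
    print((await reader.readline()).decode().strip())
    writer.close()
    await writer.wait_closed()


asyncio.run(fetch_status_line("deb.debian.org"))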

Oh well, 2018 has begun, so I’m going to stop now. Let’s all do our best to make it awesome!


Filed under: Debian, General, Ubuntu

Harald Welte: Osmocom Review 2017

As 2017 has just concluded, let's have a look at the major events and improvements in the Osmocom Cellular Infrastructure projects (i.e. those projects dealing with building protocol stacks and network elements for mobile network infrastructure).

I've prepared a detailed year 2017 summary at the osmocom.org website, but let me write a bit about the most note-worthy topics here.

NITB Split

Once upon a time, we implemented everything needed to operate a GSM network inside a single process called OsmoNITB. Those days are now gone, and we have separate OsmoBSC, OsmoMSC, OsmoHLR, OsmoSTP processes, which use interfaces that are interoperable with non-Osmocom implementations (which is what some of our users require).

This change is certainly the most significant change in the close-to-10-year history of the project. However, we have tried to make it as non-intrusive as possible, by using default point codes and IP addresses which will make the individual processes magically talk to each other if installed on a single machine.

We've also released an OsmoNITB Migration Guide, as well as our usual set of user manuals in order to help our users.

We'll continue to improve the user experience, to re-introduce some of the features lost in the split, such as the ability to attach names to the subscribers.

Testing

We have osmo-gsm-tester together with the two physical setups at the sysmocom office, which continuously run the latest Osmocom components and test an entire matrix of different BTSs, software configurations and modems. However, this tests at super low load, and it tests only signalling so far, not user plane yet. Hence, coverage is limited.

We also have unit tests as part of the 'make check' process, Jenkins-based build verification before merging any patches, as well as integration tests for some of the network elements in TTCN-3. This is much more than we had until 2016, but still by far not enough, as we have just seen in the fall-out from the sub-optimal 34C3 event network.

OsmoCon

2017 also marks the year where we've for the first time organized a user-oriented event. It was a huge success, and we will for sure have another OsmoCon incarnation in 2018 (most likely in May or June). It will not be back-to-back with the developer conference OsmoDevCon this time.

SIGTRAN stack

We have a new SIGTRAN stack with SUA, M3UA and SCCP as well as OsmoSTP. This had been lacking for a long time.

OsmoGGSN

We have converted OpenGGSN into OsmoGGSN, a true member of the Osmocom family, thereby deprecating the original OpenGGSN which we had earlier adopted and maintained.

Harald Welte: 34C3 and its Osmocom GSM/UMTS network

At the 34th annual Chaos Communication Congress, a team of Osmocom folks continued the many-years-old tradition of operating an experimental Osmocom-based GSM network at the event. Though I originally started that tradition, I'm not involved in the installation and/or operation of that network; all the credit goes to Lynxis, neels, tsaitgaist and the larger team of volunteers surrounding them. My involvement was only to answer the occasional technical question and to look at bugs that show up in the software during operation, and if possible fix them on-site.

34C3 marks two significant changes in terms of its cellular network:

  • the new post-nitb Osmocom stack was used, with OsmoBSC, OsmoMSC and OsmoHLR
  • both a GSM/GPRS network (on 1800 MHz) was operated, as well as (for the first time) a UMTS network (in the 850 MHz band)

The good news is: The team did great work building this network from scratch, in a new venue, and without relying on people that have significant experience in network operation. Definitely, the team was considerably larger and more distributed than at the time when I was still running that network.

The bad news is: There was a seemingly endless number of bugs that were discovered while operating this network. Some shortcomings were known before, but the extent and number of bugs uncovered all across the stack was quite devastating to me. Sure, at some point from day 2 onwards we had a network that provided [some level of] service, and as far as I've heard, some ~ 23k calls were switched over it. But that was after more than two days of debugging + bug fixing, and we still saw unexplained behavior and crashes later on.

This is such a big surprise as we have put a lot of effort into testing over the last years. This starts from the osmo-gsm-tester software and continuously running test setup, and continues with the osmo-ttcn3-hacks integration tests that mainly I wrote during the last few months. Both us and some of our users have also (successfully!) performed interoperability testing with other vendors' implementations such as MSCs. And last, but not least, the individual Osmocom developers had been using the new post-NITB stack on their personal machines.

So what does this mean?

  • I'm sorry about the sub-standard state of the software and the resulting problems we've experienced in the 34C3 network. The extent of problems surprised me (and I presume everyone else involved)
  • I'm grateful that we've had the opportunity to discover all those bugs, thanks to the GSM team at 34C3, to Deutsche Telekom for donating 3 ARFCNs from their spectrum, and to the German regulatory authority Bundesnetzagentur for providing the experimental license in the 850 MHz spectrum.
  • We need to have even more focus on automatic testing than we had so far. None of the components should be without exhaustive test coverage on at least the most common transactions, including all their failure modes (such as timeouts, rejects, ...)

My preferred method of integration testing has been by using TTCN-3 and Eclipse TITAN to emulate all the interfaces surrounding a single of the Osmocom programs (like OsmoBSC) and then test both valid and invalid transactions. For the BSC, this means emulating MS+BTS on Abis; emulating MSC on A; emulating the MGW, as well as the CTRL and VTY interfaces.

I currently see the following areas in biggest need of integration testing:

  • OsmoHLR (which needs a GSUP implementation in TTCN-3, which I've created on the spot at 34C3) where we e.g. discovered that updates to the subscriber via VTY/CTRL would surprisingly not result in an InsertSubscriberData to VLR+SGSN
  • OsmoMSC, particularly when used with external MNCC handlers, which was so far blocked by the lack of an MNCC implementation in TTCN-3, which I've been working on both on-site and after returning home.
  • user plane testing for OsmoMGW and other components. We currently only test the control plane (MGCP), but not the actual user plane e.g. on the RTP side between the elements
  • UMTS related testing on OsmoHNBGW, OsmoMSC and OsmoSGSN. We currently have no automatic testing at all in these areas.

Even before 34C3 and the above-mentioned experiences, I concluded that for 2018 we will pursue a test-driven development approach for all new features added by the sysmocom team to the Osmocom code base. The experience with the many issues at 34C3 has just confirmed that approach. In parallel, we will have to improve test coverage on the existing code base, as outlined above. The biggest challenge will of course be to convince our paying customers of this approach, but I see very little alternative if we want to ensure production quality of our cellular stack.

So here we come: 2018, The year of testing.

Planet Debian – Paul Wise: FLOSS Activities December 2017

Changes

Issues

Review

Administration

  • Debian: fix sbuild config on buildd, cleanup after git gc OOM, redirect CD support queries, answer SSH hash question again
  • Debian QA: deploy patches
  • Debian mentors: check why package was not imported
  • Debian installer: apply patches
  • Debian PTS: investigate issues after git migration, apply patch, deploy patches
  • Debian package tracker: deploy patches, update mirror URL in db
  • Debian mentors: remove a broken upload
  • Debian wiki: redirect a bouncing mail, whitelist email addresses
  • Debian website: work on git conversion
  • Debian derivatives census: deploy change, remove patches larger than 1MiB
  • Openmoko: forcibly reboot

Communication

Initiate discussions about m68k usertags, buildd chroot creation failures and game server packaging.

Sponsors

The dpkg patch was sponsored by my employer. All other work was done on a volunteer basis.

Planet Debian – Russ Allbery: DocKnot 1.02

DocKnot is my current personal automation project: try to find some way to keep all the various documentation for my software projects in sync while reusing boilerplate that applies to multiple projects. I'm still figuring out how I want it to work, so I haven't written the sort of comprehensive documentation that would let someone else more easily use it. The current tentative plan is to do that for the 2.00 release, around the time I've switched all my active project documentation and all my software web pages over to it.

This release has all the incremental improvements I needed to handle the pam-krb5 documentation:

  • Support quoted paragraphs when converting to text.
  • Support numbered lists when converting to thread.
  • Force long, unbreakable lines to be left intact when wrapping (see the sketch after this list).
  • Preserve two spaces after periods in more cases.
  • Support test/prefix metadata to override testing instructions.
  • Add a new license text for pam-krb5.
  • Support more complex quote attributes in thread output.
  • Add security advisory support in thread output.
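
As a rough illustration of the wrapping behaviour mentioned in the list above: DocKnot itself is written in Perl, but the small Python sketch below shows the same idea, namely that long unbreakable tokens such as URLs are kept intact on their own line instead of being split at the wrap column.

import textwrap

wrapper = textwrap.TextWrapper(
    width=40,
    break_long_words=False,   # never split a token that exceeds the width
    break_on_hyphens=False,   # don't treat hyphens inside URLs as break points
)

paragraph = ("See https://www.eyrie.org/~eagle/software/docknot/ for the "
             "distribution page and the current documentation.")
print("\n".join(wrapper.wrap(paragraph)))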

I'll probably redo how the license is named in the future, since right now I'm abusing copyright-format 1.0 syntax in a way that wasn't intended. (I use that format for the upstream license files of all of my software.)

Right now, this is all very, very specific to my software page styles and how I like to document software, and doesn't aspire to be more than that. That's why I've not uploaded it to Debian, although it is available on CPAN (as App::DocKnot) if anyone wants it.

You can get the latest version from the DocKnot distribution page.

Planet Debian – Chris Lamb: Free software activities in December 2017

Here is my monthly update covering what I have been doing in the free software world in December 2017 (previous month):

  • Released a new version of python-gfshare, my Python library that implements Shamir’s method for secret sharing, fixing parts of the documentation as well as fixing two warnings via contributions by Kevin Ji [...] [...].
  • Opened a PR against vim-pizza (a plugin to order pizza from within the Vim text editor) to use xdg-open or sensible-browser under Debian and derivatives. [...]
  • Created two pull requests for the RediSearch search engine module for Redis, first to un-ignore the /debian dir in .gitignore to aid packaging [...] and second to inherit CFLAGS/LDFLAGS from the outside environment to enable hardening support [...].
  • Even more hacking on the Lintian static analysis tool for Debian packages:
    • New features:
      • Support Standards-Version 4.1.3.
      • Warn when files specified in Files-Excluded exist in the source tree. (#871454)
      • Check Microsoft Windows Portable Executable (PE) files missing hardening features. (#837548)
      • Warn about Python 2.x packages using ${python3:Depends} and Python 3.x packages using ${python:Depends}. (#884676)
      • Check changelog entries with incorrectly formatted dates. (#793406)
      • Check override_dh_fixperms targets missing calls to dh_fixperms. (#885910)
      • Ensure PAM modules are in the admin section, preventing a false positive for libpam-krb5. (#885899)
      • Check Python packages installing modules called site, docs, examples etc. into the global namespace. (#769365)
      • Check packages that invoke AC_PATH_PROG without considering cross-compilation. (#884798)
      • Emit a warning for packages that mismatch version control systems in Vcs-* headers. (#884503)
      • Warn when packages specify a Bugs field in debian/control that does not refer to official Debian infrastructure. (#741071)
      • Warn for packages shipping pkg-config files under /usr/lib/pkgconfig. (#885096)
      • Warn about packages that ship non-reproducible Python .doctree files. (#885327)
      • Bump the recommended Debhelper compat level to 11. (#884699)
      • Warn about Python 3 packages that depend on Python 2 packages (and vice versa). (#782277)
      • Check for override_dh_clean targets missing calls to dh_clean. (#884817)
      • Check Apache 2.0-licensed packages that do not distribute their accompanying NOTICE files. (#885042)
      • Detect embedded jQuery libraries with version number in their filenames. (#833613)
      • Also emit embedded-javascript-library for Twitter Bootstrap and Mustache.
      • Check development packages that ship ELF binaries in $PATH. (#794295)
      • Warn about library packages with excessive priority. (#834290)
      • Warn about Multi-Arch: foreign packages that ship CMake, pkg-config or static libraries in public, architecture-dependent search paths. (#882684)
      • Test for packages shipping gschemas.compiled files. (#884142)
      • Warn if a package ships compiled font files. (#884165)
      • Detect invalid debian/po/POTFILES.in. (#883653)
      • Warn for packages that modify the epoch yet there's no comment about the change in the changelog.
    • Bug fixes:
    • Reporting improvements:
    • Documentation:
    • Miscellaneous:
      • Add a vendor profile for Purism's PureOS. (#884408)
      • Allow the tag display limit to be configured via --tag-display-limit. (#813525)
      • Tag build-dependencies with <!nocheck> in debian/control.
      • Make -v imply --no-tag-display-limit. (#812756)
      • Remove the "russian" to "Russian" spelling corrections as they are covered by data/spelling/corrections-case. (#883041)
  • Suggested an improvement to the "lack of entropy" error message in the TLSH (Trend Micro Locality Sensitive Hash) fuzzy matching algorithm. [...]
  • I also blogged about simple media cachebusting when using GitHub Pages.

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
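
As a minimal sketch of that consensus check: given two independently rebuilt copies of the same package (the paths below are placeholders), comparing their checksums is enough to detect that something differs, and a tool such as diffoscope can then explain what.

import hashlib


def artifact_sha256(path):
    """Hash a build artifact in chunks, since packages can be large."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


mine = artifact_sha256("my-rebuild/hello_2.10-1_amd64.deb")
theirs = artifact_sha256("their-rebuild/hello_2.10-1_amd64.deb")
print("reproducible" if mine == theirs else "MISMATCH - investigate with diffoscope")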

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Support Android ROM boot.img introspection. (#884557)
  • Handle the case where a file to be "fuzzy" matched does not contain enough entropy despite being over 512 bytes (see the sketch after this list). (#882981)
  • Ensure the cleanup of symlink placeholders is idempotent. [...]
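
For a feel of why a file well over 512 bytes can still defeat fuzzy matching, here is a small sketch assuming the py-tlsh module (diffoscope's own handling differs, and the exact failure value varies between py-tlsh versions): TLSH simply declines to produce a usable digest when the input lacks entropy, regardless of its size.

import tlsh

low_entropy = b"\x00" * 4096                    # plenty of bytes, almost no entropy
normal = open("/etc/services", "rb").read()     # ordinary text file

for name, data in (("low-entropy", low_entropy), ("normal", normal)):
    digest = tlsh.hash(data)
    # Depending on the py-tlsh version, an unhashable input yields an empty
    # string or the placeholder "TNULL" instead of a real digest.
    if not digest or digest == "TNULL":
        print(name, "-> no usable TLSH digest (not enough entropy)")
    else:
        print(name, "->", digest)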

trydiffoscope

trydiffoscope is a web-based version of the diffoscope in-depth and content-aware diff utility. Continued thanks to Bytemark for sponsoring the hardware.

  • Parse the dpkg-parsechangelog output in setup.py instead of hardcoding the version (see the sketch after this list). [...]
  • Flake8 the main file. [...]
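
One way to avoid hardcoding the version in setup.py is to ask dpkg-parsechangelog for it; the fragment below is a sketch of that approach and not necessarily the exact code used by trydiffoscope.

import subprocess
from setuptools import setup

# dpkg-parsechangelog reads debian/changelog and can print a single field.
version = subprocess.check_output(
    ["dpkg-parsechangelog", "--show-field", "Version"],
    universal_newlines=True,
).strip()

setup(
    name="trydiffoscope",
    version=version,
    # ... remaining metadata elided ...
)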

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Don't HTTP 500 if no request body. [...]
  • Catch TypeError: decode() argument 1 must be string, not None tracebacks. [...]


Debian

My activities as the current Debian Project Leader will be covered in my Bits from the DPL email to the debian-devel-announce mailing list.

Patches contributed

  • bitseq: Add missing Build-Depends on python-numpy for documentation generation. (#884677)
  • dh-golang: Avoid "uninitialized value" warnings. (#885696)
  • marsshooter: Avoid source-includes-file-in-files-excluded Lintian override. (#885732)
  • gtranslator: Do not ship .pyo and .pyc files. (#884714)
  • media-player-info: Bugs field does not refer to Debian infrastructure. (#885703)
  • pydoctor: Add a Homepage field to debian/control. (#884255)

Debian LTS


This month I have been paid to work 14 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Updating old notes in data/dla-needed.txt.
  • Issued DLA 1204-1 for the evince PDF viewer to fix an arbitrary command injection vulnerability where a specially-crafted embedded DVI filename could be exploited to run commands as the current user when "printing" to PDF.
  • Issued DLA 1209-1 to fix a vulnerability in sensible-browser (a utility to start the most suitable web browser based on one's environment or configuration) where remote attackers could conduct argument-injection attacks via specially-crafted URLs.
  • Issued DLA 1210-1 for kildclient, a "MUD" multiplayer real-time virtual world game to remedy a command-injection vulnerability.

Uploads

  • python-django (2:2.0-1) — Release the new upstream stable release to the experimental suite.
  • redis:
    • 5:4.0.5-1 — New upstream release & use "metapackage" over "meta-package" in debian/control.
    • 5:4.0.6-1 — New upstream bugfix release.
    • 5:4.0.6-2 — Replace redis-sentinel's main dependency on redis-server with redis-tools, moving the creation/deletion of the redis user and the associated data & log directories from redis-server to redis-tools (#884321), and add stub manpages for redis-sentinel, redis-check-aof & redis-check-rdb.
    • 5:4.0.6-1~bpo9+1 — Upload to the stretch-backports repository.
  • redisearch:
    • 1.0.1-1 — New upstream release.
    • 1.0.2-1 — New upstream release; ensure the .so file is hardened (upstream patch), update upstream's .gitignore so our changes under debian/ are visible without -f (upstream patch), and override no-upstream-changelog in all binary packages.
  • installation-birthday (6) — Bump Standards-Version to 4.1.2 and replace Priority: extra with Priority: optional.

Finally, I also made the following miscellaneous uploads:

  • cpio (2.12+dfsg-6), NMU-ing a new 2.12 upstream version to the "unstable" suite.
  • wolfssl (3.12.2+dfsg-1 & 3.13.0+dfsg-1) — Sponsoring new upstream versions.

Debian bugs filed


FTP Team


As a Debian FTP assistant I ACCEPTed 106 packages: aodh, autosuspend, binutils, btrfs-compsize, budgie-extras, caja-seahorse, condor, cross-toolchain-base-ports, dde-calendar, deepin-calculator, deepin-shortcut-viewer, dewalls, dh-dlang, django-mailman3, flask-gravatar, flask-mail, flask-migrate, flask-paranoid, flask-peewee, gcc-5-cross-ports, getmail, gitea, gitlab, golang-github-go-kit-kit, golang-github-knqyf263-go-deb-version, golang-github-knqyf263-go-rpm-version, golang-github-mwitkow-go-conntrack, golang-github-parnurzeal-gorequest, golang-github-prometheus-tsdb, haskell-unicode-transforms, haskell-unliftio-core, htslib, hyperkitty, libcbor, libcdio, libcidr, libcloudproviders, libepubgen, libgaminggear, libgitlab-api-v4-perl, libgoocanvas2-perl, libical, libical3, libixion, libjaxp1.3-java, liblog-any-adapter-tap-perl, liborcus, libosmo-netif, libt3config, libtirpc, linux-show-player, mailman-hyperkitty, mailman-suite, mailmanclient, muchsync, node-browser-stdout, node-crc32, node-deflate-js, node-get-func-name, node-ip-regex, node-json-parse-better-errors, node-katex, node-locate-path, node-uglifyjs-webpack-plugin, nq, nvidia-cuda-toolkit, openstack-meta-packages, osmo-ggsn, osmo-hlr, osmo-libasn1c, osmo-mgw, osmo-pcu, patman, peewee, postorius, pyasn1, pymediainfo, pyprind, pysmi, python-colour, python-defaults, python-django-channels, python-django-x509, python-ldap, python-quamash, python-ratelimiter, python-rebulk, python-trezor, python3-defaults, python3-stdlib-extensions, python3.6, python3.7, qscintilla2, range-v3, rawkit, remmina, reprotest, ruby-gettext-i18n-rails-js, ruby-webpack-rails, sacjava, sphinxcontrib-pecanwsme, unicode-cldr-core, wolfssl, writerperfect, xrdp & yoshimi.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against: libtirpc, python-ldap, python-trezor & sphinxcontrib-pecanwsme.

Don Marti: some more random links

This one is timely, considering that an investment in "innovation" comes with a built-in short position in Bay Area real estate, and the short squeeze is on: Collaboration in 2018: Trends We’re Watching by Rowan Trollope

In 2018, we’ll see the rapid decline of “place-ism,” the discrimination against people who aren’t in a central office. Technology is making it easier not just to communicate with distant colleagues about work, but to have the personal interactions with them that are the foundation of trust, teamwork, and friendship.

Really, "place-ism" only works if you can afford to overpay the workers who are themselves overpaying for housing. And management can only afford to overpay the workers by giving in to the temptations of rent-seeking and deception. So the landlord makes the nerd pay too much, the manager has to pay the nerd too much, and you end up with, like the man said, "debts that no honest man can pay"?

File under "good examples to illustrate Betteridge's law of headlines": Now That The FCC Is Doing Away With Title II For Broadband, Will Verizon Give Back The Taxpayer Subsidies It Got Under Title II?

Open source business news: Docker, Inc is Dead. Easy to see this as a run-of-the-mill open source business failure story. But at another level, it's the story of how the existing open source incumbents used open practices to avoid having to bid against each other for an overfunded startup.

If "data is the new oil" where is the resource curse for data? Google Maps’s Moat, by Justin O’Beirne (related topic: once Google has the 3d models of buildings, they can build cool projects: Project Sunroof)

Have police departments even heard of Caller ID Spoofing or Swatting? Kansas Man Killed In ‘SWATting’ Attack

Next time I hear someone from a social site talking about how much they're doing about extremists and misinformation and such, I have to remember to ask: have you adjusted your revenue targets for political advertising down in order to reflect the bad shit you're not doing any more? How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Or are you just encouraging the "dark social" users to hide it better?

ICYMI, great performance optimization: Firefox 57 delays requests to tracking domains

Boring: you're operating a 4500-pound death machine. Exciting: three Slack notifications and a new AR game! Yes, Smartphone Use Is Probably Behind the Spike in Driving Deaths. So Why Isn’t More Being Done to Curb It?

I love "nopoly controls entire industry so there is no point in it any more" stories: The Digital Advertising Duopoly Good news on advertising. The Millennials are burned out on advertising—most of what they're exposed to now is just another variant of "creepy annoying shit on the Internet"—but the generation after the Millennials are going to have hella mega opportunities building the next Creative Revolution.

Another must-read for the diversity and inclusion department. 2017 Was the Year I Learned About My White Privilege by Max Boot.

Sam Varghese: All your gods have feet of clay: Sarah Ferguson’s fall from grace

The year that ends today was remarkable for one thing on the media front that has gone largely unnoticed: the fall from grace of one of the Australian Broadcasting Corporation’s brightest stars who has long been a standard-setter at the country’s national broadcaster.

Sarah Ferguson was the journalist’s journalist, seemingly a woman of fierce integrity, and one who pandered to neither left nor right. When she sat in for Leigh Sales, the host of 7.30, the main current affairs programme, for six months while Sales was on a maternity leave break, the programme seemed to come to life as she attacked politicians with vigour and fearlessness.

There was bite in her speech, there was knowledge, there was surprise aplenty. Apart from the stint on 7.30, she brought depth and understanding to a long programme on the way the Labor Party tore itself to bits while in government for six years from 2007, a memorable TV saga.

A powerful programme on domestic violence during the year was filled with the kind of sparse and searing dialogue for which Ferguson was known. I use the past tense advisedly for it all came apart in October.

That was the month when Ferguson decided for some strange reason to interview Hillary Clinton for an episode of Four Corners, the ABC’s main investigative affairs programme. How an interview with a jaded politician who was trying to blame all and sundry for her defeat in the 2016 US presidential election fit into this category is anyone’s guess.

The normally direct and forthright Ferguson seemed to be in awe of Clinton, and gave the former American secretary of state free pass after free pass, never challenging the lies that Clinton used to paint herself as the victim of some extraordinary plot.

In fact, Ferguson herself appeared to be so embarrassed by her own performance that she penned — or a ghost writer did — an article that was totally out of character, claiming that she had prepared for the interview and readied herself to the extent possible.

To put it kindly, this was high-grade bulldust.

Either Ferguson was overwhelmed by the task of interviewing a figure such as Clinton or else she decided to go easy on one of her own sex. It was a pathetic sight to see one of the best journalists at the ABC indulge in such ordinary social intercourse.

There were so many points during the interview when Ferguson could have caught her subject napping. And that’s an art she is adept at.

But, alas, that 45 minutes went without a single contradiction, without a single interjection, with Ferguson projecting a has-been as somehow a subject who was worthy of being featured on a programme that generally caters to hard news. By the time this interview took place, Clinton had been interviewed by world+dog as she tried to sell her book, What Happened.

Thus, Ferguson was reduced to recycling old stuff, and she made a right royal hash of it too.

I sent the following complaint to the ABC on 22 October, six days after the interview was broadcast:

I am writing to make a formal complaint about the dissemination of false information on the ABC, via the Four Corners interview with Hillary Clinton on 16 October.

Sarah Ferguson conducted the interview but did not challenge numerous falsehoods uttered by Clinton.

Four Corners is promoted as investigative journalism. Ferguson had done no preparation at all to anticipate Clinton’s lies – and that is a major failing. There was no investigative aspect about this interview. It was pap at its finest. The ABC had a long article about how Ferguson had prepared for the interview – but this seems to be so much eyewash.

Clinton has been interviewed numerous times after her election loss and many of these interviews are available on the internet, beginning with the 75-minute interview done by Walt Mossberg and Kara Swisher of The Verge on 30 June. So Ferguson cannot claim that she did not have access to these.

The ABC, as part of its charter, has to be balanced. Given that there were so many accusations made about WikiLeaks publisher Julian Assange by Clinton, he should have been given a chance — after the programme — to give his side of the story. This did not happen.

There has been a claim by Four Corners executive producer Sally Neighbour on Twitter that she wrote to Assange on 19 September seeking an interview. But this, if true, could not have been a right of reply as it was well before the Clinton interview.

Neighbour also retweeted a tweet which said “Assange is Putin’s bitch” as part of promoting the programme. This is not very professional, to put it mildly.

Finally, given that the allegations about the 2016 US presidential polls have been dominating the news for more than a year, Ferguson should have known what the central issues raised by Clinton would be. Else, she should not have done the interview.

If, as claimed, Ferguson did prepare for the interview, then how she did not know about these issues is a mystery.

Some of the false statements made by Clinton during the interview, none of which were challenged by Ferguson.

1. “And if he’s such a, you know, martyr of free speech, why doesn’t WikiLeaks ever publish anything coming out of Russia?” was one of Clinton’s claims about Assange.

Here is a massive WikiLeaks drop on Russia: https://wikileaks.org/spyfiles/russia/

There are many documents critical of Russia: https://contraspin.co.nz/in-plain-sight-why-wikileaks-is-clearly-not-in-bed-with-russia/

2. Clinton claimed that the release of emails from the Democratic National Committee was timed to cut off oxygen to the stories around the Trump Hollywood access tape which was released on 7 October.

This again is a lie. Assange had proclaimed the release of these emails on 4 October – http://www.reuters.com/article/us-ecuador-sweden-assange/wikileaks-assange-signals-release-of-documents-before-u-s-election-idUSKCN1240UG

3. Clinton claimed in the interview that made-up stories were run using the DNC emails. This again is a lie.

One email showed that Islamic State is funded by Qatar and Saudi Arabia, both countries from which Clinton accepted donations for the Clinton Foundation. The fact that the DNC acted to favour Clinton over Bernie Sanders for the Democrat nomination in 2016 was also mentioned in the emails.

The New York Times published stories based on these facts. Ferguson did not challenge Clinton’s lie about this.

There are numerous other instances of stories being run based on the emails – and none of these was ever contradicted by Clinton or her lackeys.

4. Clinton also claimed that the DNC emails were obtained through an external hack. There is no concrete evidence for this claim.

There is evidence, however, produced by NSA whistleblower William Binney and CIA veteran Ray McGovern to show that they could only have been taken by an internal source, who was likely to have used a USB key and copied them. See https://consortiumnews.com/2017/09/20/more-holes-in-russia-gate-narrative/

5. Clinton alleged that Julian Assange is a “tool of Russian intelligence” who “does the bidding of a dictator”.

Barack Obama himself is on the record (https://www.youtube.com/watch?v=XEu6kHRHYhU&t=30s) as stating that there is no clear connection between WikiLeaks and Russian intelligence. Several others in his administration have said likewise.

Once again, Ferguson did not challenge the lie.

6. Ferguson did not ask Clinton a word about the Libyan conflict where 40,000 died in an attack which she (Clinton) had orchestrated.

7. Not a word was asked about the enormous sums Clinton took from Wall Street for speeches – Goldman Sachs, a pivot of the global financial crisis, was charged $675,000 for a speech.

The ABC owes the public, whose funds it uses, an explanation for the free pass given to Clinton.

Kieran Doyle of the ABC’s Audience and Consumer Affairs department sent me the following response on 6 December:

Your complaint has been investigated by Audience and Consumer Affairs, a unit which is separate to and independent of program making areas within the ABC. We have considered your concerns and information provided by ABC News management, reviewed the broadcast and assessed it against the ABC’s editorial standards for impartiality and accuracy.

The newsworthy focus of this interview was Hillary Clinton’s personal reaction to her defeat to Donald Trump in the most stunning election loss in modern US history, which she has recounted in her controversial book What Happened. Mrs Clinton is a polarising historical figure in US politics who is transparently partisan and has well established personal and political positions on a wide range of issues. An extended, stand-alone interview will naturally include her expressing critical views of her political adversaries and her personal view on why she believes she was defeated.

ABC News management has explained the program was afforded 40 minutes of Mrs Clinton’s time for the interview, and the reporter had to be extremely selective about the questions she asked and the subjects she covered within that limited time frame. It was therefore not possible to contest and interrogate, to any significant degree, all of the claims she made. We note she did challenge Mrs Clinton’s view of Mr Assange –

SARAH FERGUSON: lots of people, including in Australia, think that Assange is a martyr for free speech and freedom of information.

SARAH FERGUSON: Isn’t he just doing what journalists do, which is publish information when they get it?

We are satisfied that Mrs Clinton’s comments about Wikileaks not publishing critical information about Russia, were presented within the context of the key issues of the 2016 US election and what impacted the campaign. The interview was almost exclusively focused on events leading up to the election, and the reporter’s questions about Wikileaks were clearly focused on its role in the leadup to the election, rather than after it. One of the key events in the lead-up to the election was the Wikileaks releases of hacked emails from the Democratic National Committee and Clinton’s campaign manager over a number of dates, starting with the Democratic Convention.

In response to your concern, the program has provided the following statement –

The key point is that Wikileaks didn’t publish anything about Russia during the campaign. Wikileaks received a large cache of documents related to the Russian government during the 2016 campaign and declined to publish them.

Mrs Clinton’s suggestion that the release of the Podesta emails was timed to occur shortly after the release of the Access Hollywood tape was presented as her personal view. It was not presented as a factual statement that has been confirmed.

ABC News management has explained that in anticipation of the interview with Mrs Clinton, Four Corners wrote to Mr Assange in September and invited him to take part in an interview, to address the criticisms she has made of him and Wikileaks in the wake of her election defeat. The program explicitly set out a statement Mrs Clinton had made in a podcast interview with The New Yorker criticising Mr Assange’s alleged links to Russia, in an almost identical way to the criticisms she then made to Four Corners, and asking Mr Assange to respond. Mr Assange did not respond. Four Corners has subsequently renewed the invitation, both directly through Twitter and by email with Mr Assange’s Australian legal advisor, Greg Barnes, offering him a full right of reply. The program would welcome the opportunity to interview Mr Assange.

We observe that Four Corners published a report covering Mr Assange’s reaction to Mrs Clinton’s criticisms, soon after he released his comments. That report is available at ABC News online by clicking the attached link –

http://www.abc.net.au/news/2017-10-16/hillary-clinton-says-julian-assange-helped-donald-trump-win/9047944

We note the program’s Executive Producer has stated publicly that her re-tweet of an offensive viewer tweet during the program was done in error. ABC News management has explained there was very heavy twitter traffic on the night and the re-tweet was a mistake and not done intentionally. As soon as she realised she had done it, the Executive Producer deleted the tweet and apologised on Twitter that evening.

While Audience and Consumer Affairs believe it would have been preferable for the program to make a further attempt to seek Mr Assange’s response to Mrs Clinton’s criticism of him, between recording the interview and broadcasting it, we are satisfied that the program has clearly afforded Mr Assange an opportunity to be interviewed to respond to those criticisms, and to the extent that he has responded, his views have been reported in the Four Corners online article on the ABC website.

Please be assured that the specific issues you personally believe should have been raised with Mrs Clinton are noted.

The ABC Code of Practice is available online at the attached link; http://about.abc.net.au/reports-publications/code-of-practice/.

Should you be dissatisfied with this response to your complaint, you may be able to pursue the matter with the Australian Communications and Media Authority http://www.acma.gov.au

Planet Debian – Russ Allbery: pam-krb5 4.8

This is the default Kerberos PAM module for Debian and Ubuntu systems, and supports both MIT Kerberos and Heimdal. I'm not sure how many people still use straight Kerberos PAM modules these days, with sssd taking off, but I'm still maintaining it.

This release fixes a somewhat obscure bug: if you configure the module to do expired password changes properly, it checks to see that the expired credentials can still get kadmin/changepw credentials to do the password change. However, it was setting credential options improperly on that call, which could cause it to spuriously fail if, say, krb5.conf is configured to request proxiable credentials but kadmin/changepw doesn't support proxiable credentials. Thanks to Florian Best for the excellent bug report.

The test suite in this version also works properly with Heimdal 7.0.1 and later, which changed a bunch of the messages (at the cost of skipping tests with earlier versions of Heimdal), and reports richer error messages on PKINIT failures with Heimdal. It also includes documentation fixes and lots of warning fixes, and now builds properly with tons of warnings enabled with GCC 7, Clang, and the Clang static analyzer.

You can get the latest version from the pam-krb5 distribution page.

Planet Debian – Russ Allbery: rra-c-util 7.0

This is my collection of utility libraries and support code for (mostly) C software.

The major version bump is due to a backwards-incompatible change: dropping the SA_LEN macro from portable/socket.h, including all the Autoconf machinery to probe for it. This macro came from INN's old portability code when porting to IPv6, but INN turned out to not really need it and it never caught on. It was causing some warnings with GCC 7 that would otherwise have been hard to fix, so it was time for it to go.

There are a couple of other changes to function signatures that shouldn't matter for backward compatibility: network_sockaddr_sprint now takes a socklen_t for better type compatibility with other networking functions, and bail_krb5 and diag_krb5 in the TAP add-ons take a long as the error code argument so that they can take either a krb5_error_code or a kadm5_ret_t.

The remaining changes are all about enabling more warnings. rra-c-util now builds with intensive warnings enabled in both GCC 7 and Clang, and the warning options have been refreshed against GCC 7. It also reports clean from the Clang static analyzer, which included changing reallocarray and the vector implementations to always allocate memory of at least the minimum size.

You can get the latest version from the rra-c-util distribution page.

Planet Debian – Russ Allbery: C TAP Harness 4.2

The functional change in this release of my test framework for C programs is the addition of a new is_blob test function. This is equivalent to ok(memcmp(...)) but it reports where the two memory regions differ as a diagnostic. This was contributed by Daniel Collins.

Otherwise, the changes are warning fixes and machinery to more easily test with warnings. C TAP Harness now supports being built with warnings with either GCC or Clang.

You can get the latest version from the C TAP Harness distribution page.

Planet Debian – Gregor Herrmann: RC bugs 2017/30-52

for some reason I'm still keeping track of the release-critical bugs I touch, even though it's a long time since I systematically try to fix them. & since I have the list, I thought I might as well post it here, for the third (& last) time this year:

  • #720666 – src:libxml-validate-perl: "libxml-validate-perl: FTBFS: POD coverage test failure"
    don't run POD tests (pkg-perl)
  • #810655 – libembperl-perl: "libembperl-perl: postinst fails when libapache2-mod-perl2 is not installed"
    upload backported fix to jessie
  • #825011 – libdata-alias-perl: "libdata-alias-perl: FTBFS with Perl 5.24: undefined symbol: LEAVESUB"
    upload new upstream release (pkg-perl)
  • #826465 – texlive-latex-recommended: "texlive-latex-recommended: Unescaped left brace in regex is deprecated"
    propose patch
  • #851506 – cpanminus: "cpanminus: major parts of upstream sources with compressed white-space"
    take tarball from github (pkg-perl)
  • #853490 – src:libdomain-publicsuffix-perl: "libdomain-publicsuffix-perl: ftbfs with GCC-7"
    apply patch from ubuntu (pkg-perl)
  • #853499 – src:libopengl-perl: "libopengl-perl: ftbfs with GCC-7"
    new upstream release (pkg-perl)
  • #867514 – libsolv: "libsolv: find_package called with invalid argument "2.7.13+""
    propose a patch, later upload to DELAYED/1, then patch included in a maintainer upload
  • #869357 – src:libdigest-whirlpool-perl: "libdigest-whirlpool-perl FTBFS on s390x: test failure"
    upload to DELAYED/5
  • #869360 – slic3r: "slic3r: missing dependency on perlapi-*"
    upload to DELAYED/5
  • #869576 – src:gimp-texturize: "gimp-texturize: Local copy of intltool-* fails with perl 5.26"
    add patch, QA upload
  • #869578 – src:gdmap: "gdmap: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869579 – src:granule: "granule: Local copy of intltool-* fails with perl 5.26"
    add patch, QA upload
  • #869580 – src:teg: "teg: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869583 – src:gnome-specimen: "gnome-specimen: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869884 – src:chemical-mime-data: "chemical-mime-data: Local copy of intltool-* fails with perl 5.26"
    provide a patch, upload to DELAYED/5 later
  • #870213 – src:pajeng: "pajeng FTBFS with perl 5.26"
    provide a patch, uploaded by maintainer
  • #870821 – src:esys-particle: "esys-particle FTBFS with perl 5.26"
    propose patch
  • #870832 – src:libmath-prime-util-gmp-perl: "libmath-prime-util-gmp-perl FTBFS on big endian: Failed 2/31 test programs. 8/2885 subtests failed."
    upload new upstream release (pkg-perl)
  • #871059 – src:fltk1.3: "fltk1.3: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/(\${ <-- HERE _IMPORT_PREFIX}/lib)(?!/x86_64-linux-gnu)/ at debian/fix-fltk-targets-noconfig line 6, <> line 1."
    propose patch
  • #871159 – texlive-extra-utils: "pstoedit: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/\\([a-zA-Z]+){([^}]*)}{ <-- HERE ([^}]*)}/ at /usr/bin/latex2man line 1327."
    propose patch
  • #871307 – libmimetic0v5: "libmimetic0v5: requires rebuild against GCC 7 and symbols/shlibs bump"
    implement reporter's recipe (thanks!)
  • #871335 – src:smlnj: "smlnj: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/~?\\begin{ <-- HERE (small|Bold|Italics|Emph|address|quotation|center|enumerate|itemize|description|boxit|Boxit|abstract|Figure)}/ at mltex2html line 1411, <DOCUMENT> line 1."
    extend existing patch, QA upload
  • #871349 – src:ispell-uk: "ispell-uk: FTBFS: The encoding pragma is no longer supported at ../../bin/verb_reverse.pl line 12."
    propose patch
  • #871357 – src:packaging-tutorial: "packaging-tutorial: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/\\end{ <-- HERE document}/ at /usr/share/perl5/Locale/Po4a/TransTractor.pm line 643."
    analyze and propose a possible patch
  • #871367 – src:fftw: "fftw: FTBFS: Unescaped left brace in regex is deprecated here (and will be fatal in Perl 5.30), passed through in regex; marked by <-- HERE in m/\@(\w+){ <-- HERE ([^\{\}]+)}/ at texi2html line 1771."
    propose patch
  • #871818 – src:debian-zh-faq: "debian-zh-faq FTBFS with perl 5.26"
    propose patch
  • #872275 – slic3r-prusa: "slic3r-prusa: Loadable library and perl binary mismatch"
    propose patch
  • #873697 – src:libtext-bibtex-perl: "libtext-bibtex-perl FTBFS on arm*/ppc64el: t/unlimited.t (Wstat: 11 Tests: 4 Failed: 0)"
    upload new upstream release prepared by smash (pkg-perl)
  • #875627 – libauthen-captcha-perl: "libauthen-captcha-perl: Random failure due to bad images"
    upload package with fixed png prepared by xguimard (pkg-perl)
  • #877841 – src:libxml-compile-wsdl11-perl: "libxml-compile-wsdl11-perl: FTBFS Can't locate XML/Compile/Tester.pm in @INC"
    add missing build dependency (pkg-perl)
  • #877842 – src:libxml-compile-soap-perl: "libxml-compile-soap-perl: FTBFS: Can't locate Test/Deep.pm in @INC"
    add missing build dependencies (pkg-perl)
  • #880777 – src:pdl: "pdl build depends on removed libgd2*-dev provides"
    update build dependency (pkg-perl)
  • #880787 – src:libhtml-formatexternal-perl: "libhtml-formatexternal-perl build depends on removed transitional package lynx-cur"
    update build dependency (pkg-perl)
  • #880843 – src:libperl-apireference-perl: "libperl-apireference-perl FTBFS with perl 5.26.1"
    change handling of 5.26.1 API (pkg-perl)
  • #881058 – gwhois: "gwhois: please switch Depends from lynx-cur to lynx"
    update dependency, upload to DELAYED/15
  • #882264 – src:libtemplate-declare-perl: "libtemplate-declare-perl FTBFS with libhtml-lint-perl 2.26+dfsg-1"
    add patch for compatibility with newer HTML::Lint (pkg-perl)
  • #883673 – src:libdevice-cdio-perl: "fix build with libcdio 1.0"
    add patch from doko (pkg-perl)
  • #885541 – libtest2-suite-perl: "libtest2-suite-perl: file conflicts with libtest2-asyncsubtest-perl and libtest2-workflow-perl"
    add Breaks/Replaces/Provides (pkg-perl)

,

Planet Debian – Chris Lamb: Favourite books of 2017

Whilst I managed to read just over fifty books in 2017 (down from sixty in 2016), here are ten of my favourites, in no particular order.

Disappointments this year included Doug Stanhope's This Is Not Fame, a barely coherent collection of bar stories that felt especially weak after Digging Up Mother, but I might still listen to the audiobook as I would enjoy his extemporisation on a phone book. Ready Player One left me feeling contemptuous, as did Charles Stross' The Atrocity Archives.

The worst book I finished this year was Adam Mitzner's Dead Certain, beating Dan Brown's Origin, a poor Barcelona tourist guide at best.







Year of Wonders

Geraldine Brooks

Teased by Hilary Mantel's BBC Reith Lecture appearances and not content with her short story collection, I looked to others for my fill of historical fiction whilst awaiting the final chapter in the Wolf Hall trilogy.

This book, Year of Wonders, subtitled A Novel of the Plague, is written from point of view of Anna Frith, recounting what she and her Derbyshire village experience when they nobly quarantine themselves in order to prevent the disease from spreading further.

I found it initially difficult to get to grips with the artificially aged vocabulary — and I hate to be "that guy" — but do persist until the chapter where Anna takes over the village apothecary.



The Second World Wars

Victor Davis Hanson

If the pluralisation of "Wars" is an affectation, it certainly is an accurate one: whilst we might consider the Second World War to be a unified conflict today, Hanson reasonably points out that this is a post hoc simplification of different conflicts from the late-1930s through 1945.

Unlike most books that attempt to cover the entirety of the war, this book is organised by topic instead of chronology. For example, there are two or three adjacent chapters comparing and contrasting naval strategy before moving onto land armies, constrasting and comparing Germany's eastern and western fronts, etc. This approach leads to a readable and surprisingly gripping book despite its lengthy 720 pages.

Particular attention is given to the interplay between the various armed services and how this tended to lead to overall strategic victory. This, as well as the economics of materiel and simple rates-of-replacement, combined with the irrationality and caprice of the Axis, would be a fair summary of the author's general thesis — this is no Churchill, Hitler & The Unnecessary War.

Hanson is not afraid to ask "what if" questions, but only where they provide meaningful explanation or deeper rationale rather than being an indulgent flight of fancy. His answers to such questions are invariably that much the same outcome would have come about.

Whilst the author is a US citizen, he does not spare his homeland from criticism, but where Hanson's background as a classical-era historian lets him down is in contrived comparisons to the Peloponnesian War and other ancient conflicts. His Napoleonic references do not feel as forced, especially due to Hitler's own obsessions. Recommended.



Everybody Lies

Seth Stephens-Davidowitz

Vying for the role as the Freakonomics for the "Big Data" generation, Everybody Lies is essentially a compendium of counter-arguments, refuting commonly-held beliefs about the internet and society in general based on large-scale observations. For example:

Google searches reflecting anxiety—such as "anxiety symptoms" or "anxiety help"—tend to be higher in places with lower levels of education, lower median incomes and where a larger portion of the population lives in rural areas. There are higher search rates for anxiety in rural, upstate New York than in New York City.

Or:

On weekends with a popular violent movie when millions of Americans were exposed to images of men killing other men, crime dropped. Significantly.

Some methodological anecdotes are included: a correlation was once noticed between teens being adopted and the use of drugs and skipping school. Subsequent research found this correlation was explained entirely by the 20% of the self-reported adoptees not actually being adopted...

Although replete with the kind of factoids that force you to announce them out loud to anyone "lucky" enough to be in the same room as you, Everybody Lies is let down by a chronic lack of structure — a final conclusion that is so self-aware of its limitations that it readily and repeatedly admits to them is still a weak conclusion.


https://images-eu.ssl-images-amazon.com/images/P/B01LWAESYQ.01._PC__.jpg https://images-eu.ssl-images-amazon.com/images/P/B0736185ZL.01._PC__.jpg https://images-eu.ssl-images-amazon.com/images/P/B01MZI77C0.01._PC__.jpg

The Bobiverse Trilogy

Dennis Taylor

I'm really not a "science fiction" person, at least not in the sense of reading books catalogued as such, with all their indulgent meta-references and stereotypical cover art.

However, I was really taken by the conceit and execution of the Bobiverse trilogy: Robert "Bob" Johansson perishes in an automobile accident the day after agreeing to have his head cryogenically frozen upon death. 117 years later he finds that he has been installed in a computer as an artificial intelligence. He subsequently clones himself multiple times, resulting in chapters written from the various Bobs' locations, timelines and perspectives around the galaxy.

One particular thing I liked about the books was their complete disregard for a film tie-in; Ready Player One was almost cynically written with this in mind, but the Bobiverse cheerfully handicaps itself by including Homer Simpson and other unlicensable characters.

Whilst the opening world-building book is the most immediately rewarding, the series kicks into gear after this — as the various Bobs branch out into differing interests (exploration, warfare, pure science, anthropology, etc.), an engrossing tapestry is woven together with a generous helping of humour and, funnily enough, believability.


https://images-eu.ssl-images-amazon.com/images/P/1784703931.01._PC__.jpg https://images-eu.ssl-images-amazon.com/images/P/B00K7ED54M.01._PC__.jpg

Homo Deus: A Brief History of Tomorrow

Yuval Noah Harari

After a number of strong recommendations I finally read Sapiens, this book's prequel.

I was gripped, especially given its revisionist insight into various stages of Man. The idea that wheat domesticated us (and not the other way around) and how adoption of this crop led to truncated and unhealthier lifespans particularly intrigued me: we have an innate bias towards chronocentrism, so to be reminded that progress isn't a linear progression from "bad" to "better" is always useful.

The sequel, Homo Deus, continues this trend by discussing the future potential of our species. I was surprised by just how humorous the book was in places. For example, here is Harari on the anthropocentric nature of religion:

You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.

Or even:

You can't settle the Greek debt crisis by inviting Greek politicians and German bankers to a fist fight or an orgy.

The chapters on AI and the inexpensive remarks about the impact of social media did not score many points with me, but I certainly preferred the latter book in that the author takes more risks with his own opinion, so it's less dry and more thought-provoking, even if one disagrees.


https://images-eu.ssl-images-amazon.com/images/P/0857535579.01._PC__.jpg https://images-eu.ssl-images-amazon.com/images/P/B01N5URPMC.01._PC__.jpg

La Belle Sauvage: The Book of Dust Volume One

Philip Pullman

I have extremely fond memories of reading (and re-reading, etc.) the author's Dark Materials as a teenager despite being started on the second book by a "supply" English teacher.

La Belle Sauvage is a prequel to this original trilogy and the first of another trio. Ms Lyra Belacqua is present as a baby but the protagonist here is Malcolm Polstead who is very much part of the Oxford "town" rather than "gown".

Alas, Pullman didn't make a study of Star Wars and thus relies a little too much on the existing canon, wary of adding new, original features. This results in an excess of Magisterium and Mrs Coulter (a superior Dolores Umbridge, by the way), and the protagonist is a little too redolent of Will...

There is also a very out-of-character chapter where the magical rules of the novel temporarily multiply, resulting in a confusion that was almost certainly not the author's intention. You'll spot it when you get to it, which you should.

(I also enjoyed the slender Lyra's Oxford, essentially a short story set just a few years after The Amber Spyglass.)


https://images-eu.ssl-images-amazon.com/images/P/B002VYJYR8.01._PC__.jpg

Open: An Autobiography

Andre Agassi

Sporting personalities certainly exist, but they are rarely revealed by their "authors", so upon friends' enquiries as to what I was reading I frequently caught myself qualifying my response with «It's a sports autobiography, but...».

It's naturally difficult to know what we can credit to Agassi or his (truly excellent) ghostwriter but this book is a real pleasure to read. This is no lost Nabokov or Proust, but the level of wordsmithing went beyond supererogatory. For example:

For a man with so many fleeting identities, it's shocking, and symbolic, that my initials are A. K. A.

Or:

I understand that there's a tax on everything in America. Now, I discover that this is the tax on success in sports: fifteen seconds of time for every fan.

Like all good books that revolve around a subject, readers do not need to know or have any real interest in the topic at hand, so even non-tennis fans will find this an engrossing read. Dark themes abound — Agassi is deeply haunted by his father, a topic I wish he went into more, but perhaps he has not done the "work" himself yet.


https://images-eu.ssl-images-amazon.com/images/P/1405910119.01._PC__.jpg https://images-eu.ssl-images-amazon.com/images/P/B00EAA6QFE.01._PC__.jpg

The Complete Short Stories

Roald Dahl

I distinctly remember reading Roald Dahl's The Wonderful Story of Henry Sugar and Six More collection of short stories as a child, some characters still etched in my mind: the 'od carrier and fingersmith of The Hitchhiker, or the protagonist polishing his silver trove in The Mildenhall Treasure.

Instead of re-reading this collection I embarked on reading his complete short stories, curious whether the rest of his œuvre was at the same level. After reading two entire volumes, I can say it mostly is — Dahl's typical humour and descriptive style are present throughout, with only a few show-off sentences such as:

"There's a trick that nearly every writer uses of inserting at least one long obscure word into each story. This makes the reader think that the man is very wise and clever. I have a whole stack of long words stored away just for this purpose." "Where?" "In the 'word-memory' section," he said, epexegetically.

There were perhaps too many of his early, mostly-factual war tales lacking an interesting conceit, and I might still recommend the Henry Sugar collection for the uninitiated, but I would heartily recommend either of these two volumes, starting with the second.


https://images-eu.ssl-images-amazon.com/images/P/B00GIUGEO2.01._PC__.jpg

Watching the English

Kate Fox

Written by a social anthropologist, this book dissects "English" behaviour for the layman, providing an insight into British humour, rites of passage, and dress/language codes, amongst others.

A must-read for anyone who is in — or considering... — a relationship with an Englishman, it is also a curious read for the native Brit: a kind of horoscope for folks, like me, who believe they are above them.

It's not perfect: Fox tediously repeats that her "rules" or patterns are not rules in the strict sense of being observed by 100% of the population; there will always be people who do not, as well as others whose defiance of a so-called "rule" only reinforces the concept. Most likely this reiteration is to sidestep wearisome criticisms but it becomes ponderous and patronising over time.

Her general conclusions (that the English are repressed, risk-averse and, above all, hypocrites) invariably oversimplify, but taken as a series of vignettes rather than a scientifically accurate and coherent whole, the book is worth your investment.

(Ensure you locate the "revised" edition — it not only contains more content, it also proffers valuable counter-arguments to rebuttals Fox received since the original publication.)


https://images-eu.ssl-images-amazon.com/images/P/B06Y619TS1.01._PC__.jpg

What Does This Button Do?

Bruce Dickinson

In this entertaining autobiography we are thankfully spared a litany of Iron Maiden gigs, successes and reproaches of the inevitable bust-ups and are instead treated to an introspective insight into just another "everyman" who could very easily be your regular drinking buddy if it weren't for a need to fulfill a relentless inner drive for... well, just about anything.

The frontman's antics as a schoolboy stand out, as do his later sojourns into Olympic fencing and commercial piloting. These latter exploits sound bizarre out of context, but despite their non-sequitur nature they make a perfect foil (hah!) to the heavy metal.

A big follower of Maiden in my teens, I fell off the wagon as I didn't care for their newer albums, so I was blindsided by Dickinson's sobering cancer diagnosis in the closing chapters. Furthermore, whilst Bruce's book fails George Orwell's test that an autobiography is only to be trusted when it reveals something disgraceful, it is tour de force enough to distract from any concept of integrity.

(I have it on excellent authority that the audiobook, which is narrated by the author, is definitely worth one's time.)

Planet DebianSune Vuorela: Aubergine – Playing with emoji

Playing with emojis

At some point, I needed to copy and paste emojis, but couldn’t find a good way to do it. So what does a good hacker do?
Scratch your own itch. As I wrote about in the past, all these projects should be shared with the rest of the world.
So here it is: https://cgit.kde.org/scratch/sune/aubergine.git/

It looks like this with the Symbola font for emojis: [screenshot]

It basically lets you search for emojis by their description, and by clicking on an emoji it gets copied to the clipboard.

As such, I’m not sure the application is really interesting, but there might be two interesting bits in the source code:

  • A parser for the Unicode data text file /usr/share/unicode/NamesList.txt is placed in lib/parser.{h,cpp} (a rough sketch of the parsing idea follows this list)
  • A class that tries to expose QClipboard as QML objects, placed in app/clipboard.{h,cpp}. I'm not yet sure if this is the right approach, but it is the one that currently makes most sense in my mind. If I receive proper feedback, I might be able to extend/finish it and submit it to Qt.
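
For the curious, here is a minimal Python sketch of the parsing idea behind the first item (the real lib/parser.{h,cpp} is C++/Qt, so this is only an illustration): data lines in NamesList.txt look like "1F346<TAB>AUBERGINE", while heading and annotation lines start with "@" or a tab and can be skipped.

    # Sketch only: map each character to its Unicode name from NamesList.txt.
    # Heading ("@...") and annotation (tab-indented) lines are skipped.
    def parse_names_list(path="/usr/share/unicode/NamesList.txt"):
        names = {}
        with open(path, encoding="utf-8") as fh:
            for raw in fh:
                line = raw.rstrip("\n")
                if not line or line[0] in "\t@;":
                    continue                    # headings, comments, annotations
                code, _, name = line.partition("\t")
                try:
                    names[chr(int(code, 16))] = name
                except ValueError:
                    continue                    # not a codepoint line
        return names

    if __name__ == "__main__":
        names = parse_names_list()
        print(names.get("\U0001F346"))          # AUBERGINE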

And of course, now it is simple to describe fancy cooking:

🍆 🔪 🔥
(aubergine) (hocho) (fire)

I ❣ emoji

Sam VargheseToo much of anything is good for nothing

Last year, Australia’s national Twenty20 competition, the Big Bash League, had 32 league games plus three finals. It was deemed a great success.

But the organiser, Cricket Australia, is not content with that. This year, there will be 40 games followed by the two semi-finals and the final. And the tournament will drag on into February.

This means many of the same cricketers will be forced to play those eight extra games, putting that much more strain on their bodies and minds. How much cricket can people play before they become jaded and reduced to going through the motions?

Why are the organisers always trying to squeeze out more and more from the same players? Why are they not content with what they have – a tournament that is popular, draws fairly decent crowds and is considered a success?

There was talk last season of increasing the number of teams; mercifully, that has not happened. There is an old saying that one can have too much of a good thing.

Many of the same cricketers who are expected to perform well at the BBL play in similar competitions around the globe – Pakistan (played in the UAE), the West Indies, New Zealand, Sri Lanka, India, and Bangladesh all have their own leagues.

At the end of the year, it should not be surprising to find that the better cricketers in this format are quite a tired lot. The organisers seem to be content with more games, never mind if they are boring, one-sided matches.

There is a breaking point in all these tournaments, one at which the people lose interest and begin to wander away. From 2015-16 to 2016-17, there was a sizeable drop in the numbers who came to watch.

While it is true that the organisers make money before a ball is bowled — the TV rights ensure that — the BBL has been sold as a family-friendly tournament that is meant for the average Australian or visitor to the country to watch in person.

Empty stands do not look good on TV and send a message to prospective attendees. But it is unlikely that such thoughts have occurred to the likes of cricket supremo James Sutherland.

Planet DebianRuss Allbery: Review: Saving Francesca

Review: Saving Francesca, by Melina Marchetta

Series: Francesca #1
Publisher: Alfred A. Knopf
Copyright: 2003
Printing: 2011
ISBN: 0-307-43371-4
Format: Kindle
Pages: 245

Francesca is in Year Eleven in St. Sebastian's, in the first year that the school opened to girls. She had a social network and a comfortable place at her previous school, but it only went to Year Ten. Most of her friends went to Pius, but St. Sebastian's is a better school. So Francesca is there, one of thirty girls surrounded by boys who aren't used to being in a co-ed school, and mostly hanging out with the three other girls who had gone to her previous school. She's miserable, out of place, and homesick for her previous routine.

And then, one morning, her mother doesn't get out of bed. Her mother, the living force of energy, the one who runs the household, who pesters Francesca incessantly, who starts every day with a motivational song. She doesn't get out of bed the next day, either. Or the day after that. And the bottom falls out of Francesca's life.

I come at this book from a weird angle because I read The Piper's Son first. It's about Tom Mackee, one of the supporting characters in this book, and is set five years later. I've therefore met these people before: Francesca, quiet Justine who plays the piano accordion, political Tara, and several of the Sebastian boys. But they are much different here as their younger selves: more raw, more childish, and without the understanding of settled relationships. This is the story of how they either met or learned how to really see each other, against the backdrop of Francesca's home life breaking in entirely unexpected ways.

I think The Piper's Son was classified as young adult mostly because Marchetta is considered a young adult writer. Saving Francesca, by comparison, is more fully a young adult novel. Instead of third person with two tight viewpoints, it's all first person: Francesca telling the reader about her life. She's grumpy, sad, scared, anxious, and very self-focused, in the way of a teenager who is trying to sort out who she is and what she wants. The reader follows her through the uncertainty of maybe starting to like a boy who may or may not like her and is maddeningly unwilling to commit, through realizing that the friends she had and desperately misses perhaps weren't really friends after all, and into the understanding of what friendship really means for her. But it's all very much caught up in Francesca's head. The thoughts of the other characters are mostly guesswork for the reader.

The Piper's Son was more effective for me, but this is still a very good book. Marchetta captures the gradual realization of friendship, along with the gradual understanding that you have been a total ass, extremely well. I was somewhat less convinced by Francesca's mother's sudden collapse, but depression does things like that, and by the end of the book one realizes that Francesca has been somewhat oblivious to tensions and problems that would have made this less surprising. And the way that Marchetta guides Francesca to a deeper understanding of her father and the dynamics of her family is emotionally realistic and satisfying, although Francesca's lack of empathy occasionally makes one want to have a long talk with her.

The best part of this book are the friendships. I didn't feel the moments of catharsis as strongly here as in The Piper's Son, but I greatly appreciated Marchetta's linking of the health of Francesca's friendships to the health of her self-image. Yes, this is how this often works: it's very hard to be a good friend until you understand who you are inside, and how you want to define yourself. Often that doesn't come in words, but in moments of daring and willingness to get lost in a moment. The character I felt the most sympathy for was Siobhan, who caught the brunt of Francesca's defensive self-absorption in a way that left me wincing even though the book never lingers on her. And the one who surprised me the most was Jimmy, who possibly shows the most empathy of anyone in the book in a way that Francesca didn't know how to recognize.

I'm not unhappy about reading The Piper's Son first, since I don't think it needs this book (and says some of the same things in a more adult voice, in ways I found more powerful). I found Saving Francesca a bit more obvious, a bit less subtle, and a bit more painful, and I think I prefer reading about the more mature versions of these characters. But this is a solid, engrossing psychological story with a good emotional payoff. And, miracle of miracles, even a bit of a denouement.

Followed by The Piper's Son.

Rating: 7 out of 10

,

CryptogramFriday Squid Blogging: Squid Populations Are Exploding

New research:

"Global proliferation of cephalopods"

Summary: Human activities have substantially changed the world's oceans in recent decades, altering marine food webs, habitats and biogeochemical processes. Cephalopods (squid, cuttlefish and octopuses) have a unique set of biological traits, including rapid growth, short lifespans and strong life-history plasticity, allowing them to adapt quickly to changing environmental conditions. There has been growing speculation that cephalopod populations are proliferating in response to a changing environment, a perception fuelled by increasing trends in cephalopod fisheries catch. To investigate long-term trends in cephalopod abundance, we assembled global time-series of cephalopod catch rates (catch per unit of fishing or sampling effort). We show that cephalopod populations have increased over the last six decades, a result that was remarkably consistent across a highly diverse set of cephalopod taxa. Positive trends were also evident for both fisheries-dependent and fisheries-independent time-series, suggesting that trends are not solely due to factors associated with developing fisheries. Our results suggest that large-scale, directional processes, common to a range of coastal and oceanic environments, are responsible. This study presents the first evidence that cephalopod populations have increased globally, indicating that these ecologically and commercially important invertebrates may have benefited from a changing ocean environment.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityKansas Man Killed In ‘SWATting’ Attack

A 28-year-old Kansas man was shot and killed by police officers on the evening of Dec. 28 after someone fraudulently reported a hostage situation ongoing at his home. The false report was the latest in a dangerous hoax known as “swatting,” wherein the perpetrator falsely reports a dangerous situation at an address with the goal of prompting authorities to respond to that address with deadly force. This particular swatting reportedly originated over a $1.50 wagered match in the online game Call of Duty. Compounding the tragedy is that the man killed was an innocent party who had no part in the dispute.

The following is an analysis of what is known so far about the incident, as well as a brief interview with the alleged and self-professed perpetrator of this crime.

It appears that the dispute and subsequent taunting originated on Twitter. One of the parties to that dispute — allegedly using the Twitter handle “SWauTistic” — threatened to swat another user who goes by the nickname “7aLeNT“. @7aLeNT dared someone to swat him, but then tweeted an address that was not his own.

Swautistic responded by falsely reporting to the Kansas police a domestic dispute at the address 7aLenT posted, telling the authorities that one person had already been murdered there and that several family members were being held hostage.

Image courtesy @mattcarries

A story in the Wichita Eagle says officers responded to the 1000 block of McCormick and got into position, preparing for a hostage situation.

“A male came to the front door,” Livingston said. “As he came to the front door, one of our officers discharged his weapon.”

“Livingston didn’t say if the man, who was 28, had a weapon when he came to the door, or what caused the officer to shoot the man. Police don’t think the man fired at officers, but the incident is still under investigation, he said. The man, who has not been identified by police, died at a local hospital.

“A family member identified that man who was shot by police as Andrew Finch. One of Finch’s cousins said Finch didn’t play video games.”

Not long after that, Swautistic was back on Twitter saying he could see on television that the police had fallen for his swatting attack. When it became apparent that a man had been killed as a result of the swatting, Swautistic tweeted that he didn’t get anyone killed because he didn’t pull the trigger (see image above).

Swautistic soon changed his Twitter handle to @GoredTutor36, but KrebsOnSecurity managed to obtain several weeks’ worth of tweets from Swautistic before his account was renamed. Those tweets indicate that Swautistic is a serial swatter — meaning he has claimed responsibility for a number of other recent false reports to the police.

The recent hoaxes he’s taken credit for include a false report of a bomb threat at the U.S. Federal Communications Commission (FCC) that disrupted a high-profile public meeting on the net neutrality debate. Swautistic also has claimed responsibility for a hoax bomb threat that forced the evacuation of the Dallas Convention Center, and another bomb threat at a high school in Panama City, Fla., among others.

After tweeting about the incident extensively this afternoon, KrebsOnSecurity was contacted by someone in control of the @GoredTutor36 Twitter account. GoredTutor36 said he’s been the victim of swatting attempts himself, and that this was the reason he decided to start swatting others.

He said the thrill of it “comes from having to hide from police via net connections.” Asked about the FCC incident, @GoredTutor36 acknowledged it was his bomb threat. “Yep. Raped em,” he wrote.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” he wrote. “But I began making $ doing some swat requests.”

Asked whether he feels remorse about the Kansas man’s death, he responded “of course I do.”

But evidently not enough to make him turn himself in.

“I won’t disclose my identity until it happens on its own,” the user said in a long series of direct messages on Twitter. “People will eventually (most likely those who know me) tell me to turn myself in or something. I can’t do that; though I know its [sic] morally right. I’m too scared admittedly.”

Update, 7:15 p.m.: A recording of the call to 911 operators that prompted this tragedy can be heard at this link. The playback of the recorded emergency calls starts around 10 minutes into the video.

Update, Dec. 30, 8:06 a.m. ET: Police in Los Angeles reportedly have arrested 25-year-old Tyler Raj Barriss in connection with the swatting attack.

ANALYSIS

As a victim of my own swatting attack back in 2013, I’ve been horrified to watch these crimes only increase in frequency ever since — usually with little or no repercussions for the person or persons involved in setting the schemes in motion. Given that the apparent perpetrator of this crime seems eager for media attention, it seems likely he will be apprehended soon. My guess is that he is a minor and will be treated with kid gloves as a result, although I hope I’m wrong on both counts.

Let me be crystal clear on a couple of points. First off, there is no question that police officers and first responders across the country need a great deal more training to bring the number of police shootings way down. That is undoubtedly a giant contributor to the swatting epidemic.

Also, all police officers and dispatchers need to be trained on what swatting is, how to spot the signs of a hoax, and how to minimize the risk of anyone getting harmed when responding to reports about hostage situations or bomb threats. Finally, officers of the peace who are sworn to protect and serve should use deadly force only in situations where there is a clear and immediate threat. Those who jump the gun need to be held accountable as well.

But that kind of reform isn’t going to happen overnight. Meanwhile, knowingly and falsely making a police report that results in a SWAT unit or other heavily armed police response at an address is an invitation for someone to get badly hurt or killed. These are high-pressure situations and in most cases — as in this incident — the person opening the door has no idea what’s going on. Heaven protect everyone at the scene if the object of the swatting attack is someone who is already heavily armed and confused enough about the situation to shoot anything that comes near his door.

In some states, filing a false police report is just a misdemeanor and is mainly punishable by fines. However, in other jurisdictions filing a false police report is a felony, and I’m afraid it’s long past time for these false reports about dangerous situations to become a felony offense in every state. Here’s why.

If making a fraudulent report about a hostage situation or bomb threat is a felony, then if anyone dies as a result of that phony report they can legally then be charged with felony murder. Under the doctrine of felony murder, when an offender causes the death of another (regardless of intent) in the commission of a dangerous crime, he or she is guilty of murder.

Too often, however, the perpetrators of these crimes are minors, and even when they’re caught they are frequently given a slap on the wrist. Swatting needs to stop, and unfortunately as long as there are few consequences for swatting someone, it will continue to be a potentially deadly means for gaining e-fame and for settling childish and pointless ego squabbles.

Planet DebianAntoine Beaupré: An overview of KubeCon + CloudNativeCon

The Cloud Native Computing Foundation (CNCF) held its conference, KubeCon + CloudNativeCon, in December 2017. There were 4000 attendees at this gathering in Austin, Texas, more than all the previous KubeCons, which shows the rapid growth of the community building around the tool that was announced by Google in 2014. Large corporations are also taking a larger part in the community, with major players in the industry joining the CNCF, which is a project of the Linux Foundation. The CNCF now features three of the largest cloud hosting businesses (Amazon, Google, and Microsoft), but also emerging companies from Asia like Baidu and Alibaba.

In addition, KubeCon saw an impressive number of diversity scholarships, which "include free admission to KubeCon and a travel stipend of up to $1,500, aimed at supporting those from traditionally underrepresented and/or marginalized groups in the technology and/or open source communities", according to Neil McAllister of CoreOS. The diversity team raised an impressive $250,000 to bring 103 attendees to Austin from all over the world.

We have looked into Kubernetes in the past but, considering the speed at which things are moving, it seems time to make an update on the projects surrounding this newly formed ecosystem.

The CNCF and its projects

The CNCF was founded, in part, to manage the Kubernetes software project, which was donated to it by Google in 2015. From there, the number of projects managed under the CNCF umbrella has grown quickly. It first added the Prometheus monitoring and alerting system, and then quickly went up from four projects in the first year, to 14 projects at the time of this writing, with more expected to join shortly. The CNCF's latest additions to its roster are Notary and The Update Framework (TUF, which we previously covered), both projects aimed at providing software verification. Those add to the already existing projects which are, bear with me, OpenTracing (a tracing API), Fluentd (a logging system), Linkerd (a "service mesh", which we previously covered), gRPC (a "universal RPC framework" used to communicate between pods), CoreDNS (DNS and service discovery), rkt (a container runtime), containerd (another container runtime), Jaeger (a tracing system), Envoy (another "service mesh"), and Container Network Interface (CNI, a networking API).

This is an incredible diversity, if not fragmentation, in the community. The CNCF made this large diagram depicting Kubernetes-related projects—so large that you will have a hard time finding a monitor that will display the whole graph without scaling it (seen below, click through for larger version). The diagram shows hundreds of projects, and it is hard to comprehend what all those components do and if they are all necessary or how they overlap. For example, Envoy and Linkerd are similar tools yet both are under the CNCF umbrella—and I'm ignoring two more such projects presented at KubeCon (Istio and Conduit). You could argue that all tools have different focus and functionality, but it still means you need to learn about all those tools to pick the right one, which may discourage and confuse new users.

Cloud Native landscape

You may notice that containerd and rkt are both projects of the CNCF, even though they overlap in functionality. There is also a third Kubernetes runtime called CRI-O built by Red Hat. This kind of fragmentation leads to significant confusion within the community as to which runtime they should use, or if they should even care. We'll run a separate article about CRI-O and the other runtimes to try to clarify this shortly.

Regardless of this complexity, it does seem the space is maturing. In his keynote, Dan Kohn, executive director of the CNCF, announced "1.0" releases for 4 projects: CoreDNS, containerd, Fluentd and Jaeger. Prometheus also had a major 2.0 release, which we will cover in a separate article.

There were significant announcements at KubeCon for projects that are not directly under the CNCF umbrella. Most notable for operators concerned about security is the introduction of Kata Containers, which is basically a merge of runV from Hyper.sh and Intel's Clear Containers projects. Kata Containers, introduced during a keynote by Intel's VP of the software and services group, Imad Sousou, are virtual-machine-based containers, or, in other words, containers that run in a hypervisor instead of under the supervision of the Linux kernel. The rationale here is that containers are convenient but all run on the same kernel, so the compromise of a single container can leak into all containers on the same host. This may be unacceptable in certain environments, for example for multi-tenant clusters where containers cannot trust each other.

Kata Containers promises the "best of both worlds" by providing the speed of containers and the isolation of VMs. It does this by using minimal custom kernel builds, to speed up boot time, and parallelizing container image builds and VM startup. It also uses tricks like same-page memory sharing across VMs to deduplicate memory across virtual machines. It currently works only on x86 and KVM, but it integrates with Kubernetes, Docker, and OpenStack. There was a talk explaining the technical details; that page should eventually feature video and slide links.

Industry adoption

As hinted earlier, large cloud providers like Amazon Web Services (AWS) and Microsoft Azure are adopting the Kubernetes platform, or at least its API. The keynotes featured AWS prominently; Adrian Cockcroft (AWS vice president of cloud architecture strategy) announced the Fargate service, which introduces containers as "first class citizens" in the Amazon infrastructure. Fargate should run alongside, and potentially replace, the existing Amazon EC2 Container Service (ECS), which is currently the way developers would deploy containers on Amazon by using EC2 (Elastic Compute Cloud) VMs to run containers with Docker.

This move by Amazon has been met with skepticism in the community. The concern here is that Amazon could pull the plug on Kubernetes when it hinders the bottom line, like it did with the Chromecast products on Amazon. This seems to be part of a changing strategy by the corporate sector in adoption of free-software tools. While historically companies like Microsoft or Oracle have been hostile to free software, they are now not only using free software but also releasing free software. Oracle, for example, released what it called "Kubernetes Tools for Serverless Deployment and Intelligent Multi-Cloud Management", named Fn. Large cloud providers are getting certified by the CNCF for compliance with the Kubernetes API and other standards.

One theory to explain this adoption is that free-software projects are becoming on-ramps to proprietary products. In this strategy, as explained by InfoWorld, open-source tools like Kubernetes are merely used to bring consumers over to proprietary platforms. Sure, the client and the API are open, but the underlying software can be proprietary. The data and some magic interfaces, especially, remain proprietary. Key examples of this include the "serverless" services, which are currently not standardized at all: each provider has its own incompatible framework that could be a deliberate lock-in strategy. Indeed, a common definition of serverless, from Martin Fowler, goes as follows:

Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service or "BaaS") or on custom code that's run in ephemeral containers (Function as a Service or "FaaS").

By designing services that explicitly require proprietary, provider-specific APIs, providers ensure customer lock-in at the core of the software architecture. One of the upcoming battles in the community will be exactly how to standardize this emerging architecture.

And, of course, Kubernetes can still be run on bare metal in a colocation facility, but those costs are getting less and less affordable. In an enlightening talk, Dmytro Dyachuk explained that unless cloud costs hit $100,000 per month, users may be better off staying in the cloud. Indeed, that is where a lot of applications end up. During an industry roundtable, Hong Tang, chief architect at Alibaba Cloud, posited that the "majority of computing will be in the public cloud, just like electricity is produced by big power plants".

The question, then, is how to split that market between the large providers. And, indeed, according to a CNCF survey of 550 conference attendees: "Amazon (EC2/ECS) continues to grow as the leading container deployment environment (69%)". CNCF also notes that on-premise deployment decreased for the first time in the five surveys it has run, to 51%, "but still remains a leading deployment". On premise, which is a colocation facility or data center, is the target for these cloud companies. By getting users to run Kubernetes, the industry's bet is that it makes applications and content more portable, thus easier to migrate into the proprietary cloud.

Next steps

As the Kubernetes tools and ecosystem stabilize, major challenges emerge: monitoring is a key issue as people realize it may be more difficult to diagnose problems in a distributed system compared to the previous monolithic model, which people at the conference often referred to as "legacy" or the "old OS paradigm". Scalability is another challenge: while Kubernetes can easily manage thousands of pods and containers, you still need to figure out how to organize all of them and make sure they can talk to each other in a meaningful way.

Security is a particularly sensitive issue as deployments struggle to isolate TLS certificates or application credentials from applications. Kubernetes makes big promises in that regard and it is true that isolating software in microservices can limit the scope of compromises. The solution emerging for this problem is the "service mesh" concept pioneered by Linkerd, which consists of deploying tools to coordinate, route, and monitor clusters of interconnected containers. Tools like Istio and Conduit are designed to apply cluster-wide policies to determine who can talk to what and how. Istio, for example, can progressively deploy containers across the cluster to send only a certain percentage of traffic to newly deployed code, which allows detection of regressions. There is also work being done to ensure standard end-to-end encryption and authentication of containers in the SPIFFE project, which is useful in environments with untrusted networks.

Another issue is that Kubernetes is just a set of nuts and bolts to manage containers: users get all the parts and it's not always clear what to do with them to get a platform matching their requirements. It will be interesting to see how the community moves forward in building higher-level abstractions on top of it. Several tools competing in that space were featured at the conference: OpenShift, Tectonic, Rancher, and Kasten, though there are many more out there.

The 1.9 Kubernetes release should be coming out in early 2018; it will stabilize the Workloads API that was introduced in 1.8 and add Windows containers (for those who like .NET) in beta. There will also be three KubeCon conferences in 2018 (in Copenhagen, Shanghai, and Seattle). Stay tuned for more articles from KubeCon Austin 2017 ...

This article first appeared in the Linux Weekly News.

Krebs on SecurityHappy 8th Birthday, KrebsOnSecurity!

Eight years ago today I set aside my Washington Post press badge and became an independent here at KrebsOnSecurity.com. What a wild ride it has been. Thank you all, Dear Readers, for sticking with me and for helping to build a terrific community.

This past year KrebsOnSecurity published nearly 160 stories, generating more than 11,000 reader comments. The pace of publications here slowed down in 2017, but then again I have been trying to focus on quality over quantity, and many of these stories took weeks or months to report and write.

As always, a big Thank You to readers who sent in tips and personal experiences that helped spark stories here. For anyone who wishes to get in touch, I can always be reached via this site’s contact form, or via email at krebsonsecurity @ gmail dot com.

Here are some other ways to reach out:

Twitter (open DMs)

Facebook

via Wickr at “krebswickr”

Protonmail: krebsonsecurity at protonmail dot com

Keybase

Below are the Top 10 most-read stories of 2017, as decided by views and sorted in reverse chronological order:

The Market for Stolen Account Credentials

Phishers are Upping Their Game: So Should You

Equifax Breach Fallout: Your Salary History

USPS’ Informed Delivery is a Stalker’s Dream

The Equifax Breach: What You Should Know

Got Robocalled? Don’t Get Mad, Get Busy

Why So Many Top Hackers Hail from Russia

Post-FCC Privacy Rules: Should You VPN?

If Your iPhone is Stolen, These Guys May Try to iPhish You

Who is Anna-Senpai, the Mirai Worm Author?

Worse Than FailureBest of…: 2017: The Official Software

This personal tale from Snoofle has all of my favorite ingredients for a WTF: legacy hardware, creative solutions, and incompetent management. We'll be running one more "Best Of…" on New Years Day, and then back to our regularly scheduled programming… mostly--Remy

At the very beginning of my career, I was a junior programmer on a team that developed software to control an electronics test station, used to diagnose problems with assorted components of jet fighters. Part of my job was the requisite grunt work of doing the build, which entailed a compile-script, and the very manual procedure of putting all the necessary stuff onto a boot-loader tape to be used to build the 24 inch distribution disk arrays.

An unspooled magnetic tape for data storage (source)

This procedure ran painfully slowly; it took about 11 hours to dump a little more than 2 MB from the tape onto the target disk, and nobody could tell me why. All they knew was that the official software had to be used to load the bootstrap routine, and then the file dumps.

After killing 11-hour days with the machine for several months, I had had it; I didn't get my MS to babysit some machine. I tracked down the source to the boot loader software, learned the assembly language in which it was written, and slogged through it to find the problem.

The cause was that it was checking for 13 devices that could theoretically be hooked up to the system, only one of which could actually be plugged in at any given time. The remaining checks simply timed out. Compounding that was the code that copied the files from the tape to the disk. It was your basic poorly written file copy routine that we all learn not to do in CS-102:

    // pseudo code
    for each byte in the file
        read next byte from tape
        write one byte to disk
        flush

Naturally, this made for lots of unnecessary platter-rotation; even at over 3,000 RPM, it took many hours to copy a couple MB from tape to the disk.

I took a copy of the source, changed the device scanning routine to always scan for the device we used first, and skip the others, and do a much more efficient full-buffer-at-a-time data write. This shortened the procedure to a very tolerable few minutes. The correctness was verified by building one disk using each boot loader and comparing them, bit by bit.
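
For illustration, here is a rough Python sketch of the same buffering idea (not the original assembly fix; the file names and the 64 KiB chunk size are invented for the example):

    # Illustration only: the naive loop flushes after every byte, while the
    # chunked version moves a whole buffer per write call.
    import shutil

    def copy_byte_at_a_time(src, dst):
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while True:
                b = fin.read(1)
                if not b:
                    break
                fout.write(b)
                fout.flush()            # force every byte out, as the old loader did

    def copy_chunked(src, dst, chunk=64 * 1024):
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            shutil.copyfileobj(fin, fout, length=chunk)

    # e.g. copy_chunked("bootstrap_tape.img", "distribution_disk.img")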

Officially, I was not allowed to use this tape because it wasn't sanctioned software, but my boss looked the other way because it saved a lot of time.

This worked without a hitch for two years, until my boss left the company and another guy was put in charge of my team.

The first thing he did was confiscate and delete my version of the software, insisting that we use only the official version. At that time, our first kid was ready to arrive, and I really didn't want to stay several hours late twice a week for no good reason. Given the choice of helping take care of my wife/kid or babysitting an artificially slow process, I chose to change jobs.

That manager forced the next guy to use the official software for the next seven years, until the company went out of business.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Krebs on Security4 Years After Target, the Little Guy is the Target

Dec. 18 marked the fourth anniversary of this site breaking the news about a breach at Target involving some 40 million customer credit and debit cards. It has been fascinating in the years since that epic intrusion to see how organized cyber thieves have shifted from targeting big box retailers to hacking a broad swath of small to mid-sized merchants.

In many ways, not much has changed: The biggest underground shops that sell stolen cards still index most of their cards by ZIP code. Only, the ZIP code corresponds not to the legitimate cardholder’s billing address but to the address of the hacked store at which the card in question was physically swiped (the reason for this is that buyers of these cards tend to prefer cards used by people who live in their geographic area, as the subsequent fraudulent use of those cards tends to set off fewer alarm bells at the issuing bank).

Last week I was researching a story published here this week on how a steep increase in transaction fees associated with Bitcoin is causing many carding shops to recommend alternate virtual currencies like Litecoin. And I noticed that popular carding store Joker’s Stash had just posted a new batch of cards dubbed “Dynamittte,” which boasted some 7 million cards advertised as “100 percent” valid — meaning the cards were so fresh that even the major credit card issuers probably didn’t yet know which retail or restaurant breach produced this particular batch.

An advertisement for a large new batch of stolen credit card accounts for sale at the Joker’s Stash Dark Web market.

Translation: These stolen cards were far more likely to still be active and useable after fraudsters encode the account numbers onto fake plastic and use the counterfeits to go shopping in big box stores.

I pinged a couple of sources who track when huge new batches of stolen cards hit the market, and both said the test cards they’d purchased from the Joker’s Stash Dynamittte batch mapped back to customers who all had one thing in common: They’d all recently eaten at a Jason’s Deli location.

Jason’s Deli is a fast casual restaurant chain based in Beaumont, Texas, with approximately 266 locations in 28 states. Seeking additional evidence as to the source of the breach, I turned to the Jason’s Deli Web site and scraped the ZIP codes for their various stores across the country. Then I began comparing those ZIPs with the ZIPs tied to this new Dynamittte batch of cards at Joker’s Stash.

Checking my work were the folks at Mindwise.io, a threat intelligence startup in California that monitors Dark Web marketplaces and tries to extract useful information from them. Mindwise found a nearly 100 percent overlap between the ZIP codes on the “Blasttt-US” unit of the Dynamittte cards for sale and the ZIP codes for Jason’s Deli locations.
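
That overlap check is, at heart, simple set arithmetic. The following is a rough Python sketch of the kind of comparison involved (not the actual tooling used by the author or Mindwise; the input files are hypothetical, one ZIP code per line):

    # Rough sketch: what share of the ZIP codes advertised with a card batch
    # also appear among a merchant's store ZIP codes. Input file names are
    # hypothetical.
    def load_zips(path):
        with open(path) as fh:
            return {line.strip() for line in fh if line.strip()}

    store_zips = load_zips("jasons_deli_store_zips.txt")
    batch_zips = load_zips("dynamittte_batch_zips.txt")

    overlap = batch_zips & store_zips
    print(f"{len(overlap) / len(batch_zips):.1%} of batch ZIPs match store locations")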

Reached for comment, Jason’s Deli released the following statement:

“On Friday, Dec. 22, 2017, our company was notified by payment processors – the organizations that manage the electronic connections between Jason’s Deli locations and payment card issuers – that MasterCard security personnel had informed it that a large quantity of payment card information had appeared for sale on the ‘dark web,’ and that an analysis of the data indicated that at least a portion of the data may have come from various Jason’s Deli locations.”

“Jason’s Deli’s management immediately activated our response plan, including engagement of a leading threat response team, involvement of other forensic experts, and cooperation with law enforcement. Among the questions that investigators are working to determine is whether in fact a breach took place, and if so, to determine its scope, the method employed, and whether there is any continuing breach or vulnerability.”

“The investigation is in its early stages and, as is typical in such situations, we expect it will take some time to determine exactly what happened. Jason’s Deli will provide as much information as possible as the inquiry progresses, bearing in mind that security and law enforcement considerations may limit the amount of detail we can provide.”

It’s important to note that the apparent breach at Jason’s Deli almost certainly does not correspond to 7 million cards; typically, carding shop owners will mix cards stolen from multiple breaches into one much larger batch (Dynamittte), and often further subdivide the cards by region (US vs. European cards).

As run-of-the-mill as these card breaches have become, it’s still remarkable even in smaller batches of cards like those apparently stolen from Jason’s Deli customers just how many financial institutions are impacted with each breach.

Banks impacted by the apparent breach at Jason’s Deli, sorted by Bank ID Number (BIN) — i.e. the issuer identified by the first six digits in the card number.

Mindwise said it was comfortable concluding that at least 170,000 of the cards put up for sale this past week on Joker’s Stash map back to Jason’s Deli locations. That may seem like a drop in the bucket compared to the 40 million cards that thieves hauled away from Target four years ago, but the cards stolen from Jason’s Deli customers were issued by more than 250 banks and credit unions, most of which will adopt differing strategies on how to manage fraud on those cards.

In other words, by moving down the food chain to smaller but far more plentiful and probably less secure merchants (either by choice or because the larger stores became a harder target) — and by mixing cards stolen from multiple breaches — the fraudsters have made it less likely that breaches at chain stores will be detected and remediated quickly, thereby prolonging the value and use of the stolen cards put up for sale in underground marketplaces.

All that said, it’s really not worth it to spend time worrying about where your card number may have been breached, since it’s almost always impossible to say for sure and because it’s common for the same card to be breached at multiple establishments during the same time period.

Just remember that although consumers are not liable for fraudulent charges, it may still fall to you the consumer to spot and report any suspicious charges. So keep a close eye on your statements, and consider signing up for text message notifications of new charges if your card issuer offers this service. Most of these services also can be set to alert you if you’re about to miss an upcoming payment, so they can also be handy for avoiding late fees and other costly charges.

Related reading (i.e., other breach stories confirmed with ZIP code analysis):

Breach at Sonic Drive-in May Have Impacted Millions of Credit, Debit Cards

Zip Codes Show Extent of Sally Beauty Breach

Data: Nearly All U.S. Home Depot Stores Hit

Cards Stolen in Target Breach Flood Underground Markets

CryptogramThe "Extended Random" Feature in the BSAFE Crypto Library

Matthew Green wrote a fascinating blog post about the NSA's efforts to increase the amount of random data exposed in the TLS protocol, and how it interacts with the NSA's backdoor into the Dual_EC_DRBG random number generator to weaken TLS.

Worse Than FailureBest of…: 2017: With the Router, In the Conference Room

This particular article originally ran in two parts, giving us a surprise twist ending (the surprise being… well, just read it!) -- Remy

One of the most important aspects of software QA is establishing a good working relationship with developers. If you want to get them to take your bug reports seriously, you have to approach them with the right attitude. If your bugs imply that their work is shoddy, they are likely to fight back on anything you submit. If you continuously submit trivial “bugs”, they will probably be returned right away with a “not an issue” or “works as designed” status. If you treat any bug like it’s a critical showstopper, they will think you’re crying wolf and not immediately jump on issues that actually are critical.

Then there’s people like Mr. Green, a former coworker of submitter Darren A., that give QA a bad name. The Mr. Greens of the QA world are so incompetent that their stupidity can cause project delays, rack up thousands of dollars in support costs, and cause a crapstorm between managers. Mr. Green once ran afoul of Darren’s subordinate Cathy, lead developer on the project Mr. Green was testing.

A shot from the film Clue, where Mrs. White holds a gun in front of Col. Mustard

Cathy was en route to the United States from London for a customer visit when her phone exploded with voicemail notifications immediately upon disabling airplane mode. There were messages from Darren, Mr. Green, and anyone else remotely involved with the project. It seemed there was a crippling issue with the latest build that was preventing any further testing during an already tight timeline.

Instead of trying to determine the cause, Mr. Green just told everyone “Cathy must have checked something in without telling us.” The situation was dire enough that Cathy, lacking the ability to remotely debug anything, had to immediately return to London. Mr. Green submitted a critical bug report and waited for her to cross the Atlantic.

What happened next is perfectly preserved in the following actual bug report from this incident. Some developers are known for their rude and/or snarky responses to bug reports that offend them. What Cathy did here takes that above and beyond to a legendary level.

====
Raised:         14/May/2015
Time:           09:27
Priority:       Critical
Impact:         Severe
Raised By:      Mr. Green

Description
===========
No aspect of GODZILLA functions at present. All machines fail to connect with the server and we are unable to complete any further testing today.
All screens just give a funny message.
Loss of functionality severely impacts our testing timescales and we must now escalate to senior management to get a resolution.

15/May/2015 22:38
        User:   Cathy Scarlett
        Updated: Status
        New Value: Resolved - User Error
        Updated: Comment
        New Value:
                Thank you for this Mr. Green. I loved the fact that the entire SMT ordered me back to head office to fix
                this - 28 separate messages on my voicemail while I was waiting for my baggage.
                I was of course supposed to be fixing an issue our US customer has suffered for over a year but I
                appreciated having to turn around after I'd landed in New Jersey and jump back on the first return
                flight to Heathrow.

                Do you remember when you set up the Test room for GODZILLA Mr. Green?

                Do you remember hanging the WIFI router on a piece of string from the window handle because the
                cable wasn't long enough?

                Do you remember me telling you not to do this as it was likely to fall?

                Do you remember telling me that you sorted this out and got Networks to setup a proper WIFI router
                for all the test laptops?

                I remember this Mr. Green and I'm sure you'll remember when I show you the emails and messages.

                I walked into the test room at 10 o'clock tonight (not having slept properly for nearly
                3 days) to find the WIFI router on the floor with the network cable broken.
                        ROOT CAUSE: The string snapped

                There was a spare cable next to it so I plugged this one in instead.

                Then, because this was the correct cable, I put the WIFI unit into the mounting that was provided
                for you by networks.

                As if by magic, all the laptops started working and those 'funny messages' have now disappeared.
                GODZILLA can now carry on testing. I'm struggling to understand why I needed to fly thousands of
                miles to fix this given that you set this room up in the first place. I'm struggling to understand
                why you told the SMT that this was a software error. I'm struggling to understand why you bypassed my
                manager who would have told you all of this. I'm closing this as 'user error' because there
                isn't a category for 'F**king moron'

                72 hours of overtime to cover an aborted trip from London to New York and back:
                        £3,600

                1 emergency return flight:
                        £1,500

                1 wasted return flight
                        £300

                1 very nice unused hotel room that has no refund:
                        £400

                1 emergency taxi fare from Heathrow:
                        £200

                16 man days of testing lost
                        £6,000

                Passing my undisguised contempt for you onto SMT:
                        Priceless

Mr. Green was obviously offended by her response. He escalated it to his manager, who demanded that Cathy be fired. This left Darren in a precarious position as Cathy’s manager. Sure, it was unprofessional. But it was like getting a call from your child’s school saying they punched a bully in the nose and they want your child to be disciplined for defending themselves. Darren decided to push back at the QA manager and insist that Mr. Green is the one who should be fired.

This story might have ended with Mr. Green and Cathy forced into an uneasy truce as the company management decided that they were both too valuable to lose. But that isn’t how this story ended. Or, perhaps Darren's push-back back-fired, and he's the one who ends up getting fired. That also isn't how the story ended. We invite our readers to speculate, extrapolate and fabricate in the comments. Later this morning, we’ll reveal the true killer outcome…

How It Really Ended

Darren took the case up to his boss, and then to their boss, up the management chain. No one was particularly happy with Cathy’s tone, and there was a great deal of tut-tutting and finger-wagging about professional conduct.

Ms. Scarlett, in Clue, delivering the line 'Flames, flames on the side of my face'

But she was right. It was Mr. Green who failed to follow instructions, it was Mr. Green who cost the company thousands, along with the customer relationship problems caused by Cathy’s sudden emergency trip back to the home office.

In what can only be considered a twist ending by the standards of this site, it was Mr. Green who was escorted out of the building by security.

The killer was Cathy, in the issue tracking system, with the snarky bug report.

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Don MartiPredictions for 2018

Bitcoin to the moooon: The futures market is starting up, so here comes a bunch more day trader action. More important, think about all the bucket shops (I even saw an "invest in Bitcoin without owning Bitcoin" ad on public transit in London), legit financial firms, Libertarian true believers, and coins lost forever because of human error. Central bankers had better keep an eye on Bitcoin, though. Last recession we saw that printing money doesn't work as well as it used to, because it ends up in the hands of rich people who, instead of priming economic pumps with it, just drive up the prices of assets. I would predict "Entire Round of Quantitative Easing Gets Invested in Bitcoin Without Creating a Single New Job" but I'm saving that one for 2019. Central banks will need to innovate. Federal Reserve car crushers? Relieve medical debt by letting the UK operate NHS clinics at their consulates in the USA, and we trade them US green cards for visas that allow US citizens to get treated there? And—this is a brilliant quality of Bitcoin that I recognized too late—there is no bad news that could credibly hurt the value of a purely speculative asset.

The lesson for regular people here is not so much what to do with Bitcoin, but remember to keep putting some well-considered time into actions that you predict have unlikely but large and favorable outcomes. Must remember to do more of this.

High-profile Bitcoin kidnapping in the USA ends in tragedy: Kidnappers underestimate the amount of Bitcoin actually available to change hands, ask for more than the victim's family (or fans? a crowdsourced kidnapping of a celebrity is now a possibility) can raise in time. Huge news but not big enough to slow down something that the finance scene has already committed to.

Tech industry reputation problems hit open source. California Internet douchebags talk like a positive social movement but act like East Coast vampire squid—and people are finally not so much letting them define the terms of the conversation. The real Internet economy is moving to a three-class system: plutocrats, well-paid brogrammers with Aeron chairs, free snacks and good health insurance, and everyone else in the algorithmically-managed precariat. So far, people are more concerned about the big social and surveillance marketing companies, but open source has some of the same issues. Just as it was widely considered silly for people to call Facebook users "the Facebook community" in 2017, some of the "community" talk about open source will be questioned in 2018. Who's working for who, and who's vulnerable to the risks of doing work that someone else extracts the value of? College athletes are ahead of the open source scene on this one.

Adfraud becomes a significant problem for end users: Powerful botnets in data centers drove the pivot to video. Now that video adfraud is well-known, more of the fraud hackers will move to attribution fraud. This ties in to adtech consolidation, too. Google is better at beating simple to midrange fraud than the rest of the Lumascape, so the steady progress towards a two-logo Lumascape means fewer opportunities for bots in data centers.

Attribution fraud is nastier than servers-talking-to-servers fraud, since it usually depends on having fraudulent and legit client software on the same system—legit to be used for a human purchase, fraudulent to "serve the ad" that takes credit for it. Unlike botnets that can run in data centers, attribution fraud comes home with you. Yeech. Browsers and privacy tools will need to level up from blocking relatively simple Lumascape trackers to blocking cleverer, more aggressive attribution fraud scripts.

Wannabe fascists keep control of the US Congress, because your Marketing budget: "Dark" social campaigns (both ads and fake "organic" activity) are still a thing. In the USA, voter suppression and gerrymandering have been cleverly enough done that social manipulation can still make a difference, and it will.

In the long run, dark social will get filtered out by habits, technology, norms, and regulation—like junk fax and email spam before it—but we don't have a "long run" between now and November 2018. The only people who could make an impact on dark social now are the legit advertisers who don't want their brands associated with this stuff. And right now the expectations to advertise on the major social sites are stronger than anybody's ability to get an edgy, controversial "let's not SPONSOR ACTUAL F-----G NAZIS" plan through the 2018 marketing budget process.

Yes, the idea of not spending marketing money on supporting nationalist extremist forums is new and different now. What a year.

These Publishers Bought Millions Of Website Visits They Later Found Out Were Fraudulent

No boundaries for user identities: Web trackers exploit browser login managers

Best of 2017 #8: The World's Most Expensive Clown Show

My Internet Mea Culpa

2017 Was the Year I Learned About My White Privilege

With the people, not just of the people

When Will Facebook Take Hate Seriously?

Using Headless Mode in Firefox – Mozilla Hacks : the Web developer blog

Why Chuck E. Cheese’s Has a Corporate Policy About Destroying Its Mascot’s Head

Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda


CryptogramPost-Quantum Algorithms

NIST has organized a competition for public-key algorithms secure against a quantum computer. It recently published all of its Round 1 submissions. (Details of the NIST efforts are here. A timeline for the new algorithms is here.)

Planet Linux AustraliaCraige McWhirter: First Look at Snaps

I've belatedly come to take a close-up look at Ubuntu Core (Snappy), Snaps and the Snappy package manager.

The first pass was to rebuild my rack of Raspberry Pi's from Debian armhf to Ubuntu Core for the Raspberry Pi.

Rack'o'Pi's

This proved to be the most graceful install I've ever had on any hardware, ever. No hyperbole: boot, authenticate, done. I repeated this for all six Pi's in such a short time frame that I was concerned I'd done something wrong. Your SSH keys are already installed, you can log in immediately and just get on with it.

Which is where snaps come into play.

Back on my laptop, I followed the tutorial Create Your First Snap which uses GNU Hello as an example snap build and finishes with a push to the snap store at snapcraft.io.

I then created a Launchpad Repo, related a snap package, told it to build for armhf and amd64 and before long, I could install this snap on both my laptop and the Pi's.

Overall this was a pretty impressive and graceful process.

Worse Than FailureBest of…: 2017: The New Manager

We all dread the day we end up getting dragged, kicking and screaming, out of our core competencies and forced to be a manager. This is one of those stories. -- Remy

Error Message Example vbs

She'd resisted the call for years. As a senior developer, Makoto knew how the story ended: one day, she'd be drafted into the ranks of the manager, forswearing her true love webdev. She knew she'd eventually succumb, but she'd expected to hold out for a few years before she had to decide if she were willing to change jobs to avoid management.

But when her boss was sacked unexpectedly, mere weeks after the most senior dev quit, she looked around and realized she was holding the short straw. She was the most senior. Even if she didn't put in for the job, she'd be drafted into acting as manager while they filled the position.

This is the story of her first day on the job.

Makoto spent the weekend pulling together a document for their external contractors, who'd been plaguing the old boss with questions night and day— in Spanish, no less. Makoto made sure to document as clearly as she could, but the docs had to be in English; she'd taken Japanese in high school for an easy A. She sent it over first thing Monday morning, hoping to have bought herself a couple of days to wrap up her own projects before the deluge began in earnest.

It seemed at first to be working, but perhaps it just took time for them to translate the change announcement for the team. Just before noon, she received an instant message.

Well, I can just point them to the right page and go to lunch anyway, she thought, bracing herself.

Emilio: I am having error in application.
Makoto: What error are you having?

A minute passed, then another. She was tempted to go to lunch, but the message client kept taunting her, assuring her that Emilio was typing. Surely his question was just long and complicated. She should give him the benefit of the doubt, right?

Emilio: error i am having is: File path is too long

Makoto winced. Oh, that bug ... She'd been trying to get rid of the dependencies with the long path names for ages, but for the moment, you had to install at the root of C in order to avoid hitting the Windows character limits.

But I documented that. In bold. In three places!

Makoto: Did you clone the repository to a folder in the root of a drive? As noted in the documentation there are paths contained within that will exceed the windows maximum path length otherwise
Emilio: No i cloned it to C:\Program Files\Intelligent Communications Inc\Clients\Anonymized Company Name\Padding for length\

Makoto's head hit the desk. She didn't even look up as her fingers flew across the keys. I'll bet he didn't turn on nuget package restore, she thought, or configure IIS correctly.

Makoto: please clone the repository as indicated in the provided documentation, Additionally take careful note of the documented steps required to build the Visual Studio Solution for the first time, as the solution will not build successfully otherwise
Emilio: Yes.

Whatever that means. Makoto sighed. Whatever, I'm out, lunchtime.

Two hours later she was back at her desk, belly full, working away happily at her next feature, when the message bar blinked again.

Dammit!

Emilio: I am having error building application.
Makoto: Have you followed the documentation provided to you? Have you made sure to follow the "first time build" section?
Emilio: yes.
Makoto: And has that resolved your issue?
Emilio: Yes. I am having error building application
Makoto: And what error are you having?
Emilio: Yes. I am having error building application.

"Oh piss off," she said aloud, safe in the knowledge that he was located thousands of miles from her office and thus could not hear her.

"That bad?" asked her next-door neighbor, Mike, with a sympathetic smile.

"He'll figure it out, or he won't," she replied grimly. "I can't hold his hand through every little step. When he figures out his question, I'll be happy to answer him."

And, a few minutes later, it seemed he did figure it out:

Emilio: I am having error with namespaces relating to the nuget package. I have not yet performed nuget package restore

The sound of repeated thumps sent Mike scurrying back across the little hallway into Makoto's cube. He took one look at her screen, winced, and went to inform the rest of the team that they'd be taking Makoto out for a beer later to "celebrate her first day as acting manager." That cheered her enough to answer, at least.

Makoto: Please perform the steps indicated in the documentation for first time builds of the solution in order to resolve your error building the application.
Emilio: i will attempt this fix.

Ten minutes passed: just long enough for her to get back to work, but not so long she'd gotten back into flow before her IM lit up again.

Emilio: I am no longer having error build application.

"Halle-frickin-lujah", she muttered, closing the chat window and promptly resolving to forget all about Emilio ... for now.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!


CryptogramAcoustical Attacks against Hard Drives

Interesting destructive attack: "Acoustic Denial of Service Attacks on HDDs":

Abstract: Among storage components, hard disk drives (HDDs) have become the most commonly-used type of non-volatile storage due to their recent technological advances, including, enhanced energy efficacy and significantly-improved areal density. Such advances in HDDs have made them an inevitable part of numerous computing systems, including, personal computers, closed-circuit television (CCTV) systems, medical bedside monitors, and automated teller machines (ATMs). Despite the widespread use of HDDs and their critical role in real-world systems, there exist only a few research studies on the security of HDDs. In particular, prior research studies have discussed how HDDs can potentially leak critical private information through acoustic or electromagnetic emanations. Borrowing theoretical principles from acoustics and mechanics, we propose a novel denial-of-service (DoS) attack against HDDs that exploits a physical phenomenon, known as acoustic resonance. We perform a comprehensive examination of physical characteristics of several HDDs and create acoustic signals that cause significant vibrations in HDDs internal components. We demonstrate that such vibrations can negatively influence the performance of HDDs embedded in real-world systems. We show the feasibility of the proposed attack in two real-world case studies, namely, personal computers and CCTVs.

Krebs on SecuritySkyrocketing Bitcoin Fees Hit Carders in Wallet

Critics of unregulated virtual currencies like Bitcoin have long argued that the core utility of these payment systems lies in facilitating illicit commerce, such as buying drugs or stolen credit cards and identities. But recent spikes in the price of Bitcoin — and the fees associated with moving funds into and out of it — have conspired to make Bitcoin a less useful and desirable payment method for many crooks engaged in these activities.

Bitcoin’s creator(s) envisioned a currency that could far more quickly and cheaply facilitate payments, with tiny transaction fees compared to more established and regulated forms of payment (such as credit cards). And indeed, until the beginning of 2017 those fees were well below $1, frequently less than 10 cents per transaction.

But as the price of Bitcoin has soared over the past few months to more than $15,000 per coin, so have the Bitcoin fees per transaction. This has made Bitcoin far less attractive for conducting small-dollar transactions (for more on this shift, see this Dec. 19 story from Ars Technica).

As a result, several major underground markets that traffic in stolen digital goods are now urging customers to deposit funds in alternative virtual currencies, such as Litecoin. Those who continue to pay for these commodities in Bitcoin not only face far higher fees, but also are held to higher minimum deposit amounts.

“Due to the drastic increase in the Bitcoin price, we faced some difficulties,” reads the welcome message for customers after they log in to Carder’s Paradise, a Dark Web marketplace that KrebsOnSecurity featured in a story last week.

“The problem is that we send all your deposited funds to our suppliers which attracts an additional Bitcoin transaction fee (the same fee you pay when you make a deposit),” Carder’s Paradise explains. “Sometimes we have to pay as much as 5$ from every 1$ you deposited.”

The shop continues:

“We have to take additionally a ‘Deposit fee’ from all users who deposit in Bitcoins. This is the amount we spent on transferring your funds to our suppliers. To compensate your costs, we are going to reduce our prices, including credit cards for all users and offer you the better bitcoin exchange rate.”

“The amount of the Deposit Fee depends on the load on the Bitcoin network. However, it stays the same regardless of the amount deposited. Deposits of 10$ and 1000$ attract the same deposit fee.”

“If the Bitcoin price continues increasing, this business is not going to be profitable for us anymore because all our revenue is going to be spent on the Bitcoin fees. We are no longer in possession of additional funds to improve the store.”

“We urge you to start using Litecoin as much as possible. Litecoin is a very fast and cheap way of depositing funds into the store. We are not going to charge any additional fees if you deposit Litecoins.”

On Carder’s Paradise, the current minimum deposit amount is 0.0066 BTCs, or approximately USD $100. The deposit fee for each transaction is $15.14. That means that anyone who deposits just the minimum amount into this shop is losing more than 15 percent of their deposit in transaction fees.

Incredibly, the administrators of Carder’s Paradise apparently received so much pushback from crooks using their service that they decided to lower the price of stolen credit cards to make potential buyers feel better about higher transaction fees.

“Our team made a decision to adjust the previous announcement and provide a fair solution for everyone by reducing the credit cards [sic] prices,” the message concludes.

Mainstream merchants that accept credit card payments have long griped about the high cost of transaction fees, which average $2.50 to $3.00 on a $100 charge. What’s fascinating about the spike in Bitcoin transaction fees is that crooks could end up paying five times as much in fees just to purchase the same amount in stolen credit card accounts!
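A quick back-of-the-envelope check of those figures, as a tiny Python sketch (the variable names are mine; the numbers are the ones quoted above):

min_deposit_usd = 100.00     # approximate value of the 0.0066 BTC minimum deposit
deposit_fee_usd = 15.14      # flat "Deposit fee" charged per transaction
card_fee_usd = (2.50, 3.00)  # typical merchant card fees on a $100 charge

print(deposit_fee_usd / min_deposit_usd)   # 0.1514 -> just over 15 percent of the minimum deposit
print(deposit_fee_usd / card_fee_usd[1])   # ~5.0 -> roughly five times the high end of the card fee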

Worse Than FailureBest of…: 2017: The Second Factor

As this is a holiday week, per our usual tradition, we're revisiting some of the most popular articles from the year. We start with The Second Factor, a tale of security gone wrong. -- Remy

Famed placeholder company Initech is named for its hometown, Initown. Initech recruits heavily from their hometown school, the University of Initown. UoI, like most universities, is a hidebound and bureaucratic institution, but in Initown, that’s creating a problem. Initown has recently seen a minor boom in the tech sector, and now the School of Sciences is setting IT policy for the entire university.

Derek manages the Business School’s IT support team, and thus his days are spent hand-holding MBA students through how to copy files over to a thumb drive, and babysitting professors who want to fax an email to the department chair. He’s allowed to hire student workers, but cannot fire them. He’s allowed to purchase consumables like paper and toner, but has to beg permission for capital assets like mice and keyboards. He can set direction and provide input to software purchase decisions, but he also has to continue to support the DOS version of WordPerfect because one professor writes all their papers using it.

A YubiKey in its holder, along with an instruction card describing its use.

One day, to his surprise, he received a notification from the Technology Council, the administrative board that set IT policy across the entire University. “We now support Two-Factor Authentication”. Derek, being both technologically savvy and security conscious, was one of the first people to sign up, and he pulled his entire staff along with him. It made sense: they were all young, technologically competent, and had smartphones that could run the school’s 2FA app. He encouraged their other customers to join them, but given that at least three professors didn’t use email and instead had the department secretary print out emails, there were some battles that simply weren’t worth fighting.

Three months went by, which is an eyeblink in University Time™. There was no further direction from the Technology Council. Within the Business School, very little happened with 2FA. A few faculty members, especially the ones fresh from the private sector, signed up. Very few tenured professors did.

And then Derek received this email:

To: AllITSManagers
From: ITS-Adminsd@initown.edu
Subject: Two-Factor Authentication
Effective two weeks from today, we will be requiring 2FA to be enabled on
all* accounts on the network, including student accounts. Please see attached, and communicate the changes to your customers.

Rolling out a change of this scale in two weeks would be a daunting task in any environment. Trying to get University faculty to change anything in a two-week period was doomed to fail. Adding students to the mix promised to be a disaster. Derek read the attached “Transition Plan” document, hoping to see a cunning plan to manage the rollout. It was 15 pages of “Two-Factor Authentication (2FA) is more secure, and is an industry best practice,” and “The University President wants to see this change happen”.

Derek compiled a list of all of his concerns- it was a long list- and raised it to his boss. His boss shrugged: “Those are the orders”. Derek escalated up through the business school administration, and after two days of frantic emails and, “Has anyone actually thought this through?” Derek was promised 5 minutes at the end of the next Technology Council meeting… which was one week before the deadline.

The Technology Council met in one of the administrative conference rooms in a recently constructed building named after a rich alumnus who paid for the building. The room was shiny and packed with teleconferencing equipment that had never properly been configured, and thus was useless. It also had a top-of-the-line SmartBoard display, which was in the same unusable state.

When Derek was finally acknowledged by the council, he started with his questions. “So, I’ve read through the Transition Plan document,” he said, “but I don’t see anything about how we’re going to on-board new customers to this process. How is everyone going to use it?”

“They’ll just use the smartphone app,” the Chair said. “We’re making things more secure by using two-factor.”

“Right, but over in the Business School, we’ve got a lot of faculty that don’t have smartphones.”

Administrator #2, seated to the Chair’s left, chimed in, “They can just receive a text. This is making things more secure.”

“Okay,” Derek said, “but we’ve still got faculty without cellphones. Or even desk phones. Or even desks for that matter. Adjunct professors don’t get offices, but they still need their email.”

There was a beat of silence as the Chair and Administrators considered this. Administrator #1 triumphantly pounded the conference table and declared, “They can use a hardware token! This will make our network more secure!”

Administrator #2 winced. “Ah… this project doesn’t have a budget for hardware tokens. It’s a capital expense, you see…”

“Well,” the Chair said, “it can come out of their department’s budget. That seems fair, and it will make our network more secure.”

“And you expect those orders to go through in one week?” Derek asked.

“You had two weeks to prepare,” Administrator #1 scolded.

“And what about our faculty abroad? A lot of them don’t have a stable address, and I’m not going to be able to guarantee that they get their token within our timeline. Look, I agree, 2FA is definitely great for security- I’m a big advocate for our customers, but you can’t just say, let’s do this without actually having a plan in place! ‘It’s more secure’ isn’t a plan!”

“Well,” the Chair said, harrumphing their displeasure at Derek’s outburst. “That’s well and good, but you should have raised these objections sooner.”

“I’m raising these objections before the public announcement,” Derek said. “I only just found out about this last week.”

“Ah, yes, you see, about that… we made the public announcement right before this meeting.”

“You what?”

“Yes. We sent a broadcast email to all faculty, staff and students, announcing the new mandated 2FA, as well as a link to activate 2FA on their account. They just have to click the link, and 2FA will be enabled on their account.”

“Even if they have no way to receive the token?” Derek asked.

“Well, it does ask them if they have a way to receive a token…”

By the time Derek got back to the helpdesk, the inbox was swamped with messages demanding to know what was going on and what this change meant, plus half a dozen messages from professors who saw “mandatory” and “click this link” and followed instructions, leaving them unable to access their accounts because they didn’t have any way to get their 2FA token.

Over the next few days, the Technology Council tried to round up a circular firing squad to blame someone for the botched roll-out. For a beat, it looked like they were going to put Derek in the center of their sights, but it wasn’t just the Business School that saw a disaster with the 2FA rollout- every school in the university had similar issues, including the School of Sciences, which had been pushing the change in the first place.

In the end, the only roll-back strategy they had was to disable 2FA organization wide. Even the accounts which had 2FA previously had it disabled. Over the following months, the Technology Council changed its tone on 2FA from, “it makes our network more secure” to, “it just doesn’t work here.”

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!


Cryptogram"Santa Claus is Coming to Town" Parody

Worse Than FailureDeveloper Carols (Merry Christmas)

An illuminated Christmas tree in Madrid

It’s Christmas, and thus technically too late to actually go caroling. Like any good project, we’ve delivered close enough to the deadline to claim success, but late enough to actually be useless for this year!

Still, enjoy some holiday carols specifically written for our IT employees. Feel free to annoy your friends and family for the rest of the day.

Push to Prod (to the tune of Joy To the World)

Joy to the world,
We’ve pushed to prod,
Let all,
record complaints,
“This isn’t what we asked you for,”
“Who signed off on these requirements,”
“Rework it,” PMs sing,
“Rework it,” PMs sing,
“Work over break,” the PMs sing.


Backups (to the tune of Deck the Halls)

Back the system up to tape drives,
Fa la la la la la la la la,
TAR will make the tape archives,
Fa la la la la la la la la,
Recov'ry don't need no testing,
Fa la la la la la la la la la,
Pray it works upon requesting,
Fa la la la la la la la la


Ode to CSS (to the tune of Silent Night)

Vertical height,
Align to the right,
CSS,
Aid my fight,
Round the corners,
Flattened design,
!important,
Please work this time,
It won't work in IE,
Never in goddamn IE


The Twelve Days of The Holiday Shift (to the tune of The Twelve Days of Christmas)

On my nth day of helpdesk, the ticket sent to me:
12 write arms leaping
11 Trojans dancing
10 bosses griping
9 fans not humming
8 RAIDs not striping
7 WANs a-failing
6 cables fraying
5 broken things
4 calling users
3 missing pens
2 turtled drives
and a toner cartridge that is empty.

(Contributed by Charles Robinson)


Here Comes a Crash Bug (to the tune of Here Comes Santa Claus)

Here comes a crash bug,
Here comes a crash bug,
Find th’ culprit with git blame,
Oh it was my fault,
It’s always my fault,
Patch and push again.

Issues raisin‘, users ‘plainin’,
Builds are failin’ tonight,
So hang your head and say your prayers,
For a crash bug comes tonight.


WCry the Malware (to the tune of Frosty the Snowman)

WCry the Malware, was a nasty ugly worm,
With a cryptolock and a bitcoin bribe,
Spread over SMB

WCry the Malware, is a Korean hack they say,
But the NSA covered up the vuln,
To use on us one day

There must have been some magic in that old kill-switch they found,
For when they register’d a domain,
The hack gained no more ground

WCry the Malware, was as alive as he could be,
Till Microsoft released a patch,
To fix up SMB

(Suggested by Mark Bowytz)


Oh Come All Ye Web Devs (to the tune of Oh Come All Ye Faithful)

Oh come, all ye web devs,
Joyful and triumphant,
Oh come ye to witness,
JavaScript's heir:

Come behold TypeScript,
It’s just JavaScript,
But we can conceal that,
But we can conceal that,
But we can conceal that,
With our toolchain


Thanks to Jane Bailey for help with scansion. Where it's right, thank her, where it's wrong, blame me.

As per usual, most of this week will be a retrospective of our “Best Of 2017”, but keep your eyes open- there will be a bit of a special “holiday treat” article to close out the year. I’m excited about it.

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.


TEDWhat’s the definition of feminism? 12 talks that explain it to you

Image courtesy Backbone Campaign. License CC BY 2.0

Earlier this month, Merriam-Webster announced that 2017’s word of the year is feminism. Searches for the word on the dictionary website spiked throughout the year, beginning in January around the Women’s March, again after Kellyanne Conway said in an interview that she didn’t consider herself a feminist, and during some of feminism’s many pop culture moments this year. And the steady stream of #MeToo news stories have kept the word active in search over the past few weeks and months.

It’s not surprising, really. Think of it as one of the outcomes of the current moral crisis in the US and around the world — along with a growing awareness of the scope of the global epidemic of sexual harassment and acts of violence against women, the continuing challenges of underrepresentation in all decision-making positions and the misrepresentation of women and girls in media. I believe this moment presents an opportunity to enlist more women and men to step forward as feminists, to join the drive toward a world in which women feel safe at work and home and enjoy freedom to pursue their dreams and their potential for themselves, their families, communities and countries.

Still, I hear every day the question: “What does feminism actually mean?” According to Merriam-Webster, it’s “the theory of the political, economic, and social equality of the sexes” and “organized activity on behalf of women’s rights and interests.”

That’s a good elevator pitch, but it could use more perspective, more context. Over my seven years as curator and host of the TEDWomen conference, we’ve seen more than a few TED Talks take up the subject of feminism from many angles. Here are a dozen, chosen from the more than 150 TEDWomen talks published on TED.com and the TED Archive YouTube channels so far — including a bonus talk from the TEDx archive that kicked off a global conversation.

Looking ahead to 2018, I hope these talks can inform how we channel the new awareness and activism of 2017 into strategic decisions for women’s rights. Could we eliminate the gender gap in leadership? Could we eliminate economic, racial, cultural and gender inequities? Imagine these as goals for a newly energized and focused global feminist community.

1. Courtney Martin: Reinventing feminism

What does it mean to be a millennial and a feminist in the 21st century? In her first TEDWomen talk, Courtney Martin admits that when she was younger, she didn’t claim the feminist label because it reminded her too much of her hippie mom and outdated notions of what it means to be a feminist. But in college, she changed her mind. Her feminism, she says, looks and sounds different from her mom’s version, but it’s not all that different underneath: feminist activism is on a continuum. While her mother talks about the patriarchy, Courtney talks about intersectionality and the ways that many other issues, such as racism and immigration, are part of the feminist equation. Blogging at Feministing.com, she says, is the 21st-century version of consciousness-raising.

2. Hanna Rosin: New data on the rise of women

Back in 2010 when we held the very first TEDWomen event in Washington, DC, one of our presenters was journalist Hanna Rosin. At the time, she was working on a book that came out in 2012 titled The End of Men. Her talk focused on a particular aspect of her research: how women were outpacing men in important aspects of American life, without even really trying. For instance, she found that for every two men who get a college degree, three women will do the same. Women, for the first time that year, became the majority of the American workforce. “The 200,000-year period in which men have been top dog,” she said, “is truly coming to an end, believe it or not.”

3. Kimberlé Crenshaw: The urgency of intersectionality

Now more than ever, it’s important to look boldly at the reality of race and gender bias — and understand how the two can combine to create even more harm. Kimberlé Crenshaw uses the term “intersectionality” to describe this phenomenon; as she says, if you’re standing in the path of multiple forms of exclusion, you’re likely to get hit by both. In this moving and informative talk, she calls on us to bear witness to the reality of intersectionality and speak up for everyone dealing with prejudice.

4. Sheryl Sandberg: Why we have too few women leaders

At the first TEDWomen in 2010, Facebook COO Sheryl Sandberg looked at why a smaller percentage of women than men reach the top of their professions — and offered three powerful pieces of advice to women aiming for the C-suite. Her talk was the genesis of a book you may have heard of: Lean In came out in 2013. In December of that year, we invited Sheryl to come back and talk about the revolution she sparked with Lean In. Onstage, Sheryl admitted to me that she was terrified to step onto the TED stage in 2010 — because she was going to talk, for the first time, about the lonely experience of being a woman in the top tiers of business. Millions of views (and a best-selling book) later, the Facebook COO talked about the reaction to her idea (watch video), and explored the ways that women still struggle with success.

5. Roxane Gay: Confessions of a bad feminist

Writer Roxane Gay says that calling herself a bad feminist started out as an inside joke and became “sort of a thing.” In her 2015 TEDWomen talk, she chronicles her own journey to becoming a feminist and cautions that we need to take into account all the differences — “different bodies, gender expressions, faiths, sexualities, class backgrounds, abilities, and so much more” — that affect us, at the same time we account for what we, as women, have in common. Sometimes she isn’t a perfect feminist — but as she puts it: “I would rather be a bad feminist than no feminist at all.”

6. Alaa Murabit: What my religion really says about women

Alaa Murabit champions women’s participation in peace processes and conflict mediation. As a young Muslim woman, she is proud of her faith. But when she was a teenager, she realized that her religion (like most others) was dominated by men, who controlled the messaging and the policies created in their likeness. “Until we can change the system entirely,” she says, “we can’t realistically expect to have full economic and political participation of women.” She talks about the work she did in Libya to change religious messaging and to provide an alternative narrative which promoted the rights of women there.

7. Madeleine Albright: Being a woman and a diplomat

Former US Secretary of State Madeleine Albright talks bluntly about being a powerful woman in politics and the great advantage she feels in being a woman diplomat — because, as she puts it, “women are a lot better at personal relationships.” She talks about why, as a feminist, she believes that “societies are better off when women are politically and economically empowered.” She says she really dedicated herself to that, both at the UN and then as secretary of state. Far from being a soft issue, she says, women’s issues are often the very hardest ones, dealing directly with life and death.

8. Halla Tómasdóttir: It’s time for women to run for office

In early 2016, Halla Tómasdóttir ran for president in Iceland and — surprising her entire nation (and herself) — she nearly won. Tómasdóttir believes that if you’re going to change things, you have to do it from the inside. Earlier in her career, she infused the world of finance with “feminine values,” which she says helped her survive the financial meltdown in Iceland. In her 2016 TEDWomen talk, she talks about her campaign and how she overcame media bias, changed the tone of the political debate and inspired the next generation of future women leaders along the way. “What we see, we can be,” she says. “It matters that women run.”

9. Sandi Toksvig: A political party for women’s equity

Women’s equality won’t just happen, says British comedian and activist Sandi Toksvig, not unless more women are put in positions of power. In a very funny, very smart TEDWomen talk (she is the host of QI after all), Toksvig tells the story of how she helped start a new political party in Britain, the Women’s Equality Party, with the express purpose of putting equality on the ballot. Now she hopes people — and especially women — around the world (US women, are you listening?) will copy her party’s model and mobilize for equality.

10. Chinaka Hodge: “What Will You Tell Your Daughters About 2016?”

Poet, playwright, filmmaker and educator Chinaka Hodge uses her own life and experiences as the backbone of wildly creative, powerful works. In this incredible poem delivered before the 2016 election — that is perhaps even more stirring today given everything that has passed in 2017 — she asks the tough questions about a year that none of us will forget.

11. Gretchen Carlson on being fierce

In this #MeToo moment, Gretchen Carlson, the author of Be Fierce, talks about what needs to happen next. “Breaking news,” she says, “the untold story about women and sexual harassment in the workplace is that women just want a safe, welcoming and harass-free environment. That’s it.”  Ninety-eight percent of United States corporations already have sexual harassment training policies. But clearly, that’s not working. We need to turn bystanders into allies, outlaw arbitration clauses, and create spaces where women feel empowered and confident to speak up when they are not respected.

Bonus: Chimamanda Ngozi Adichie: We should all be feminists

This TEDx talk started a worldwide conversation about feminism. In 2012 at TEDxEuston, writer Chimamanda Ngozi Adichie explains why everyone — men and women — should be feminists. She talks about how men and women go through life with different experiences that are gendered — and because of that, they often have trouble understanding how the other can’t see what seems so self-evident. It’s a point even more relevant in the wake of this year’s #MeToo movement. “That many men do not actively think about gender or notice gender is part of the problem of gender,” Ngozi Adichie says. “Gender matters. Men and women experience the world differently. Gender colors the way we experience the world. But we can change that.”

As I mentioned, these are just a handful of the amazing, inspiring, thoughtful and smart women and the many ideas worth spreading, especially in these times when hope and innovative ideas are so necessary.

Happy 2018. Let’s make it a good one for women and for all of us who proudly call ourselves feminists and stand ready to put ideas into action.

— Pat


Don MartiSalary puzzle

Short puzzle relevant to some diversity and inclusion threads that encourage people to share salary info. (I should tag this as "citation needed" because I don't remember where I heard it.)

Alice, Bob, Carlos, and Dave all want to know the average salary of the four, but none wants to reveal their individual salary. How can the four of them work together to determine the average? Answer below.


Answer

Alice generates a random number, adds it to her salary, and gives the sum to Bob.

Bob adds his salary and gives the sum to Carlos.

Carlos adds his salary and gives the sum to Dave.

Dave adds his salary and gives the sum to Alice.

Alice subtracts her original random number, divides by the number of participants, and announces the average. No participant had to share their real salary, but everyone now knows if they are paid above or below the average for the group.
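For readers who prefer code to prose, here is a minimal Python sketch of the same round-robin protocol. It is only an illustration: the function name, the mask range and the four salary figures are invented, and everything runs in a single process, whereas in practice each addition would happen on a different participant's machine.

import random

def average_salary_round_robin(salaries):
    # The first participant picks a large secret random mask and adds it
    # to her salary before passing the sum along.
    mask = random.randint(1, 10**9)
    running_sum = salaries[0] + mask

    # Each later participant only ever sees a masked running sum, so the
    # sum alone reveals nothing about any individual salary.
    for salary in salaries[1:]:
        running_sum += salary

    # The total returns to the first participant, who removes her mask
    # and announces the average to the group.
    return (running_sum - mask) / len(salaries)

# Hypothetical example with four made-up salaries:
print(average_salary_round_robin([52000, 61000, 58500, 70000]))  # 60375.0

The sketch also makes the protocol's limits easy to see: it assumes participants don't compare notes, since, for example, Bob and Dave together could recover Carlos's salary by subtracting the sums they each handled.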


Cory DoctorowReviving my Christmas daddy-daughter podcast, with Poesy!

For nearly every year since my daughter Poesy was old enough to sing, we’ve recorded a Christmas podcast; but we missed it in 2016, due to the same factors that made the podcast itself dormant for a couple years — my crazy busy schedule.


But this year, we’re back, with my off-key accompaniment to her excellent “Deck the Halls,” as well as some of her favorite slime recipes, and a promise that I’ll be taking up podcasting again in the new year, starting with a serialized reading of my Sturgeon-winning story The Man Who Sold the Moon.

Here’s hoping for a better 2018 than 2017 or 2016 proved to be: I take comfort in the idea that the bumpers are well and truly off, which is why so many improbably terrible things were able to happen and worsen in the past couple years — but with the bumpers off, improbably wonderful things are possible too. All the impossible dreams of 2014 or so are looking no more or less likely than any of the other weird stuff we’re living through now.


See you in the new year!

MP3, Podcast feed

Don MartiWhat we have, what we need

Stuff the Internet needs: home fiber connections, symmetrical, flat rate, on neutral terms.

Stuff the Internet is going nuts over: cryptocurrencies.

Big problem with building fiber to the home: capital.

Big problem with cryptocurrencies: stability.

Two problems, one solution? Hard to make any kind of currency useful without something stable, with evidence-based value, to tie its value to. Fiat currencies are tied to something of value? Yes, people have to pay taxes in them. Hard to raise capital for "dumb pipe" Internet service because it's just worth about the same thing, month after month. So what if we could combine the hotness and capital-attractiveness of cryptocurrencies with the stability and actual usefulness of fiber?


Sociological ImagesListen Up! Great Social Science Podcasts

Photo via Oli (Flickr CC)

Whether you’re taking a long flight, taking some time on the treadmill, or just taking a break over the holidays, ’tis the season to catch up on podcasts. Between long-running hits and some strong newcomers this year, there has never been a better time to dive into the world of social science podcasts. While we bring the sociological images, do your ears a favor and check these out.

Also, this list is far from comprehensive. If you have tips for podcasts I missed, drop a note in the comments!

New in 2017

If you’re new to sociology, or want a more “SOC 101” flavor, The Social Breakdown is perfect for you. Hosts Penn, Ellen, and Omar take a core sociological concept in each episode and break it down, offering great examples both old and new (and plenty of sass). Check out “Buddha Heads and Crosses” for a primer on cultural appropriation from Bourdieu to Notorious B.I.G.

Want to dive deeper? The Annex is at the cutting edge of sociology podcasting. Professors Joseph Cohen, Leslie Hinkson, and Gabriel Rossman banter about the news of the day and bring you interviews and commentary on big ideas in sociology. Check out the episode on Conspiracy Theories and Dover’s Greek Homosexuality for—I kid you not—a really entertaining look at research methods.

Favorite Shows Still Going Strong

In The Society Pages’ network, Office Hours brings you interviews with leading sociologists on new books and groundbreaking research. Check out their favorite episode of 2017: Lisa Wade on American Hookup!

Feeling wonky? The Scholars Strategy Network’s No Jargon podcast is a must-listen for the latest public policy talk…without jargon. Check out recent episodes on the political rumor mill and who college affirmative action policies really serve.

I was a latecomer to The Measure of Everyday Life this year, finding it from a tip on No Jargon, but I’m looking forward to catching up on their wide range of fascinating topics. So far, conversations with Kieran Healy on what we should do with nuance and the resurrection of typewriters have been wonderful listens.

And, of course, we can’t forget NPR’s Hidden Brain. Tucked away in their latest episode on fame is a deep dive into inconspicuous consumption and the new, subtle ways of wealth in America.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramFriday Squid Blogging: Gonatus Squid Eating a Dragonfish

There's a video:

Last July, Choy was on a ship off the shore of Monterey Bay, looking at the video footage transmitted by an ROV many feet below. A Gonatus squid was spotted sucking off the face of a "really huge dragonfish," she says. "It took a little while to figure out what's going on here, who's eating whom, how is this going to end?" (The squid won.)

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDFree report: Bright ideas in business from TEDWomen 2017

At a workshop at TEDWomen 2017, the Brightline Initiative helped attendees parse the topic, “Why great ideas fail and how to make sure they don’t.” Photo: Stacie McChesney/TED

The Brightline Initiative helps leaders from all types of organizations build bridges between ideas and results. So they felt strong thematic resonance with TEDWomen 2017, which took place in New Orleans from November 1-3, and the conference theme of “Bridges.” In listening to the 50+ speakers who shared ideas, Brightline noted many that felt especially helpful for anyone who wants to work more boldly, more efficiently or more collaboratively.

We’re pleased to share Brightline’s just-released report on business ideas from the talks of TEDWomen 2017. Give it a read to find out how thinking about language can help you shake off a rut, and why a better benchmark for success might just be your capacity to form meaningful partnerships.

Get the report here >>


CryptogramAmazon's Door Lock Is Amazon's Bid to Control Your Home

Interesting essay about Amazon's smart lock:

When you add Amazon Key to your door, something more sneaky also happens: Amazon takes over.

You can leave your keys at home and unlock your door with the Amazon Key app -- but it's really built for Amazon deliveries. To share online access with family and friends, I had to give them a special code to SMS (yes, text) to unlock the door. (Amazon offers other smartlocks that have physical keypads).

The Key-compatible locks are made by Yale and Kwikset, yet don't work with those brands' own apps. They also can't connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can't be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

Keeping tight control over Key might help Amazon guarantee security or a better experience. "Our focus with smart home is on making things simpler for customers ­-- things like providing easy control of connected devices with your voice using Alexa, simplifying tasks like reordering household goods and receiving packages," the Amazon spokeswoman said.

But Amazon is barely hiding its goal: It wants to be the operating system for your home. Amazon says Key will eventually work with dog walkers, maids and other service workers who bill through its marketplace. An Amazon home security service and grocery delivery from Whole Foods can't be far off.

This is happening all over. Everyone wants to control your life: Google, Apple, Amazon...everyone. It's what I've been calling the feudal Internet. I fear it's going to get a lot worse.

Worse Than FailureError'd: 'Tis the Season for Confidentiality

"For the non-German speaking people: it's highly confidential & highly restricted information that our canteen is closed between Christmas and New Year's Eve. Now, sue me for disclosing this," Stella writes.


Jeff C. writes, "Since when did Ingenico start installing card readers on the other side of the looking glass?!"


"Well, looks like someone let their intern into Production unsupervised," wrote Lincoln K.


"Clearly, this is a marketing tactic that's being perpetuated by the Keebler Elves in hopes that the lure of (theoretically) saving millions of dollars will make consumers want to buy the entire display," wrote Jared S.


George writes, "Glad to see I will be protected against all the latest nulls."


"Ok...it's a stretch, I guess that you could make a connection between the items," wrote Ian T.


[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!


TEDThe Big Idea: TED’s 4 step guide to the holiday season

More charmingly referred to as a garbage fire that just keeps burning, 2017 has been a tough, relentless year of tragedy and strife. As we approach the holiday season, it’s important to connect and reconnect with those you love and want in your life. So, in these last few weeks of the year, here are a few ways to focus on building and honoring the meaningful relationships in your life.

1. Do some emotional housekeeping

Before you get into the emotional trenches with anyone (or walk into a house full of people you don’t agree with), check in with yourself. How you engage with your inner world drives everything from your ability to lead and moderate your mood, to your quality of sleep. Be compassionate and understanding of where you are in your life, internally and externally.

Psychologist Guy Winch makes a compelling case to practice emotional hygiene — taking care of our emotions, our minds, with the same diligence we take care of our bodies.

“We sustain psychological injuries even more often than we do physical ones, injuries like failure or rejection or loneliness. And they can also get worse if we ignore them, and they can impact our lives in dramatic ways,” he says. “And yet, even though there are scientifically proven techniques we could use to treat these kinds of psychological injuries, we don’t. It doesn’t even occur to us that we should. ‘Oh, you’re feeling depressed? Just shake it off; it’s all in your head. Can you imagine saying that to somebody with a broken leg: ‘Oh, just walk it off; it’s all in your leg.’”

In his article, 7 ways to practice emotional first aid, Winch lays out useful ways to reboot and fortify your emotional health:

  1. Pay attention to emotional pain — recognize it when it happens and work to treat it before it feels all-encompassing. The body evolved the sensation of physical pain to alert us that something is wrong and we need to address it. The same is true for emotional pain. If a rejection, failure or bad mood is not getting better, it means you’ve sustained a psychological wound and you need to treat it. For example, loneliness can be devastatingly damaging to your psychological and physical health, so when you or your friend or loved one is feeling socially or emotionally isolated, you need to take action.
  2. Monitor and protect your self-esteem. When you feel like putting yourself down, take a moment to be compassionate to yourself. Self-esteem is like an emotional immune system that buffers you from emotional pain and strengthens your emotional resilience. As such, it is very important to monitor it and avoid putting yourself down, particularly when you are already hurting. One way to “heal” damaged self-esteem is to practice self-compassion. When you’re feeling critical of yourself, do the following exercise: imagine a dear friend is feeling bad about him or herself for similar reasons and write an email expressing compassion and support. Then read the email. Those are the messages you should be giving yourself.
  3. Find meaning in loss. Loss is a part of life, but it can scar us and keep us from moving forward if we don’t treat the emotional wounds it creates — and the holidays are normally a time when these wounds become sensitive or even reopen completely. If sufficient time has passed and you’re still struggling to move forward after a loss, you need to introduce a new way of thinking about it. Specifically, the most important thing you can do to ease your pain and recover is to find meaning in the loss and derive purpose from it. It might be hard, but think of what you might have gained from the loss (for instance, “I lost my spouse but I’ve become much closer to my kids”). Sometimes, being rejected by your friends and/or family also feels like loss. Consider how you might gain or help others gain a new appreciation for life, or imagine the changes you could make that will help you live a life more aligned with your values and purpose.
  4. Learn what treatments for emotional wounds work for you. Pay attention to yourself and learn how you, personally, deal with common emotional wounds. For instance, do you shrug them off, get really upset but recover quickly, get upset and recover slowly, squelch your feelings, or …? Use this analysis to help yourself understand which emotional first aid treatments work best for you in various situations (just as you would identify which of the many pain relievers on the shelves works best for you). The same goes for building emotional resilience. Try out various techniques and figure out which are easiest for you to implement and which tend to be most effective for you. But mostly, get into the habit of taking note of your psychological health on a regular basis — and especially after a stressful, difficult, or emotionally painful situation.

Yes, practicing emotional hygiene takes a little time and effort, but it will seriously elevate your entire quality of life, the good doctor promises.

2. Sit down and have a chat

Friends are one thing; family, on the other hand, can be an entirely different (and potentially more stressful) situation. More than likely, you’ll get caught in a discussion you don’t want to be a part of, or a seemingly harmless conversation will take a frustrating turn.

There’s no reason to reinvent the conversation. But it’s useful to understand how to expertly pivot a talk between you and another person.

Radio host Celeste Headlee (TED Talk: 10 ways to have a better conversation) interviews people for her day job. As such, she accrued a helpful set of strategies and rules to follow when a discussion doesn’t go quite as planned. Check out her article (above) for insights on what to do when:

  • You want to go beyond small talk to have a meaningful conversation
  • An awkward silence happens and you don’t know what to say next
  • It seems like the other person isn’t listening
  • You or another person starts a conversation that might end in an argument
  • You unintentionally offend someone

3. Make new memories while resurfacing old (good) ones

One of the best parts of getting everyone together for holidays or similar events is reminiscing, gathering around and talking about when your grandmother was young or that funny thing your cousin did when he was seven that no one is quite ready to let go of just yet. Resurfacing those moments everyone can enjoy, in one way or another, is a great way to fortify existing bonds and feel closer to loved ones. Who knows, from these stories, you may uncover ones never heard before.

StoryCorps, a nonprofit whose founder, Dave Isay, won the 2015 TED Prize, is dedicated to preserving humanity’s cultural history through storytelling and has an expansive collection of great questions to ask just about anyone.

These questions are great for really digging into memories that are both cherished and important to preserve for generations to come. It may be interesting, fascinating and potentially emotional to hear about a loved one's thoughts, feelings and experiences from their lifetime.

For a good place to start, you can download the Storycorps app to start recording from your phone, which will walk you through a few simple instructions. Then, you can start with these questions to warm up the conversation:

  • What was your childhood like?
  • Tell me about the traditions that have been passed down through our family. How did they get started?
  • What are your most vivid memories of school?
  • How did you meet your wife/husband/partner?
  • What piece of wisdom or advice would you like to share with future generations?

4. Or if you're far away and can't make it home to visit your friends and family regularly — get old-fashioned.

With the speed and ease of email and texting, it may be hard to see the point in sitting down with a pen and paper.

But being abroad or unable to afford a ticket home is a reality that can feel equal parts isolating and emotionally exhausting, no matter how many Skype sessions you have. Letter-writing is a lasting way to connect with your loved ones, a tangible collection of your thoughts and feelings at a specific point in your life. If you can't always send home souvenirs, a thoughtful letter is a delightful, tangible reminder that you care — and helps the person on the receiving end just as much.

Lakshmi Pratury makes a beautiful case for letters to remember the people in your life, that they are a way to keep a person with you long after they’ve passed.

However, if family isn't so big in your life for one reason or another, or you'd like to send some thoughtful words to someone who may need them — write a letter to a stranger. The concept may sound strange, but the holiday season is habitually a rough one for those without close connections.

Hannah Brencher's mother always wrote her letters. So when she felt herself sinking into depression after college, she did what felt natural — she wrote love letters and left them for strangers to find. The act has become a global initiative, The World Needs More Love Letters, which rushes handwritten letters to those in need of a boost. Brencher's website will set you up with how to format your letter, who to write it to, and even the return address to write on the envelope.

So, those are four ways to take care of yourself, but there are also plenty of ways to give back during the holiday season and year-round. Happy holidays from the TED staff!


Krebs on SecurityU.K. Man Avoids Jail Time in vDOS Case

A U.K. man who pleaded guilty to launching more than 2,000 cyberattacks against some of the world’s largest companies has avoided jail time for his role in the attacks. The judge in the case reportedly was moved by pleas for leniency that cited the man’s youth at the time of the attacks and a diagnosis of autism.

In early July 2017, the West Midlands Police in the U.K. arrested 19-year-old Stockport resident Jack Chappell and charged him with using a now-defunct attack-for-hire service called vDOS to launch attacks against the Web sites of Amazon, BBC, BT, Netflix, T-Mobile, Virgin Media, and Vodafone, between May 1, 2015 and April 30, 2016.

One of several taunting tweets Chappell sent to his DDoS victims.

Chappell also helped launder money for vDOS, which until its demise in September 2016 was by far the most popular and powerful attack-for-hire service — allowing even completely unskilled Internet users to launch crippling assaults capable of knocking most Web sites offline.

Using the Twitter handle @fractal_warrior, Chappell would taunt his victims while  launching attacks against them. The tweet below was among several sent to the Jisc Janet educational support network and Manchester College, where Chappell was a student. In total, Chappell attacked his school at least 21 times, prosecutors showed.

Another taunting Chappell tweet.

Chappell was arrested in April 2016 after investigators traced his Internet address to his home in the U.K. For more on the clues that likely led to his arrest, check out this story.

Nevertheless, the judge in the case was moved by pleas from Chappell’s lawyer, who argued that his client was just an impressionable youth at the time who has autism, a range of conditions characterized by challenges with social skills, repetitive behaviors, speech and nonverbal communication.

The defense called on an expert who reportedly testified that Chappell was “one of the most talented people with a computer he had ever seen.”

“He is in some ways as much of a victim, he has been exploited and used,” Chappell’s attorney Stuart Kaufman told the court, according to the Manchester Evening News. “He is not malicious, he is mischievous.”

The same publication quoted Judge Maurice Greene at Chappell’s sentencing this week, saying to the young man: “You were undoubtedly taken advantage of by those more criminally sophisticated than yourself. You would be extremely vulnerable in a custodial element.”

Judge Greene decided to suspend a sentence of 16 months at a young offenders institution; Chappell will instead “undertake 20 days rehabilitation activity,” although it’s unclear exactly what that will entail.

ANALYSIS/RANT

It’s remarkable when someone so willingly and gleefully involved in a crime spree such as this can emerge from it looking like the victim. “Autistic Hacker Had Been Exploited,” declared a headline about the sentence in the U.K. newspaper The Times.

After reading the coverage of this case in the press, I half expected to see another story saying someone had pinned a medal on Chappell or offered him a job.

Jack Chappell, outside of a court hearing in the U.K. earlier this year.

Yes, Chappell will have the stain of a criminal conviction on his record, and yes autism can be a very serious and often debilitating illness. Let me be clear: I am not suggesting that offenders like this young man should be tossed in jail with violent criminals.

But courts around the world continue to send a clear message that young men essentially can do whatever they like when it comes to DDoS attacks and that there will be no serious consequences as a result.

Chappell launched his attacks via vDOS, which provided a simple, point-and-click service that allowed even completely unskilled Internet users to launch massive DDoS attacks. vDOS made more than $600,000 in just two of the four years it was in operation, launching more than 150,000 attacks against thousands of victims (including this site).

In September 2016, vDOS was taken offline and its alleged co-creators — two Israeli men who created the business when they were 14 and 15 years old — were arrested and briefly detained by Israeli authorities. But despite assurances that the men (now adults) would be tried for their crimes, neither has been prosecuted.

In July 2017, a court in Germany issued a suspended sentence for Daniel Kaye, a 29-year-old man who allegedly launched extortionist DDoS attacks against several bank Web sites.

After the source code for the Mirai botnet malware was released in September 2016, Kaye built his own Mirai botnet and used it in several high-profile attacks, including a fumbled assault that knocked out Internet service to more than 900,000 Deutsche Telekom customers.

In his trial, Kaye admitted that a customer of his paid him $10,000 to attack the Liberian ISP Lonestar. He’s also thought to have launched DDoS attacks on Lloyds Banking Group and Barclays banks in January 2017. Kaye is now facing related cybercrime charges in the U.K.

Last week, the U.S. Justice Department unsealed the cases of two young men in the United States who have pleaded guilty to co-authoring Mirai, an “Internet of Things” (IoT) malware strain that has been used to create dozens of copycat Mirai botnets responsible for countless DDoS attacks over the past 15 months. The defendants in that case launched highly disruptive and extortionist attacks against a number of Web sites and used their creation to conduct lucrative click fraud schemes.

Like Chappell, the core author of Mirai — 21-year-old Fanwood, N.J. resident Paras Jha — launched countless DDoS attacks against his school, costing Rutgers University between $3.5 million and $9 million to defend against and clean up after the assaults (the actual damages will be decided at Jha’s sentencing in March 2018).

Time will tell if Kaye or Jha and his co-defendants receive any real punishment for their crimes. But I would submit that if we don’t have the stomach to put these “talented young hackers” in jail when they’re ultimately found guilty, perhaps we should consider harnessing their skills in less draconian but still meaningfully punitive ways, such as requiring them to serve several years participating in programs designed to keep other kids from following in their footsteps.

Doing anything less smacks of a disservice to justice, glorifies DDoS as an essentially victimless crime, and offers little deterrent that might otherwise make such cases less common going forward.

Worse Than FailureNotepad Development

Nelson thought he hit the jackpot by getting a paid internship the summer after his sophomore year of majoring in Software Engineering. Not only was it a programming job, it was in his hometown at the headquarters of a large hardware store chain known as ValueAce. Making money and getting real world experience was the ideal situation for a college kid. If it went well enough, perhaps he could climb the ranks of ValueAce IT and never have to relocate to find a good paying job.

A notebook with a marker and a pen resting on it

He was assigned to what was known as the "Internet Team", the group responsible for the ValueAce eCommerce website. It all sounded high-tech and fun, sure to continue to inspire Nelson towards his intended career. On his first day he met his supervisor, John, who escorted him to his first-ever cubicle. He sat down in his squeaky office chair and soaked in the sterile office environment.

"Welcome aboard! This is your development machine," John said, pressing the power buttons on an aging desktop and CRT monitor. "You can start by setting up everything you will need to do your development. I'll be just down the hall in my office if you have any issues!"

Eager to get started, Nelson went down the checklist John provided. He would have to install TortoiseSVN, check out the Internet Team's codebase, then install all the dependencies. Nelson figured it would take the rest of the day, then by Tuesday morning he could get into some real coding. That's when the security prompts started.

Anything Nelson tried to access was met with an abrupt "Access denied" prompt and login dialog that asked for admin credentials. "Ok... I guess they just don't want me installing any old thing on here, makes sense," Nelson said to himself. He tried to do a few other benign things like launching Calculator and Notepad, only to be met with the same roadblocks. He went down the hall to fetch John to find out how to proceed.

"Dammit, they just implemented a bunch of new security policies on our workstations. Only managers like me can do anything on our own machines," John bemoaned. "I'll come by and enter my credentials for now so you can get set up."

The trick worked and Nelson was able to get the codebase and begin poking around on it. He was curious about some of the things they were doing in code, so he opened a web browser to search for them. He was allowed to open the browser only to get nothing but "The page is not available" and a login prompt for any site he tried to browse. "Son of a..." he muttered under his breath. He got up for another trip to John's office.

"Hey John, sorry to bother you again. You'll love this one. As a member of the Internet Team, I'm unable to access the internet," Nelson quipped with a nervous chuckle. "I was just hoping to learn some things about how the code works."

"Oh no, don't even bother with that," John told him, rolling his eyes. "Internet is a four-letter word around here if you aren't a manager. The internet is dark and full of terrors and is not to be trusted in the hands of anyone else. They expect you to learn everything from good old-fashioned books." John motioned to his vast library of programming books. Nelson grabbed a few and took them home to study after a frustrating initial day.

After a late-night cram session, Nelson arrived Tuesday morning prepared to actually accomplish something. He hoped to fire up a local instance of the eCommerce site and make some modifications just to see what he could do. As it turned out, he still couldn't do much of anything. He was still getting blocked on local web pages. To add insult to injury, any of the .aspx pages he had tried to access were replaced with the HTML for "page not found" in source.

After travelling the familiar route to John's office, Nelson explained what happened, hoping to borrow admin credentials again. "Sorry, kid. I can't help you," John told him, sounding dejected. "The network overlords noticed that I logged in to your machine, so they wrote me up for it. Any coding you want to do will have to be done via notepad."

"I already said I can't even launch Notepad though... literally everything is locked down!" Nelson exclaimed, growing further irritated.

"Oh I didn't mean Notepad the program. An actual notepad." John pulled a spiral pad of paper and a pen out of his drawer and slid it over to Nelson." Write down what you want on here, give it to me, and I'll enter it into source and check it in. That's the best I can do."

Nelson grabbed his new "development environment" and went back to his desk to brood. It was going to be a long summer. Perhaps Software Engineering wasn't the right major for him. Maybe something like Anthropology or Art would be more fulfilling.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Planet Linux AustraliaRussell Coker: Designing Shared Cars

Almost 10 years ago I blogged about car sharing companies in Melbourne [1]. Since that time the use of such services appears to have slowly grown (judging by the slow growth in the reserved parking spots for such cars). This isn’t the sudden growth that public transport advocates and the operators of those companies hoped for, but it is still positive. I have just watched the documentary The Human Scale [2] (which I highly recommend) about the way that cities are designed for cars rather than for people.

I think that it is necessary to make cities more suited to the needs of people, and that car share and car hire companies are an important part of converting from a car-based city to a human-based city. As this sort of change happens, share cars will make up an increasing portion of new car sales, and car companies will have to design cars to better suit shared use.

Personalising Cars

Luxury car brands like Mercedes support storing the preferred seat position for each driver. Once the basic step of maintaining separate driver profiles is done, it's an easy second step to have them accessed over the Internet and also to store settings like preferred radio stations, Bluetooth connection profiles, etc. For a car share company it wouldn't be particularly difficult to extrapolate settings based on previous use, e.g. knowing that I'm tall and using the default settings for a tall person every time I get in a shared car that I haven't driven before. Having Bluetooth connections follow the user would mean having one slave address per customer instead of the current practice of one per car; the addressing is 48-bit (about 2.8 × 10^14 possible addresses) so this shouldn't be a problem.

Most people accumulate many items in their car, some they don’t need, but many are needed. Some of the things in my car are change for parking meters, sunscreen, tools, and tissues. Car share companies have deals with councils for reserved parking spaces so it wouldn’t be difficult for them to have a deal for paying for parking and billing the driver thus removing the need for change (and the risk of a car window being smashed by some desperate person who wants to steal a few dollars). Sunscreen is a common enough item in Australia that a car share company might just provide it as a perk of using a shared car.

Most people have items like tools, a water bottle, and spare clothes that can’t be shared which tend to end up distributed in various storage locations. The solution to this might be to have a fixed size storage area, maybe based on some common storage item like a milk crate. Then everyone who is a frequent user of shared cars could buy a container designed to fit that space which is divided in a similar manner to a Bento box to contain whatever they need to carry.

There is a lot of research into having computers observing the operation of a car and warning the driver or even automatically applying the brakes to avoid a crash. For shared cars this is more important as drivers won’t necessarily have a feel for the car and can’t be expected to drive as well.

Car Sizes

Generally cars are designed to have 2 people (sports car, Smart car, van/ute/light-truck), 4/5 people (most cars), or 6-8 people (people movers). These configurations are based on what most people are able to use all the time. Most car travel involves only one adult. Most journeys appear to have no passengers or only children being driven around by a single adult.

Cars are designed for what people can drive all the time rather than what would best suit their needs most of the time. Almost no-one is going to buy a personal car that can only take one person even though most people who drive will be on their own for most journeys. Most people will occasionally need to take passengers and that occasional need will outweigh the additional costs in buying and fueling a car with the extra passenger space.

I expect that when car share companies get a larger market they will have several vehicles in the same location to allow users to choose which to drive. If such a choice is available then I think that many people would sometimes choose a vehicle with no space for passengers but extra space for cargo and/or being smaller and easier to park.

For the common case of one adult driving small children the front passenger seat can’t be used due to the risk of airbags killing small kids. A car with storage space instead of a front passenger seat would be more useful in that situation.

Some of these possible design choices can also be after-market modifications. I know someone who removed the rear row of seats from a people-mover to store the equipment for his work. That gave a vehicle with plenty of space for his equipment while also having a row of seats for his kids. If he was using shared vehicles he might have chosen to use either a vehicle well suited to cargo (a small van or ute) or a regular car for transporting his kids. It could be that there’s an untapped demand for ~4 people in a car along with cargo so a car share company could remove the back row of seats from people movers to cater to that.

CryptogramDetails on the Mirai Botnet Authors

Brian Krebs has a long article on the Mirai botnet authors, who pled guilty.

Worse Than FailureCodeSOD: How is an Employee ID like a Writing Desk?

Chris D. has a problem. We can see a hint of the kind of problem he needs to deal with by looking at this code:

FUNCTION WHOIS (EMPLOYEE_ID IN VARCHAR2, Action_Date IN DATE)
   RETURN varchar2
IS
  Employee_Name varchar2(50);
BEGIN
   SELECT  employee_name INTO Employee_Name
     FROM eyps_manager.tbemployee_history
    WHERE  ROWNUM=1 AND   employee_id = EMPLOYEE_ID
          AND effective_start_date <= Action_Date
          AND (Action_Date < effective_end_date OR effective_end_date IS NULL);

   RETURN (Employee_Name);
END WHOIS;

This particular function was written many years ago. The developer responsible, Artie, was fired a short time later, because he broke the production database in an unrelated accident involving a badly aimed `DELETE FROM…`.

It’s a weird function- given an EMPLOYEE_ID, it returns an EMPLOYEE_NAME… but why all this work? Why check dates?

This particular business system was purchased back in 1997. The vendor didn’t ship the product with anything so mundane as an EMPLOYEES table- since every business was a unique and special snowflake, there was no way for the vendor to give them exactly the employee-tracking features they needed, so instead it fell to the customer to build the employee features themselves. The vendor would then take that code on for maintenance.

Which brings us to Artie. Artie was told to implement some employee tracking for the sales team. So he did. He gave everyone on the sales team an EMPLOYEE_ID, but instead of using an auto-numbered sequence, or a UUID, he invented a complicated algorithm for generating his own kind-of-unique IDs. These were grouped in blocks, so, for example, all of the IDs in the range “AA1000-AA9999” were assigned to widget sales, while “AB1000A-AB9999A” were for office supply sales.

This introduced a new problem. You see, EMPLOYEE_ID wasn’t a unique ID for an employee. It was actually a sales portfolio ID, a pile of customers and their orders and sales. Sales people would swap portfolios around as one employee left, or a new hire took on a new portfolio. This made it impossible to know who was actually responsible for which sale.

Artie was ready to solve that problem, though, as he quickly added the EFFECTIVE_START_DATE and EFFECTIVE_END_DATE fields. Instead of updating rows as portfolios moved around, you could simply add new rows, keeping an ongoing history of which employee held which portfolio at any given time.
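
What Artie had stumbled into is the textbook effective-dated history pattern. Purely as an illustration of the lookup that the WHOIS function above is reaching for (not the production schema), here is a minimal sketch in Python with entirely made-up table data and names:

from datetime import date
from typing import Optional

# Hypothetical stand-in for tbemployee_history: each row records which employee
# held a given portfolio (the misnamed EMPLOYEE_ID) and over what date span.
# An end date of None means the assignment is still current.
history = [
    ("AA1000", "Alice", date(2015, 1, 1), date(2016, 7, 1)),
    ("AA1000", "Bob",   date(2016, 7, 1), None),
]

def who_held(portfolio_id: str, on: date) -> Optional[str]:
    """Return the employee holding the portfolio on the given date, if any."""
    for pid, employee, start, end in history:
        # Half-open interval: start <= on < end (or no end yet).
        if pid == portfolio_id and start <= on and (end is None or on < end):
            return employee
    return None

# who_held("AA1000", date(2016, 1, 1))  -> "Alice"
# who_held("AA1000", date(2017, 1, 1))  -> "Bob"

The open-ended None end date marks the current holder, and the half-open interval test is the same predicate the effective_start_date/effective_end_date comparison in WHOIS performs.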

There’s also a UI to manage this data, which was written circa 2000. It is a simple data-grid with absolutely no validation on any of the fields, which means anyone using it corrupts data on a fairly regular basis, and then Chris or one of his peers has to go into the production database and manually correct the data.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

Krebs on SecurityBuyers Beware of Tampered Gift Cards

Prepaid gift cards make popular presents and no-brainer stocking stuffers, but before you purchase one be on the lookout for signs that someone may have tampered with it. A perennial scam that picks up around the holidays involves thieves who pull back and then replace the decals that obscure the card’s redemption code, allowing them to redeem or transfer the card’s balance online after the card is purchased by an unwitting customer.

Last week KrebsOnSecurity heard from Colorado reader Flint Gatrell, who reached out after finding that a bunch of Sam’s Club gift cards he pulled off the display rack at Wal-Mart showed signs of compromise. The redemption code was obscured by a watermarked sticker that is supposed to make it obvious if it has been tampered with, and many of the cards he looked at clearly had stickers that had been peeled back and then replaced.

“I just identified five fraudulent gift cards on display at my local Wal-Mart,” Gatrell said. “They each had their stickers covering their codes peeled back and replaced. I can only guess that the thieves call the service number to monitor the balances, and try to consume them before the victims can.  I’m just glad I thought to check!”

In the picture below, Gatrell is holding up three of the Sam’s Club cards. The top two showed signs of tampering, but the one on the bottom appeared to be intact.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatrell.

Kevin Morrison, a senior analyst on the retail banking and payments team at market analysis firm Aite Group, said the gift card scheme is not new but that it does tend to increase in frequency around the holidays, when demand for the cards is far higher.

“Store employees are instructed to look for abnormalities at the [register] but this happens [more] around the holiday season as attention spans tend to shorten,” he said. “While gift card packaging has improved and some safe-guards put in place, fraudsters look for the weakest link and hit hard when they find one.”

Gift cards make great last-minute gifts, but don’t let your guard down in your haste to wrap up your holiday shopping. There are so many variations on the above-described scheme that many stores have taken to keeping gift cards at or behind the register, where cashiers can more easily spot customers trying to tamper with the cards. As a result, stores that take this basic precaution may be the safest place to purchase gift cards.

Update, Dec. 20, 7:30 a.m. ET: Mr. Gatrell just shared a link to this story, which incredibly is about another man who was found to have bought tampered gift cards in the very same Wal-Mart where Gatrell found the above-pictured cards.

That story includes some other security tips when buying and/or giving gift cards:

When purchasing a gift card, pull from the middle of the pack because those are less likely to be tampered with. Also, get a receipt when buying the card so you have proof of the purchase. Include that receipt if you give the card as a gift. Finally, activate the card quickly and use it quickly and keep a close eye on the balance.

CryptogramGCHQ Found -- and Disclosed -- a Windows 10 Vulnerability

Now this is good news. The UK's National Cyber Security Centre (NCSC) -- part of GCHQ -- found a serious vulnerability in Windows Defender (their anti-virus component). Instead of keeping it secret and all of us vulnerable, it alerted Microsoft.

I'd like to believe the US does this, too.

Worse Than FailureCodeSOD: Titration Frustration

From submitter Christoph comes a function that makes your average regex seem not all that bad, actually:

According to "What is a Titration?" we learn that "a titration is a technique where a solution of known concentration is used to determine the concentration of an unknown solution." Since this is an often needed calculation in a laboratory, we can write a program to solve this problem for us.

Part of the solver is a formula parser, which needs to accept variable names (either lower or upper case letters), decimal numbers, and any of '+-*/^()' for mathematical operators. Presented here is the part of the code for the solveTitration() function that deals with parsing of the formula. Try to read it in an 80 chars/line window. Once with wrapping enabled, and once with wrapping disabled and horizontal scrolling. Enjoy!
String solveTitration(char *yvalue)
{
    String mreport;
    lettere = 0;
    //now we have to solve the system of equations
    //yvalue contains the equation of Y-axis variable
    String tempy = "";
    end = 1;
    mreport = "";
    String tempyval;
    String ptem;
    for (int i = 0; strlen(yvalue) + 1; ++i) {
        if (!(yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='+' || 
yvalue[i]=='-' || yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || 
yvalue[i]=='(' || yvalue[i]==')' || yvalue[i]=='Q' || yvalue[i]=='W' || 
yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || yvalue[i]=='Y' || 
yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || yvalue[i]=='P' || 
yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || yvalue[i]=='F' || 
yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || yvalue[i]=='K' || 
yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || yvalue[i]=='C' || 
yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || yvalue[i]=='M' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',')) {
            break; //if current value is not a permitted value, this means that something is wrong
        }
        if (yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='Q' || 
yvalue[i]=='W' || yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || 
yvalue[i]=='Y' || yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || 
yvalue[i]=='P' || yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || 
yvalue[i]=='F' || yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || 
yvalue[i]=='K' || yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || 
yvalue[i]=='C' || yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || 
yvalue[i]=='M' || yvalue[i]=='.' || yvalue[i]==',') {
            lettere = 1; //if lettere == 0 then the equation contains only mnumbers
        }
        if (yvalue[i]=='+' || yvalue[i]=='-' || yvalue[i]=='^' || 
yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || yvalue[i]==')' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',') {
            tempyval = tempyval + String(yvalue[i]);
        } else {
            tempy = tempy + String(yvalue[i]);
            for (int i = 0; i < uid.tableWidget->rowCount(); ++i) {
                TableItem *titem = uid.table->item(i, 0);
                TableItem *titemo = uid.table->item(i, 1);
                if (!titem || titem->text().isEmpty()) {
                    break;
                } else {
                    if (tempy == uid.xaxis->text()) {
                        tempyval = uid.xaxis->text();
                        tempy = "";
                    }
                    ... /* some code omitted here */
                    if (tempy!=uid.xaxis->text()) {
                        if (yvalue[i]=='+' || yvalue[i]=='-' 
|| yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || 
yvalue[i]==')' || yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || 
yvalue[i]=='4' || yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || 
yvalue[i]=='8' || yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || 
yvalue[i]==',') {
                            //actually nothing
                        } else {
                            end = 0;
                        }
                    }
                }
            }
        } // simbol end
        if (!tempyval.isEmpty()) {
            mreport = mreport + tempyval;
        }
        tempyval = "";
    }
    return mreport;
}
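
For contrast, the entire character-classification job that the nested conditionals above perform one comparison at a time is just a set-membership test. Here is a minimal sketch of the same accepted-character logic, written in Python purely for brevity; it is not the submitter's parser, and it follows the stated requirements rather than the original's exact behavior:

import string

LETTERS   = set(string.ascii_letters)        # variable names
NUMERIC   = set(string.digits) | {'.', ','}  # decimal numbers
OPERATORS = set('+-^*/()')                   # mathematical operators
ALLOWED   = LETTERS | NUMERIC | OPERATORS

def scan_formula(formula):
    """Walk the formula once; reject illegal characters, flag any letters."""
    has_letters = False
    for ch in formula:
        if ch not in ALLOWED:
            raise ValueError("illegal character %r in formula" % ch)
        if ch in LETTERS:
            has_letters = True
    return has_letters  # False means the equation contains only numbers

# scan_formula("2*x+3.5")  -> True
# scan_formula("(1+2)/3")  -> False
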
[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

,

Rondam RamblingsBook review: "A New Map for Relationships: Creating True Love at Home and Peace on the Planet" by Dorothie and Martin Hellman

We humans dream of meeting our soul mates, someone to be Juliet to our Romeo, Harry to our Sally, Jobs to our Woz, Larry to our Sergey.  Sadly, the odds are stacked heavily against us.  If you do the math you will find that in a typical human lifetime we can only hope to meet a tiny fraction of our 7 billion fellow humans.  And if you factor in the time it takes to properly vet someone to see if

Krebs on SecurityThe Market for Stolen Account Credentials

Past stories here have explored the myriad criminal uses of a hacked computer, the various ways that your inbox can be spliced and diced to help cybercrooks ply their trade, and the value of a hacked company. Today’s post looks at the price of stolen credentials for just about any e-commerce, bank site or popular online service, and provides a glimpse into the fortunes that an enterprising credential thief can earn selling these accounts on consignment.

Not long ago in Internet time, your typical cybercriminal looking for access to a specific password-protected Web site would most likely visit an underground forum and ping one of several miscreants who routinely leased access to their “bot logs.”

These bot log sellers were essentially criminals who ran large botnets (collections of hacked PCs) powered by malware that can snarf any passwords stored in the victim’s Web browser or credentials submitted into a Web-based login form. For a few dollars in virtual currency, a ne’er-do-well could buy access to these logs, or else he and the botmaster would agree in advance upon a price for any specific account credentials sought by the buyer.

Back then, most of the stolen credentials that a botmaster might have in his possession typically went unused or unsold (aside from the occasional bank login that led to a juicy high-value account). Indeed, these plentiful commodities held by the botmaster for the most part were simply not a super profitable line of business and so went largely wasted, like bits of digital detritus left on the cutting room floor.

But oh, how times have changed! With dozens of sites in the underground now competing to purchase and resell credentials for a variety of online locations, it has never been easier for a botmaster to earn a handsome living based solely on the sale of stolen usernames and passwords.

If the old adage about a picture being worth a thousand words is true, the one directly below is priceless because it illustrates just how profitable the credential resale business has become.

This screen shot shows the earnings panel of a crook who sells stolen credentials for hundreds of Web sites to a dark web service that resells them. This botmaster only gets paid when someone buys one of his credentials. So far this year, customers of this service have purchased more than 35,000 credentials he’s sold to this service, earning him more than $288,000 in just a few months.

The image shown above is the wholesaler division of “Carder’s Paradise,” a bustling dark web service that sells credentials for hundreds of popular Web destinations. The screen shot above is an earnings panel akin to what you would see if you were a seller of stolen credentials to this service — hence the designation “Seller’s Paradise” in the upper left hand corner of the screen shot.

This screen shot was taken from the logged-in account belonging to one of the more successful vendors at Carder’s Paradise. We can see that in just the first seven months of 2017, this botmaster sold approximately 35,000 credential pairs via the Carder’s Paradise market, earning him more than $288,000. That’s an average of $8.19 for each credential sold through the service.

Bear in mind that this botmaster only makes money based on consignment: Regardless of how much he uploads to Seller’s Paradise, he doesn’t get paid for any of it unless a Carder’s Paradise customer chooses to buy what he’s selling.

Fortunately for this guy, almost 9,000 different customers of Carder's Paradise chose to purchase one or more of his username and password pairs. It was not possible to tell from this seller's account how many of the credential pairs he contributed to the service went unsold, but it's a safe bet that it was far more than 35,000.

[A side note is in order here because there is some delicious irony in the backstory behind the screenshot above: The only reason a source of mine was able to share it with me was because this particular seller re-used the same email address and password across multiple unrelated cybercrime services].

Based on the prices advertised at Carder’s Paradise (again, Carder’s Paradise is the retail/customer side of Seller’s Paradise) we can see that the service on average pays its suppliers about half what it charges customers for each credential. The average price of a credential for more than 200 different e-commerce and banking sites sold through this service is approximately $15.

Part of the price list for credentials sold at this dark web ID theft site.

Indeed, fifteen bucks is exactly what it costs to buy stolen logins for airbnb.com, comcast.com, creditkarma.com, logmein.com and uber.com. A credential pair from AT&T Wireless — combined with access to the victim’s email inbox — sells for $30.

The most expensive credentials for sale via this service are those for the electronics store frys.com ($190). I’m not sure why these credentials are so much more expensive than the rest, but it may be because thieves have figured out a reliable and very profitable way to convert stolen frys.com customer credentials into cash.

Usernames and passwords to active accounts at military personnel-only credit union NavyFederal.com fetch $60 apiece, while credentials to various legal and data aggregation services from Thomson Reuters properties command a $50 price tag.

The full price list of credentials for sale by this dark web service is available in this PDF. For CSV format, see this link. Both lists are sorted alphabetically by Web site name.

This service doesn’t just sell credentials: It also peddles entire identities — indexed and priced according to the unwitting victim’s FICO score. An identity with a perfect credit score (850) can demand as much as $150.

Stolen identities with high credit scores fetch higher prices.

And of course this service also offers the ability to pull full credit reports on virtually any American — from all three major credit bureaus — for just $35 per bureau.

It costs $35 through this service to pull someone’s credit file from the three major credit bureaus.

Plenty of people began freaking out earlier this year after a breach at big-three credit bureau Equifax jeopardized the Social Security Numbers, dates of birth and other sensitive data on more than 145 million Americans. But as I have been trying to tell readers for many years, this data is broadly available for sale in the cybercrime underground on a significant portion of the American populace.

If the threat of identity theft has you spooked, place a freeze on your credit file and on the file of your spouse (you may even be able to do this for your kids). Credit monitoring is useful for letting you know when someone has stolen your identity, but these services can’t be counted on to stop an ID thief from opening new lines of credit in your name.

They are, however, useful for helping to clean up identity theft after-the-fact. This story is already too long to go into the pros and cons of credit monitoring vs. freezes, so I’ll instead point to a recent primer on the topic and urge readers to check it out.

Finally, it’s a super bad idea to re-use passwords across multiple sites. KrebsOnSecurity this year has written about multiple, competing services that sell or sold access to billions of usernames and passwords exposed in high profile data breaches at places like Linkedin, Dropbox and Myspace. Crooks pay for access to these stolen credential services because they know that a decent percentage of Internet users recycle the same password at multiple sites.

One alternative to creating and remembering strong, lengthy and complex passwords for every important site you deal with is to outsource this headache to a password manager.  If the online account in question allows 2-factor authentication (2FA), be sure to take advantage of that.

Two-factor authentication makes it much harder for password thieves (or their customers) to hack into your account just by stealing or buying your password: If you have 2FA enabled, they also would need to hack that second factor (usually your mobile device) before being able to access your account. For a list of sites that support 2FA, check out twofactorauth.org.

CryptogramLessons Learned from the Estonian National ID Security Flaw

Estonia recently suffered a major flaw in the security of their national ID card. This article discusses the fix and the lessons learned from the incident:

In the future, the infrastructure dependency on one digital identity platform must be decreased, the use of several alternatives must be encouraged and promoted. In addition, the update and replacement capacity, both remote and physical, should be increased. We also recommend the government to procure the readiness to act fast in force majeure situations from the eID providers. While deciding on the new eID platforms, the need to replace cryptographic primitives must be taken into account -- particularly the possibility of the need to replace algorithms with those that are not even in existence yet.

Worse Than FailurePromising Equality

One can often hear the phrase, “modern JavaScript”. This is a fig leaf, meant to cover up a sense of shame, for JavaScript has a bit of a checkered past. It started life as a badly designed language, often delivering badly conceived features. It has a reputation for slowness, crap code, and things that make you go “wat?”

Thus, “modern” JavaScript. It’s meant to be a promise that we don’t write code like that any more. We use the class keyword and transpile from TypeScript and write fluent APIs and use promises. Yes, a promise to use promises.

Which brings us to Dewi W, who just received some code from contractors. It has some invocations that look like this:

safetyLockValidator(inputValue, targetCode).then(function () {
        // You entered the correct code.
}).catch(function (err) {
        // You entered the wrong code.
})

The use of then and catch, in this context, tells us that they’re using a Promise, presumably to wrap up some asynchronous operation. When the operation completes successfully, the then callback fires, and if it fails, the catch callback fires.

But, one has to wonder… what exactly is safetyLockValidator doing?

safetyLockValidator = function (input, target) {
        return new Promise(function (resolve, reject) {
                if (input === target)
                        return resolve()
                else return reject('Wrong code');
        })
};

It’s just doing an equality test. If input equals target, the Promise resolve()s- completes successfully. Otherwise, it reject()s. Well, at least it’s future-proofed against the day we switch to using an EaaS platform- “Equality as a Service”.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaColin Charles: Percona Live Santa Clara 2018 CFP

Percona Live Santa Clara 2018 call for papers ends fairly soon — December 22 2017. It may be extended, but I suggest getting a topic in ASAP so the conference committee can view everything fairly and quickly. Remember this conference is bigger than just MySQL, so please submit topics on MongoDB, other databases like PostgreSQL, time series, etc., and of course MySQL.

What are you waiting for? Submit TODAY!
(It goes without saying that speakers get a free pass to attend the event.)

Don Martiquick question on tracking protection

One quick question for anyone who still isn't convinced that tracking protection needs to be a high priority for web browsers in 2018. Web tracking isn't just about items from your online shopping cart following you to other sites. Users who are vulnerable to abusive practices for health or other reasons have tracking protection needs too.

Screenshot from the American Cancer Society site, showing 24 web trackers

Who has access to the data from each of the 24 third-party trackers that appear on the American Cancer Society's Find Cancer Treatment and Support page, and for what purposes can they use the data?

,

Don MartiForbidden words

You know how the US government's Centers for Disease Control and Prevention is now forbidden from using certain words?

vulnerable
entitlement
diversity
transgender
fetus
evidence-based
science-based

(source: Washington Post)

Well, in order to help slow down the spread of political speech enforcement that is apparently stopping all of us cool innovator type people from saying the Things We Can't Say, here's a Git hook to make sure that every time you blog, you include at least one of the forbidden words.

If you blog without including one of the forbidden words, you're obviously internalizing censorship and need more freedom, which you can maybe get by getting out of California for a while. After all, a lot of people here seem to think that "innovation" is building more creepy surveillance as long as you call it "growth hacking" or writing apps to get members of the precariat to do the stuff that your Mom used to do for you.

You only have to include one forbidden word every time you commit a blog entry, not in every file. You only need forbidden words in blog entries, not in scripts or templates. You can always get around the forbidden word check with the --no-verify command-line option.
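
If you would rather see the shape of the thing than clone the repo, a check like this fits in a short pre-commit hook. Below is a hedged sketch in Python; the file extensions, paths and word list are my assumptions, and the actual script on GitHub may differ:

#!/usr/bin/env python3
# Sketch of .git/hooks/pre-commit (make it executable). Only blog entries are
# checked; scripts and templates are exempt, and --no-verify still bypasses it.
import subprocess
import sys

FORBIDDEN = ["vulnerable", "entitlement", "diversity", "transgender",
             "fetus", "evidence-based", "science-based"]

def staged_blog_entries():
    """Return staged files that look like blog entries (assumed extensions)."""
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"])
    return [f for f in out.decode().splitlines() if f.endswith((".md", ".html"))]

def main():
    entries = staged_blog_entries()
    if not entries:
        return 0  # nothing blog-like in this commit
    text = ""
    for path in entries:
        with open(path, encoding="utf-8") as fh:
            text += fh.read().lower()
    if any(word in text for word in FORBIDDEN):
        return 0  # at least one forbidden word somewhere in the commit
    print("Commit rejected: blog entries must include at least one forbidden "
          "word (use --no-verify to override).")
    return 1

if __name__ == "__main__":
    sys.exit(main())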

Suggestions and pull requests welcome. script on GitHub

,

Rondam RamblingsHere comes the next west coast mega-drought

As long as I'm blogging about extreme weather events I would also like to remind everyone that we just came off of the longest drought in California history, followed immediately by the wettest rainy season in California history.  Now it looks like that history could very well be starting to repeat itself.  The weather pattern that caused the six-year-long Great Drought is starting to form again.

Rondam RamblingsThis should convince the climate skeptics. But it probably won't.

One of the factoids that climate-change denialists cling to is the fact (and it is a fact) that major storms haven't gotten measurably worse.  The damage from storms has gotten measurably worse, but that can be attributed to increased development on coastlines.  It might be that the storms themselves have gotten worse, but the data is not good enough to disentangle the two effects. But storms

Cory DoctorowTalking Walkaway on the Barnes and Noble podcast

I recorded this interview last summer at San Diego Comic-Con; glad to hear it finally live!

Authors are, without exception, readers, and behind every book there is…another book, and another. In this episode of the podcast, we’re joined by two writers for conversations about the vital books and ideas that influence and inform their own work. First, Cory Doctorow talks with B&N’s Josh Perilo about his recent novel of an imagined near future, Walkaway, and the difference between a dystopia and a disaster. Then we hear from Will Schwalbe, talking with Miwa Messer about the lifetime of reading behind his book Books for Living: Some Thoughts on Reading, Reflecting, and Embracing Life.


Hubert Vernon Rudolph Clayton Irving Wilson Alva Anton Jeff Harley Timothy Curtis Cleveland Cecil Ollie Edmund Eli Wiley Marvin Ellis Espinoza—known to his friends as Hubert, Etc—was too old to be at that Communist party.


But after watching the breakdown of modern society, he really has nowhere left to be—except amongst the dregs of disaffected youth who party all night and heap scorn on the sheep they see on the morning commute. After falling in with Natalie, an ultra-rich heiress trying to escape the clutches of her repressive father, the two decide to give up fully on formal society—and walk away.


After all, now that anyone can design and print the basic necessities of life—food, clothing, shelter—from a computer, there seems to be little reason to toil within the system.


It’s still a dangerous world out there, the empty lands wrecked by climate change, dead cities hollowed out by industrial flight, shadows hiding predators animal and human alike. Still, when the initial pioneer walkaways flourish, more people join them. Then the walkaways discover the one thing the ultra-rich have never been able to buy: how to beat death. Now it’s war – a war that will turn the world upside down.


Fascinating, moving, and darkly humorous, Walkaway is a multi-generation SF thriller about the wrenching changes of the next hundred years…and the very human people who will live their consequences.

,

CryptogramFriday Squid Blogging: Baby Sea Otters Prefer Shrimp to Squid

At least, this one does.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBen Saunders’ solo crossing of Antarctica, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

A solo crossing of Antarctica. With chilling detail, Ben Saunders documents his journey across Antarctica as he attempts to complete the first successful solo, unsupported and unassisted crossing. The journey is a way of honoring his friend Henry Worsley, who died attempting a similar crossing last year. Battered by intense winds, Saunders writes of his experiences trekking through the hills, the cold, and the ice, the weight he carries, and even the moments he’s missing, as he wishes his dear friends a jolly and fun wedding day back home. (Watch Saunders’ TED Talk)

The dark side of AI. A chilling new video, “Slaughterbots,” gives viewers a glimpse into a dystopian future where people can be targeted and killed by strangers using autonomous weapons simply for having dissenting opinions. This viral video was the brainchild of TED speaker Stuart Russell and a coalition of AI researchers and advocacy organizations. The video warns viewers that while AI has the potential to solve many of our problems, the dangers of AI weapons must be addressed first. “We have an opportunity to prevent the future you just saw,” Stuart states at the end of the video, “but the window to act is closing fast.” (Watch Russell’s TED Talk)

Corruption investigators in paradise. Charmian Gooch and her colleagues at Global Witness have been poring over the Paradise Papers, a cache of 13.4 million files released by the International Consortium of Investigative Journalists that detail the secret world of offshore financial deals. With the 2014 TED Prize, Gooch wished to end anonymously owned companies, and the Paradise Papers show how this business structure can be used to nefarious ends. Check out Global Witness’ report on how the commodities company Glencore appears to have funneled $45 million to a notorious billionaire middleman in the Democratic Republic of the Congo to help them negotiate mining rights. And their look at how a US-based bank helped one of Russia’s richest oligarchs register a private jet, despite his company being on US sanctions lists. (Watch Gooch’s TED Talk)

A metric for measuring corporate vitality. Martin Reeves, director of the Henderson Institute at BCG, and his colleagues have taken his idea that strategies need strategies and expanded it into the creation of the Fortune Future 50, a categorization of companies based on more than financial data. Companies are divided into “leaders” and “challengers,” with the former having a market capitalization over $20 billion as of fiscal year 2016 and the latter including startups with a market capitalization below $20 billion. However, instead of focusing on rear-view analytics, BCG’s assessment uses artificial intelligence and natural language processing to review a company’s vitality, or their “capacity to explore new options, renew strategy, and grow sustainably,” according to a publication by Reeves and his collaborators. Since only 7% of companies that are market-share leaders are also profit leaders, the analysis can provide companies with a new metric to judge progress. (Watch Reeves’ TED Talk)

The boy who harnessed the wind — and the silver screen. William Kamkwamba’s story will soon reach the big screen via the upcoming film The Boy Who Harnessed the Wind. With no formal education, Kamkwamba built a windmill that powered his home in Malawi. He snuck into a library, deciphered physics on his own, and trusted his intuition that he had an idea he could execute. His determination ultimately saved his family from a deadly famine. (Watch Kamkwamba’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Krebs on SecurityFormer Botmaster, ‘Darkode’ Founder is CTO of Hacked Bitcoin Mining Firm ‘NiceHash’

On Dec. 6, 2017, approximately USD $52 million worth of Bitcoin mysteriously disappeared from the coffers of NiceHash, a Slovenian company that lets users sell their computing power to help others mine virtual currencies. As the investigation into the heist nears the end of its second week, many NiceHash users have expressed surprise to learn that the company’s chief technology officer recently served several years in prison for operating and reselling a massive botnet, and for creating and running “Darkode,” until recently the world’s most bustling English-language cybercrime forum.

In December 2013, NiceHash CTO Matjaž Škorjanc was sentenced to four years, ten months in prison for creating the malware that powered the ‘Mariposa‘ botnet. Spanish for “Butterfly,” Mariposa was a potent crime machine first spotted in 2008. Very soon after, Mariposa was estimated to have infected more than 1 million hacked computers — making it one of the largest botnets ever created.

An advertisement for the ButterFly Flooder, a crimeware product based on the ButterFly Bot.

ButterFly Bot, as it was more commonly known to users, was a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. The ButterFly Bot kit sold for prices ranging from $500 to $2,000.

Prior to his initial arrest in Slovenia on cybercrime charges in 2010, Škorjanc was best known to his associates as “Iserdo,” the administrator and founder of the exclusive cybercrime forum Darkode.

A message from Iserdo warning Butterfly Bot subscribers not to try to reverse his code.

On Darkode, Iserdo sold his Butterfly Bot to dozens of other members, who used it for a variety of illicit purposes, from stealing passwords and credit card numbers from infected machines to blasting spam emails and hijacking victim search results. Microsoft Windows PCs infected with the bot would then try to spread the disease over MSN Instant Messenger and peer-to-peer file sharing networks.

In July 2015, authorities in the United States and elsewhere conducted a global takedown of the Darkode crime forum, arresting several of its top members in the process. The U.S. Justice Department at the time said that out of 800 or so crime forums worldwide, Darkode represented “one of the gravest threats to the integrity of data on computers in the United States and around the world and was the most sophisticated English-speaking forum for criminal computer hackers in the world.”

Following Škorjanc’s arrest, Slovenian media reported that his mother Zdenka Škorjanc was accused of money laundering; prosecutors found that several thousand euros were sent to her bank account by her son. That case was dismissed in May of this year after prosecutors conceded she probably didn’t know how her son had obtained the money.

Matjaž Škorjanc did not respond to requests for comment. But local media reports state that he has vehemently denied any involvement in the disappearance of the NiceHash stash of Bitcoins.

In an interview with Slovenian news outlet Delo.si, the NiceHash CTO described the theft “as if his kid was kidnapped and his extremities would be cut off in front of his eyes.” A roughly-translated English version of that interview has been posted to Reddit.

According to media reports, the intruders were able to execute their heist after stealing the credentials of a user with administrator privileges at NiceHash. Less than an hour after breaking into the NiceHash servers, approximately 4,465 Bitcoins were transferred out of the company’s accounts.

NiceHash CTO Matjaž Škorjanc, as pictured on the front page of a recent edition of the Slovenian daily Delo.si

A source close to the investigation told KrebsOnSecurity that the NiceHash hackers used a virtual private network (VPN) connection with a Korean Internet address, although the source said Slovenian investigators were reluctant to say whether that meant South Korea or North Korea because they did not want to spook the perpetrators into further covering their tracks.

CNN, Bloomberg and a number of other Western media outlets reported this week that North Korean hackers have recently doubled down on efforts to steal, phish and extort Bitcoins as the price of the currency has surged in recent weeks.

“North Korean hackers targeted four different exchanges that trade bitcoin and other digital currencies in South Korea in July and August, sending malicious emails to employees, according to police,” CNN reported.

Bitcoin’s blockchain ledger system makes it easy to see when funds are moved, and NiceHash customers who lost money in the theft have been keeping a close eye on the Bitcoin payment address that received the stolen funds ever since. On Dec. 13, someone in control of that account began transferring the stolen bitcoins to other accounts, according to this transaction record.

The NiceHash theft occurred as the price of Bitcoin was skyrocketing to new highs. On January 1, 2017, a single Bitcoin was worth approximately $976. By December 6, the day of the NiceHash hack, the price had ballooned to $11,831 per Bitcoin.

Today, a single Bitcoin can be sold for more than $17,700, meaning whoever is responsible for the NiceHash hack has seen their loot increase in value by roughly $27 million in the nine days since the theft.
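For readers who want to sanity-check those figures, the arithmetic is simple enough to reproduce. A minimal Python sketch, using the approximate 4,465 BTC count and the prices quoted above, so the result is only a ballpark figure:

# Back-of-the-envelope check of the figures reported above; the coin count
# and both prices are approximations from the reporting, so the output is rough.
stolen_btc     = 4465      # approximate number of coins taken
price_at_theft = 11831     # USD per BTC on Dec. 6, 2017
price_now      = 17700     # USD per BTC roughly nine days later

value_at_theft = stolen_btc * price_at_theft   # about $52.8 million
value_now      = stolen_btc * price_now        # about $79.0 million
print(f"appreciation since the theft: ${value_now - value_at_theft:,}")
# -> appreciation since the theft: $26,205,085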

In a post on its homepage, NiceHash said it was in the final stages of re-launching the surrogate mining service.

“Your bitcoins were stolen and we are working with international law enforcement agencies to identify the attackers and recover the stolen funds. We understand it may take some time and we are working on a solution for all users that were affected.

“If you have any information about the attack, please email us at [email protected]. We are giving BTC rewards for the best information received. You can also join our community page about the attack on reddit.”

However, many followers of NiceHash’s Twitter account said they would not be returning to the service unless and until their stolen Bitcoins were returned.

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday, December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers, and was hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was both an opportunity to think back over the bank’s own 200-year legacy and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur”? Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself, but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The result: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” The only problem: kids today, he says, often don’t agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal, the program refunds that portion of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins says we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s first people. Indigenous Australian David Unaipon (1872-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Aboriginals are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first indigenous IT honors student at the University of Technology Sydney, and he makes the case that tech-savvy Aboriginals are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the first people — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai‘s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by the prejudice shown towards her by those who saw her as Asian. In school, she says, fellow students didn’t want to associate with her in classrooms; later, she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed outsiders, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as much as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue-collar positions or standard white-collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. “Prepare your family to be okay with change, as uncomfortable as it may be,” he says. “We’ll likely be switching careers far more frequently in the near future.”

Everything’s gonna be alright. After a quick break, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played a sweet, uplifting rock ballad in session two about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as high as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design every aspect of your future home, down to its price and environmental impact, and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and creates more sustainable dwellings; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and that’s just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says. “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best when everyone understands the leader’s objective as well as they understand their own role: not just their individual part to play, but how it fits into the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?


CryptogramTracking People Without GPS

Interesting research:

The trick in accurately tracking a person with this method is finding out what kind of activity they're performing. Whether they're walking, driving a car, or riding in a train or airplane, it's pretty easy to figure out when you know what you're looking for.

The sensors can determine how fast a person is traveling and what kind of movements they make. Moving at a slow pace in one direction indicates walking. Going a little bit quicker but turning at 90-degree angles means driving. Faster yet, we're in train or airplane territory. Those are easy to figure out based on speed and air pressure.
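To make that idea concrete, here is a toy Python sketch of this kind of rule-based classification; the function name, the speed thresholds and the right-angle-turn heuristic are all invented for illustration and are not taken from the research.

# Toy classifier in the spirit of the description above; thresholds are invented.
def classify_activity(avg_speed_mps, turn_angles_deg):
    right_angle_turns = sum(1 for a in turn_angles_deg if 80 <= abs(a) <= 100)
    if avg_speed_mps < 2.5:
        return "walking"
    if avg_speed_mps < 45 and right_angle_turns:
        return "driving"            # street grids force roughly 90-degree turns
    if avg_speed_mps < 90:
        return "train"
    return "airplane"

print(classify_activity(1.2, [5, -3]))          # walking
print(classify_activity(15.0, [90, -88, 91]))   # driving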

After the app determines what you're doing, it uses the information it collects from the sensors. The accelerometer relays your speed, the magnetometer tells your relation to true north, and the barometer offers up the air pressure around you and compares it to publicly available information. It checks in with The Weather Channel to compare air pressure data from the barometer to determine how far above sea level you are. Google Maps and data offered by the US Geological Survey Maps provide incredibly detailed elevation readings.
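The elevation step can be approximated with the standard hypsometric formula, which converts a barometer reading into metres above sea level given a reference sea-level pressure of the kind a weather service publishes. A minimal sketch, with an illustrative reading and the standard-atmosphere default:

# Rough altitude estimate from a barometer reading plus a reference
# sea-level pressure (both in hPa); the result is metres above sea level.
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(round(altitude_m(1000.0)))   # about 111 m for a 1000 hPa reading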

Once it has gathered all of this information and determined the mode of transportation you're currently taking, it can then begin to narrow down where you are. For flights, four algorithms begin to estimate the target's location and narrow down the possibilities until the error rate hits zero.

If you're driving, it can be even easier. The app knows the time zone you're in based on the information your phone has provided to it. It then accesses information from your barometer and magnetometer and compares it to information from publicly available maps and weather reports. After that, it keeps track of the turns you make. With each turn, the possible locations whittle down until it pinpoints exactly where you are.

To demonstrate how accurate it is, researchers did a test run in Philadelphia. It only took 12 turns before the app knew exactly where the car was.
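Conceptually, that turn-based narrowing is a candidate-elimination search: every plausible intersection-and-heading pair starts out as a hypothesis, and each observed turn discards the hypotheses whose street layout could not have produced it. The toy Python sketch below shows the idea on a tiny invented map; the map, the state names and the locate function are illustrative only.

# (intersection, heading) -> {observed turn: (next intersection, new heading)}
TOY_MAP = {
    ("A", "N"): {"R": ("B", "E")},
    ("B", "E"): {"L": ("C", "N"), "S": ("D", "E")},
    ("C", "N"): {"R": ("E", "E")},
    ("D", "E"): {"R": ("F", "S")},
    ("F", "S"): {"S": ("G", "S")},
}

def locate(observed_turns):
    candidates = set(TOY_MAP)            # every (intersection, heading) pair
    for turn in observed_turns:
        survivors = set()
        for state in candidates:
            move = TOY_MAP.get(state, {}).get(turn)
            if move is not None:         # states that cannot make this turn are dropped
                survivors.add(move)
        candidates = survivors
    return candidates

print(locate(["R", "L"]))                # {('C', 'N')} -- one candidate left

On a real city graph the same filtering converges quickly, because most intersections only permit a few of the possible turns.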

This is a good example of how powerful synthesizing information from disparate data sources can be. We spend too much time worrying about individual data collection systems, and not enough about the analysis techniques applied to the data those systems collect.

Research paper.

Worse Than FailureError'd: These are not the Security Questions You're Looking for

"If it didn't involve setting up my own access, I might've tried to find what would happen if I dared defy their labeling," Jameson T. wrote.

 

"I think that someone changed the last sentence in a hurry," writes George.

 

"Now I may not be able to read, or let alone type in Italian, but I bet if given this particular one, I could feel my way through it," Anatoly writes.

 

"Wow! The best rates on default text, guaranteed!" writes Peter G.

 

Thomas R. wrote, "Doing Cyber Monday properly takes some serious skills!"

 

"I'm unsure what's going on here. Is the service status page broken or is it telling me that the service is broken?" writes Neil H.

 

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet Linux AustraliaOpenSTEM: Celebration Time!

Here at OpenSTEM we have a saying “we have a resource on that” and we have yet to be caught out on that one! It is a festive time of year and if you’re looking for resources reflecting that theme, then here are some suggestions: Celebrations in Australia – a resource covering the occasions we […]

,

CryptogramSecurity Planner

Security Planner is a custom security advice tool from Citizen Lab. Answer a few questions, and it gives you a few simple things you can do to improve your security. It's not meant to be comprehensive, but instead to give people things they can actually do to immediately improve their security. I don't see it replacing any of the good security guides out there, but instead augmenting them.

The advice is peer reviewed, and the team behind Security Planner is committed to keeping it up to date.

Note: I am an advisor to this project.