Planet Russell


Planet Debian: Mike Gabriel: Résumé of our Edu Workshop in Kiel (26th - 29th January)

In the last week of January, the project IT-Zukunft Schule (Logo EDV-Systeme GmbH and DAS-NETZWERKTEAM) had visitors from Norway: Klaus Ade Johnstad and Linnea Skogtvedt from LinuxAvdelingen [1] came by to exchange insights, knowledge, technology and stories regarding IT services at schools in Norway and Northern Germany.

This was our schedule...

Tuesday

  • 3pm – Arrival of Klaus Ade and Linnea, meet up at LOGO with coffee and cake
  • 4pm – Planning the workshop, coming up with an agenda for the next two days (Klaus Ade, Andreas, Mike)
  • 5pm – Preparing OPSI demo sites (Mike, Linnea)
  • 8pm – Grünkohl and Rotkohl and ... at Traum GmbH, Kiel (Klaus Ade, Linnea, Andreas, Mike)

Wednesday

  • 8.30am – more work on the OPSI demo site (Mike, Linnea)
  • 10am – pfSense (esp. captive portal functionality), backup solutions (Klaus Ade, all)
  • 11am – ITZkS overlay packages, basic principles of Debian packaging (Mike, special guests: Torsten, Lucian, Benni)
  • 12pm-2pm – lunch break
  • 2pm – OPSI demonstration, discussion, foundation of the OpsiPackages project [2] (Mike)
  • 4pm – Puppet (Linnea)
  • 7pm – dinner time (eating in, Thai fast food :-) )
  • 8pm – Sepiida hacking (Mike, Linnea), customer care (Andreas, Klaus Ade)
  • 10:30pm – zZzZzZ time...

Thursday


Planet Debian: Orestis Ioannou: Using debsources API to determine the license of foo.bar

Following up on Matthieu's hack - A one-liner to catch'em all! - and the recent features of Debsources, I got the idea to modify the one-liner a bit in order to retrieve the license of foo.bar.

The script will calculate the SHA256 hash of the file and then query the Debsources API in order to retrieve the license of that particular file.

Save the following in a file named license-of and add it to your $PATH:

#!/bin/bash

# Look up the package that owns the given file, compute the file's SHA256
# checksum, and query the Debsources copyright API for the license of that file.
function license-of {
    readlink -f "$1" | xargs dpkg-query --search | awk -F ": " '{print $1}' | xargs apt-cache showsrc | grep-dctrl -s 'Package' -n '' | awk -v sha="$(sha256sum "$1" | awk '{ print $1 }')" -F " " '{print "https://sources.debian.net/copyright/api/sha256/?checksum="sha"&packagename="$1""}' | xargs curl -sS
}

CMD="$1"
license-of "${CMD}"

Then you can try something like:

    license-of /usr/lib/python2.7/dist-packages/pip/exceptions.py

Notes:

  • if the checksum is not found in the DB (compiled file, modified file, not part of any package), this will fail
  • if the debian/copyright file of the specific package is not machine-readable, then you are out of luck!
  • if there is more than one version of the package, you will get all the available information. If you only want results for testing, add "&suite=testing" after &packagename="$1" in the Debsources link (see the sketch below).
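
Based on the last note, here is a minimal standalone sketch of the same pipeline restricted to a single suite. The suite argument, the variable names and the head -n 1 shortcut are assumptions here, not part of the original one-liner:

#!/bin/bash
# Hypothetical variant of license-of that restricts the Debsources query
# to one suite (defaults to "testing"), as suggested in the note above.

file="$1"
suite="${2:-testing}"

# Binary package that owns the file
pkg="$(readlink -f "$file" | xargs dpkg-query --search | awk -F ": " '{print $1}')"

# Corresponding source package name (take the first record if several versions exist)
srcpkg="$(apt-cache showsrc "$pkg" | grep-dctrl -s Package -n '' | head -n 1)"

# SHA256 checksum of the file, as used by the Debsources copyright API
sha="$(sha256sum "$file" | awk '{print $1}')"

curl -sS "https://sources.debian.net/copyright/api/sha256/?checksum=${sha}&packagename=${srcpkg}&suite=${suite}"

Saved as, say, license-of-suite, it could be invoked as: license-of-suite /usr/lib/python2.7/dist-packages/pip/exceptions.py testing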

Kelvin Thomson: Intergenerational Equity – Intergenerational Report 2015

The real purpose of the Intergenerational Report 2015 was to try to justify the 2014 Budget cuts to pensions, education and health. The report made numerous misleading claims to try to convince us that we could not afford our present levels of commitment to older and younger people.

The first misleading claim is that labour force participation is going to fall and reduce per capita economic growth. The second misleading claim is that the costs of providing for an older population will increase significantly as a percentage of GDP over the next forty years. The third misleading claim is that in order to deal with these costs Australia must maintain high immigration, on the grounds that migrants tend to be younger than the average resident. The Intergenerational Report assumes Australia's population will rise from 23.8 million as of mid-2015 to 40 million in 2055, a massive two-thirds increase in just 40 years. The Report is completely inadequate in dealing with the numerous economic, social, and environmental consequences of such an increase.

In fact an analysis of the Report by Bob Birrell and Katherine Betts shows there is NO net change in per capita economic growth over the next 40 years. There is a slight fall in real per capita economic growth from declining labour force participation of 0.1 percentage points a year. But this is totally offset by an increase of 0.1 percentage points a year because the proportion of the population who are children will fall relative to those aged 15 plus. Any decline in labour force participation of those aged 15 plus is offset by the rising share of the population in this broad age group. ("The 2015 intergenerational Report: Misleading findings and hidden agendas", Bob Birrell and Katherine Betts, The Australian Population Research Institute, Research Report, July 2015). It is regularly the case that the people who want to use a scare campaign about population and workforce ageing to attack social security "accidentally" forget to take into account the ways in which population ageing makes life easier for governments and communities.

The Report is also misleading about rising medical and hospital costs. It is true that health expenditure is rising and will continue to do so. But the vast majority of this extra cost is due to the higher costs of providing health care for everyone, including the implementation of new technology. While Commonwealth spending per person is projected to increase by $3700 by 2054-55, $3100, or 84 per cent of this, can be ascribed to non-demographic causes. Ageing is only a minor factor. (Ibid). I don't accept that our spending on health, welfare and pensions is unsustainable. We spend a lower proportion of GDP on government funded age pensions than most OECD countries.

And what of the third claim that we need high migration to slow down population ageing? Bob Birrell and Katherine Betts have calculated from the data in the report that every extra 70,000 migrants up to the year 2055 only increases economic growth by a mere 0.06 per cent. And yet an extra 70,000 net overseas migration adds over four million people, and the Report says nothing about the extra costs on the community that this imposes. Indeed the Report works on the basis of population growth between 2015 and 2055 of nearly 16 million!

The infrastructure costs of such an increase are glossed over with the claim, which Bob Birrell and Katherine Betts describe as bizarre, that infrastructure costs "are not linked explicitly to demographic factors".

The Report also is misleading about the issue of productivity. It says high levels of migration MIGHT increase productivity because migrants may, on average, be better educated than the average Australian. No evidence is advanced to support this optimism, and given the extent of the rorting of migrant worker and overseas student programs it seems to me to be doubtful. In any event Ross Gittins has reached the opposite conclusion - that high migration lowers national productivity. Rapid population growth, through its effects on congestion and land and housing prices, acts as a drag on productivity.

The Report also papers over the impacts of rapid population growth on the environment, saying the "level of government spending on the environment is not directly linked with demographic factors". This is amateur hour. Population growth is a direct and indirect cause of environmental damage and should not be glossed over in this way. The fact that State and local governments have to do much of the heavy lifting in terms of maintaining water quality, and environmental repair, does not make these costs any less real.

The IGR finds that per capita income will be higher in real terms than it is today. Its modelling shows an increase, in constant dollars, from $64,400 to $117,300 by 2054-55. Bob Birrell and Katharine Betts say our descendants should themselves be able to comfortably deal with any extra costs that arise from providing for a larger cohort of older persons (Ibid, p.6). They say the IGR's own data show that the supposed ill-effects of ageing are trivial, and should be easily managed by future generations themselves. The IGR's lukewarm endorsement of massive immigration-driven population growth just about completely overlooks and fails to take into account the massive costs of such growth. (Ibid, p.v).

The problems which the Intergenerational Report 2015 says are looming are looming due to the policy failure of neo-classical economics, which has dominated economic policy-making since the 1970s. Its signature policies of economic growth, rapid population growth, globalisation, free trade, privatisation, and deregulation have progressively generated deficit and debt, de-industrialisation and unemployment, and a declining capacity to care for older Australians, younger Australians, and the environment.

After the war the Marshall Plan of 1947 paved the way for the re-industrialisation of Europe and a long period of economic prosperity. Lessons learned from the 1929 financial crash and the Great Depression saw an essentially tripartite political setting, with business, labour and government roughly in balance.

The free trade theory of comparative advantage espoused by David Ricardo in the 19th Century and the Washington Consensus in the 20th was not actually applied in western countries. Countries which actually applied the theory, for example Somalia with its comparative advantage in agriculture, continued to specialise in agriculture and remained poor. By contrast, Korea, through very heavy-handed industry policy, broke away from its comparative advantage in agriculture, and its GDP per capita skyrocketed, whereas Somalia's remained static.

Other Asian nations which industrialised were also successful. As Eric Reinert says, a nation with an inefficient manufacturing sector is much better off than a nation without any manufacturing sector at all.

But from the mid 1970s neo-liberal economic policies started de-industrialising countries both in the developed world and in the developing one. Free trade has undermined the manufacturing base of many western countries, including Australia. Neo-classical economics has failed to distinguish between the financial sector and real wealth creation, whereas Roosevelt's New Deal reined in the financial sector to become the servant rather than the master of capitalist development. Countries which did better during this period, such as Brazil, India, and China, did not embrace or implement neo-classical economics.

We need to return to the middle ground represented by initiatives such as the New Deal. Otherwise we will continue to de-industrialise, with the financial sector destroying value in the real economy, and business, labour and governments out of balance. Our debt and deficit will continue to grow, and we will be unable to meet the needs of older Australians and younger Australians, and we will continue to trash the environment in a futile quest for economic growth in the mistaken belief that this will solve our problems.


Rondam Ramblings: Is Spirituality Irrational?

Just published my first ever essay in a third-party venue: Is Spirituality Irrational? over at Intentional Insights.  It was written by invitation as a response to this piece by Rev. Caleb Pitkin.

TED: Want to give a tech demo or talk at TED2016? Apply to share your idea


Boaz Almog demonstrates how a superconductor disk can glide over a magnetic rail in a completely frictionless system. It’s just one of dozens of TED tech demos that have raised oohs and ahhs from the audience. Could your demo be next? Photo: James Duncan Davidson/TED

Attention all engineers, inventors, makers, technology professionals and enthusiasts! TED is seeking jaw-dropping tech demos or talks, to be shared at our 2016 conference in Vancouver. The ideal talk/demo will feature a new piece of technology that can be shown in just a few minutes from the stage, tell us about something brand-new, or offer a new lens on a tech topic. It can be something that might change the way we do things in the future, or change the way we think.

If it’s a demo you’re proposing, the tech you show us can be simple or complex, tiny or big, homemade or slick, weird or elegant. It can come from a company, a university lab or a garage. Regardless, it should be innovative and smart with the potential to make a positive impact on some corner of the world. For either a demo or a talk, we’ll want to know why you’re uniquely qualified to share the idea at TED.

Think you’ve got something that might fit? Apply and tell us more »

Below, a variety of TED tech talks and demos, to show you the wide range of ideas we are looking for. May they inspire you.

TED: Announcing our 2016 TED Prize winner: Satellite archaeologist Sarah Parcak


Sarah Parcak has a big idea on how we can protect ancient sites and, with them, our cultural history. Sign up to follow her 2016 TED Prize quest. Photo: Ryan Lash/TED

She’s best described as the modern-day Indiana Jones. Using infrared imagery from satellites, she identifies ancient sites lost in time. In Egypt, she helped locate 17 potential pyramids, plus 1,000 forgotten tombs and 3,100 unknown settlements. And that’s in addition to her discoveries throughout the Roman Empire.

Sarah Parcak uses 21st century technology to make the world’s invisible history visible again. That’s why TED is thrilled to announce her as the winner of the 2016 TED Prize.

Parcak has a bold, ambitious wish to help uncover and protect the world’s hidden cultural heritage, and to invite others into her work. On February 16, during the TED2016 conference, she will share this $1 million idea in a TED Talk and reveal her plan to make it a reality. Sign up for updates from our new TED Prize winner »

Sarah Parcak is a professor at the University of Alabama at Birmingham, where she founded the Laboratory for Global Observation. She, quite literally, wrote the textbook on satellite archaeology and gained international attention in 2011 when she satellite-mapped all of Egypt, identifying thousands of undiscovered sites. Parcak is now using satellite data to fight the looting happening at archaeological sites across the Middle East.

Her work is critical right now, as shown by ISIS’s recent takeover of the ancient city of Palmyra. They are destroying this history-rich site, “bit by bit,” said Parcak.

“The last four and half years have been horrific for archaeology. I’ve spent a lot of time, as have many of my colleagues, looking at the destruction,” she said. “This Prize is not about me. It’s about our field. It’s about the thousands of men and women around the world, particularly in the Middle East, who are defending and protecting sites.”

Parcak embraces the comparisons to Indiana Jones — her Twitter handle is @indyfromspace, and she proudly wears the signature hat — but she also stresses that the analogy isn’t perfect.

“Discoveries aren’t made by one person exploring by themselves,” she said. “And discoveries aren’t made overnight. People don’t see the thousands of hours that go into it.”

Her wish, however, stands to speed up the process of locating ancient sites. And because she is deeply connected to the TED community — Parcak is a TED Fellow, as well as an organizer of TEDxBirmingham — she is excited to see how TED’s global network of innovators and changemakers can help the effort.

“We can use this TED Prize to get the world involved,” she said.

The TED Prize is a $1 million grant, given annually to a bold leader with a wish to spark global change. Applications are accepted year-round, on a rolling basis. Nominate yourself or someone else »


TED: Meet the 2016 class of TED Fellows and Senior Fellows


We are thrilled to announce the new class of Fellows for TED2016. These 21 world-changers work at the very forefront of their fields and represent 12 countries around the world – including, for the first time in the program, Kyrgyzstan. This group includes a Hawaiian geneticist focused on establishing ethnic diversity in genome studies; an Iranian photographer capturing often undocumented youth culture in the Middle East; an Italian biomedical entrepreneur creating a revolutionary device to beat pancreatic cancer; a Ghanaian-American producer who created a viral web series hailed as Africa’s “Sex and the City;” a Canadian biohacker using low-cost, open source materials to grow human tissue; and many more!

Below, meet the new group of Fellows who will join us at TED2016, February 15-19 in Vancouver.


Nicole Amarteifio (Ghana + USA)
TV director / producer
Ghanaian-American creator, writer and director of “An African City,” a hit web series that follows five successful women from Ghana navigating 21st century life in Accra.


Samuel ‘Blitz the Ambassador’ Bazawule (Ghana)
Musician + filmmaker
Ghanaian artist using music and film to explore magical realism and how it influences modern African identity and aesthetic, globally.


Sanford Biggers (USA)
Interdisciplinary artist
Visual artist creating multimedia and musical performances which upend traditional narratives about far-ranging topics from hip hop to Buddhism to American history.



Sanford Biggers, “Everyday a Sunset Dies” (LKG), 2014, 78h x 78w in. Antique quilts, assorted fabrics, fabric treated acrylic, spray paint. Photo: Sanford Biggers


Prosanta Chakrabarty (USA)
Ichthyologist
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.


Keolu Fox (USA)
Geneticist + indigenous rights activist
Hawaiian geneticist exploring the links between human genetic variation and disease in underrepresented populations with the goal of eliminating health disparities.


Kiana Hayeri (Iran + Canada)
Photographer
Iranian-Canadian photographer exploring complex topics such as youth culture, migration and sexuality in Iran and Afghanistan, highlighting an often hidden side of life in the Middle East.


Laura Indolfi (Italy + USA)
Biomedical entrepreneur
Biomedical innovator revolutionizing cancer treatments with new technologies — including implantable devices for delivering drugs locally at the site of a tumor.


Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop Media, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and culture.



Azat Ruziev was one of the youngest students Kloop Journalism School ever had — he was just 14 when he started attending classes and publishing his first stories. He’s 18 today, and after covering politics for three years, he’s challenging himself in more complicated and longer formats including feature stories and mini-documentaries. Photo: Anna Lelik, Kloop Media


Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring black masculinity, the criminal justice system and family relationships in fiction, essays and documentary film. He published his debut novel The Residue Years in 2013 to critical acclaim.


Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Sexual Health Innovations, a nonprofit dedicated to creating technology to advance sexual health in the U.S. Her most recent initiative, Callisto, provides a platform for college students to confidentially report sexual assault.


Majala Mlagui (Kenya)
Gemologist + mining entrepreneur
Kenyan founder of Thamani Gems, which works with artisanal and small-scale gemstone miners in East Africa to create sustainable livelihoods through responsible mining, ethical sourcing and access to fair trade markets.


Hélène Morlon (France)
Biodiversity mathematician
French researcher using mathematics and computer codes to understand the history of life and evolution on our planet and examine the factors that affect rates of speciation, extinction and trait evolution.


Amanda Nguyen (USA)
Policymaker
Founder and president of Rise, a national nonprofit working with multiple state legislatures and the U.S. Congress to implement a Sexual Assault Survivor Bill of Rights.


Carrie Nugent (USA)
Asteroid hunter
Astronomer using NASA’s NEOWISE spacecraft to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.



Artist’s concept of NASA’s NEOWISE spacecraft. NEOWISE is an infrared telescope that orbits Earth, and Carrie Nugent works on the team that has used it to observe 128,000 asteroids and comets. Photo: NASA/JPL-Caltech


Nikhil Pahwa (India)
Digital rights advocate
Indian journalist and free speech and digital rights advocate, who co-founded the SavetheInternet.in coalition, which galvanized the support of over a million Indian citizens for net neutrality.


Andrew Pelling (Canada)
Scientist + biohacker
Canadian scientist disrupting traditional approaches by using low-cost, open source materials — like apples and LEGOs — for next generation medical innovations.


Madeline Sayet (USA)
Theater director + playwright
Director and playwright who explores cultural intersections through original theater productions — from Shakespeare reinterpretations to absurdist comedy — in an effort to highlight indigenous perspectives.


Sam Stranks (UK + Australia)
Solar energy researcher
Experimental physicist studying how light interacts with solar materials, pioneering discoveries in the field of low-cost, efficient solar cells made from a revolutionary material called hybrid perovskites.


Trevor Timm (USA)
Free speech advocate
Co-founder and executive director of Freedom of the Press Foundation, a non-profit organization that supports and defends journalism dedicated to transparency and accountability.


Michael Twitty (USA)
Culinary historian
Culinary historian, blogger and chef who uses food to ignite a conversation about cultural appropriation, African identity and American racial history.


Prof. Dr. Vanessa Wood, ETH Zurich, in her lab on Gloriastrasse. Photo: Tom Kawara for ETH Globe Zurich, 12.7.2011. Copyright kawara.com and ethz.ch
Vanessa Wood (Switzerland + USA)
Electrical engineer
Electrical engineer using nanomaterials to revolutionize energy systems — from solar cells to batteries.


We’re also excited to share our new class of Senior Fellows for TED2016. We honor our Senior Fellows with an additional two years of engagement in the TED community, offering continued support to their work while they in turn give back and mentor new Fellows and enrich the community as a whole. They embody the values of the TED Fellows program.



Stormtrooper: A portrait of a 12-year-old boy who hides his aspirations to be a ballet dancer from his friends. Photo: Uldus Bakhtiozina


Uldus Bakhtiozina. TED 2014, The Next Chapter, March 17 - 21, 2014, Vancouver Conference Center, Vancouver, Canada. Photo by: Bret Hartman
Uldus Bakhtiozina (Russia)
Photographer + visual artist
Russian art photographer whose elaborately staged, costumed, surreal portraits are dedicated to Russian folklore, explore ancient pagan archetypes, femininity and magic, often subverting Russian cultural stereotypes.


Benedetta Berti (Israel + Italy)
Conflict and security researcher + author
Policy consultant studying armed groups and internal conflicts, analyzing the impact of insecurity on civilians and working to build more peaceful communities.


Steve Boyes (South Africa)
Conservation biologist
South African conservation biologist who explores and studies remote wildernesses in Africa to protect and restore them.


Kitra Cahana (USA)
Documentary filmmaker + photographer
Canadian documentary filmmaker and photographer who embeds herself in communities, often for months at a time, creating in-depth portrayals of her subjects.


Ryan Holladay. Photo: Ryan Lash
Ryan Holladay (USA)
Musical artist
In collaboration with his younger brother, Ryan combines music and technology to create site-specific sound installations, interactive concerts and GPS-based compositions for sites around the world.



Ryan Holladay and his brother, Hays, perform at the Sweetlife Festival wearing masks of one another’s faces, designed by Kashuo Bennett. Photo: Margot MacDonald


David Hertz (Brazil)
Chef + social entrepreneur
Brazilian chef and founder of Gastromotiva, which utilizes the transformative power of food and gastronomy to foster inclusive growth and social integration in Latin America.


Michele Koppes (USA)
Glaciologist
Glaciologist, geomorphologist and science communicator investigating the way glaciers and landscapes are responding to rapid climate change.


Jimmy Lin (USA)
Geneticist
Geneticist pioneering early cancer detection techniques and founder of the Rare Genomics Institute, which helps patients crowdsource funds and genomes to accelerate research of their rare genetic diseases.


Joshua Roman (USA)
Cellist
Internationally recognized cellist and composer, frequently collaborating with genre-spanning musicians and creating original compositions for new audiences.


Bahia Shehab (Egypt + Lebanon)
Artist + creative director + Islamic art historian
Lebanese-Egyptian artist, designer and Islamic art historian studying ancient Arabic script and visual culture to solve modern-day issues.


Planet Debian: Ingo Juergensmann: rpcbind listening on all interfaces

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services that run on my machines to listen only on those addresses/interfaces that are needed to fulfill their task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). So, the rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

Ok, although there is neither a /etc/default/rpcbind.conf nor a /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
#127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After a /etc/init.d/rpcbind restart, checking with netstat whether everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen on 192.168.1.254 (and localhost, as described by the man page), rpcbind is still listening on all addresses. A quick check whether the process is using the correct options:

root     30777  0.0  0.0  37228  2360 ?        Ss   16:11   0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad, I'm not the only one who has experienced this problem. That Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, but I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or whether I should label it as a security bug, because it opens a service to the public that can be abused for amplification attacks, even though you might have configured rpcbind to listen only on internal addresses.

 

Planet Debian: Niels Thykier: Performance tuning of lintian, take 3

About 7 months ago, I wrote about how we had improved Lintian’s performance. In 2.5.41, we are doing another memory reduction, where we primarily reduce the memory consumption of data about ELF binaries.  Like previously, the memory reductions follow the “less is more” pattern.

My initial test subject was linux-image-4.4.0-trunk-rt-686-pae_4.4-1~exp1_i386.deb. It had a somewhat unique property that the ELF data made up a little over half the cache.

  • We do away with a lot of unnecessary default values [f4c57bb, 470875f]
    • That removed about ~3MB (out of 10.56MB) of that ELF data cache
  • Discard section information we do not use [3fd98d9]
    • This reduced the ELF data cache to 2MB (down from 7MB).
  • Then we stop caching the output of file(1) twice [7c2bee4]
    • While a fairly modest reduction (only 0.80MB out of 16MB total), it also affects packages without ELF binaries.

At this point, we had reduced the total memory usage from 18.35MB to 8.92MB (the ELF data going from 10.56MB to 1.98MB)[1]. At this point, I figured that I was happy with the improvement and discarded my test subject.

While impressive, the test subject was unsurprisingly a special case.  The improvement in “regular” packages[2] (with ELF binaries) was closer to 8% in total.  Not being satisfied with that, I pulled one more trick.

  • Keep only “UND” and “.text” symbols [2b21621]
    • This brought coreutils (just the lone deb) another 10% memory reduction in total.

In the grand total, coreutils 8.24-1 amd64 went from 4.09MB to 3.48MB.  The ELF data cache went from 3.38MB to 2.84MB.  Similarly, libreoffice/4.2.5-1 (including its ~170 binaries) has also seen an 8.5% reduction in total cache size[3] and is now down to 260.48MB (from 284.83MB).

 

[1] If you are wondering why, in 3fd98d9, I wrote “The total “cache” memory usage is approaching 1/3 of the original for that package”, then you are not alone.  I am not sure myself any more, but it seems obviously wrong.

[2] FTR: The sample size of “regular packages” is 2 in this case, one of them being coreutils…

[3] Admittedly, since “take 2” and not since 2.5.40.2 like the rest.


Filed under: Debian, Lintian

Cryptogram: Data and Goliath Published in Paperback

Today, Data and Goliath is being published in paperback.

Everyone tells me that the paperback version sells better than the hardcover, even though it's a year later. I can't really imagine that there are tens of thousands of people who wouldn't spend $28 on a hardcover but are happy to spend $18 on the paperback, but we'll see. (Amazon has the hardcover for $19, the paperback for $11.70, and the Kindle edition for $14.60, plus shipping, if any. I am still selling signed hardcovers for $28 including domestic shipping -- more for international.)

I got a box of paperbacks from my publisher last week. They look good. Not as good as the hardcover, but good for a trade paperback.

TED: Welcome to TED2017: The Future You


We’re thrilled to announce the opening of registration for TED2017: The Future You. In this exciting week, April 24–28, 2017, a program of more than 70 speakers will explore deep scientific and psychological insights, the latest medical advances, the neatest inventions, all mixed with a dose of ancient wisdom, to ask the questions about ourselves that we often don’t have time for.

Most TEDs are focused on the world at large and the ideas that animate it. But for 2017, we want to make it more explicitly personal, to invite our speakers to help each of us build a toolkit for personal learning, growth, empowerment.

And we’re not talking soft-edged self-help woo. We’re talking about questions like:

How is it possible to give my reflective self power over my instinctive self?

Can I use technology to limit my addiction to technology?

Which inventions will have a big impact on me over the next few years?

How can I become a transformational leader?

What are the medical discoveries I absolutely need to know about?

What skills can my kids build that are future-proof?

How do we design an environment that is nurturing, human, satisfying, exciting?

What does success look like, without self at the center?

Who am I?

TED2017: The Future You takes place April 24–28, 2017, in Vancouver. You’re invited to apply now to attend.

What is the TED Conference?

Every year, TED hosts an annual conference — it’s where a lot of our free TED Talks come from. We invite speakers from around the world to join a five-day gathering to share ideas, conversations and insights with an audience of about 1,400 people.

And we’d love to encourage you to apply to join the audience at TED2017. There are many ways to attend TED, from winning one of our prestigious TED Fellows positions, which comes with free attendance, to attending at a standard, Donor, or Patron level, or attending as an active member of one of TED’s volunteer communities, such as the TEDx program. While the cost to attend can be significant, more than half of the fee is tax-deductible, as a donation to the Sapling Foundation, which owns TED and supports our mission of sharing ideas freely to the world in multiple languages and on multiple platforms.

You may have heard you need to be invited to come to TED. Well, yes and no. If you’d like to be invited — please apply! Let us know you’re out there and interested in attending. We’re always looking for fascinating new people to add to our global, multi-generational and every-year-more-diverse audience.

Want to hear about all our new conferences, live events and announcements like the TED Prize winner? Sign up for our community list for very occasional (big) news.


Planet Debian: Joachim Breitner: Protecting static content with mod_rewrite

For fourteen years, I have been photographing digitally and putting the pictures on my webpage. Back then, online privacy was not a big deal, but things have changed, and I had to at least mildly protect the innocent. In particular, I wanted to prevent search engines from accessing some of my pictures.

As I did not want my friends and family to have to create an account and remember a password, I set up an OpenID based scheme five years ago. This way, they could use any of their OpenID enabled accounts, e.g. their Google Mail account, to log in, without disclosing any data to me. As my photo album consists of just static files, I created two copies on the server: the real one with everything, and a bunch of symbolic links representing the publicly visible parts. I then used mod_auth_openid to prevent access to the protected files unless the users logged in. I never got around to actually limiting who could log in, so strangers were still able to see all photos, but at least search engine spiders were locked out.

But, very unfortunately, OpenID never really caught on, Google even stopped being a provider, and other promising decentralized authentication schemes like Mozilla Persona are also being phased out. So I needed an alternative.

A very simple scheme would be a single password that my friends and family can get from me and that unlocks the pictures. I could have done that using HTTP Auth, but that is not very user-friendly, and the login does not persist (at least not without the help of the browser). Instead, I wanted something that involves a simple HTTP form. But I also wanted to avoid server-side programming, for performance and security reasons. I love serving static files whenever it is feasible.

Then I found that mod_rewrite, Apache’s all-around-tool for URL rewriting and request mangling, supports reading and writing cookies! So I came up with a scheme that implements the whole login logic in the Apache server configuration. I’d like to describe this setup here, in case someone finds it inspiring.

I created a login.html with a simple HTML form:

<form method="GET" action="/bilder/login.html">
 <div style="text-align:center">
  <input name="password" placeholder="Password" />
  <button type="submit">Sign-In</button>
 </div>
</form>

It sends the user to the same page again, putting the password into the query string, hence the method="GET" – mod_rewrite unfortunately cannot read the parameters of a POST request.

The Apache configuration is as follows:

RewriteMap public "dbm:/var/www/joachim-breitner.de/bilder/publicfiles.dbm"
<Directory /var/www/joachim-breitner.de/bilder>
 RewriteEngine On

 # This is a GET request, trying to set a password.
 RewriteCond %{QUERY_STRING} password=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R,QSD,CO=bilderhp:correcthorsebatterystaple:www.joachim-breitner.de:2000000:/bilder]

 # This is a GET request, trying to set a wrong password.
 RewriteCond %{QUERY_STRING} password=
 RewriteRule ^login.html /bilder/notloggedin.html [L,R,QSD]

 # No point in logging in if there is already the right password
 RewriteCond %{HTTP:Cookie} bilderhp=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R]

 # If protected file is requested, check for cookie.
 # If no cookie present, redirect pictures to replacement picture
 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.*\.(png|jpg)$ /bilder/pleaselogin.png [L]

 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.+$ /bilder/login.html [L,R]
</Directory>

The publicfiles.dbm file is generated from a text file with lines like

login.html.en 1
login.html.de 1
pleaselogin.png 1
thumbs/20030920165701_thumb.jpg 1
thumbs/20080813225123_thumb.jpg 1
...

using

/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm

and whitelists all files that are visible without login. Make sure it contains the login page, otherwise you’ll get a redirect circle.
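
For completeness, here is a hypothetical way to regenerate publicfiles.txt, assuming the publicly visible variant of the album is a parallel tree of symbolic links (here called public/) below the album root; the directory name and layout are assumptions and may differ from the actual setup:

#!/bin/bash
# Hypothetical helper: rebuild the whitelist from a tree of symlinks
# (public/) that mirrors the publicly visible part of the album.
cd /var/www/joachim-breitner.de/bilder || exit 1

{
  # The login machinery itself must always be whitelisted,
  # otherwise the rewrite rules produce a redirect circle.
  echo "login.html.en 1"
  echo "login.html.de 1"
  echo "pleaselogin.png 1"
  # Every symlinked file in the public tree is visible without login.
  find public/ -type l -printf '%P 1\n'
} > publicfiles.txt

# Convert the text file into the dbm map used by RewriteMap.
/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm

The dbm map should be picked up by mod_rewrite when its modification time changes, so regenerating it whenever the public selection changes should not require an Apache restart.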

The other directives in the above configuration fulfill these tasks:

  • If the password (correcthorsebatterystaple) is in the query string, the server redirects the user to a logged-in-page that tells him that the login was successful and instructs him to reload the photo album. It also sets a cookie that will last very long -- after all, I want this to be convenient for my visitors. The query string parsing is not very strict (e.g. a password of correcthorsebatterystaplexkcdrules would also work), but that’s ok.
  • The next request detects an attempt to set a password. It must be wrong (otherwise the first rule would have matched), so we redirect the user to a variant of the login page that tells him so.
  • If the user tries to access the login page with a valid cookie, just log him in.
  • The next two rules implement the actual protection. If there is no valid cookie and the accessed file is not whitelisted, then access is forbidden. For requests to images, we do an internal redirect to a placeholder image, while for everything else we redirect the user to the login page.

And that’s it! No resource-hogging web frameworks, no security-dubious scripting languages, and a dead-simple way to authenticate.

Oh, and if you believe you know me well enough to be allowed to see all photos: The real password is not correcthorsebatterystaple; just ask me what it is.

Planet Debian: Lunar: Reproducible builds: week 41 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

After remarks from Guillem Jover, Lunar updated his patch adding generation of .buildinfo files in dpkg.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: dracut, ent, gdcm, guilt, lazarus, magit, matita, resource-agents, rurple-ng, shadow, shorewall-doc, udiskie.

The following packages became reproducible after getting fixed:

  • disque/1.0~rc1-5 by Chris Lamb, noticed by Reiner Herrmann.
  • dlm/4.0.4-2 by Ferenc Wágner.
  • drbd-utils/8.9.6-1 by Apollon Oikonomopoulos.
  • java-common/0.54 by Emmanuel Bourg.
  • libjibx1.2-java/1.2.6-1 by Emmanuel Bourg.
  • libzstd/0.4.7-1 by Kevin Murray.
  • python-releases/1.0.0-1 by Jan Dittberner.
  • redis/2:3.0.7-2 by Chris Lamb, noticed by Reiner Herrmann.
  • tetex-brev/4.22.github.20140417-3 by Petter Reinholdtsen.

Some uploads fixed some reproducibility issues, but not all of them:

  • anarchism/14.0-4 by Holger Levsen.
  • hhvm/3.11.1+dfsg-1 by Faidon Liambotis.
  • netty/1:4.0.34-1 by Emmanuel Bourg.

Patches submitted which have not made their way to the archive yet:

  • #813309 on lapack by Reiner Herrmann: removes the test log and sorts the files packed into the static library locale-independently.
  • #813345 on elastix by akira: suggest to use the $datetime placeholder in Doxygen footer.
  • #813892 on dietlibc by Reiner Herrmann: remove gzip headers, sort md5sums file, and sort object files linked in static libraries.
  • #813912 on git by Reiner Herrmann: remove timestamps from documentation generated with asciidoc, remove gzip headers, and sort md5sums and tclIndex files.

reproducible.debian.net

For the first time, we've reached more than 20,000 packages with reproducible builds for sid on amd64 with our current test framework.

Vagrant Cascadian has set up another test system for armhf, enabling four more builder jobs to be added to Jenkins. (h01ger)

Package reviews

233 reviews have been removed, 111 added and 86 updated in the previous week.

36 new FTBFS bugs were reported by Chris Lamb and Alastair McKinstry.

New issue: timestamps_in_manpages_generated_by_yat2m. The description for the blacklisted_on_jenkins issue has been improved. Some packages are also now tagged with blacklisted_on_jenkins_armhf_only.

Misc.

Steven Chamberlain gave an update on the status of FreeBSD and variants after the BSD devroom at FOSDEM’16. He also discussed how jails can be used for easier and faster reproducibility tests.

The video for h01ger's talk in the main track of FOSDEM’16 about the reproducible ecosystem is now available.

Krebs on Security: IoT Reality: Smart Devices, Dumb Defaults

Before purchasing an “Internet of things” (IoT) device — a thermostat, camera or appliance made to be remotely accessed and/or controlled over the Internet — consider whether you can realistically care for and feed the security needs of yet another IoT thing. After all, there is a good chance your newly adopted IoT puppy will be:

-chewing holes in your network defenses;
-gnawing open new critical security weaknesses;
-bred by a vendor that seldom and belatedly patches;
-tough to wrangle down and patch

In April 2014, researchers at Cisco alerted HVAC vendor Trane about three separate critical vulnerabilities in their ComfortLink II line of Internet-connected thermostats. These thermostats feature large color LCD screens and a Busybox-based computer that connects directly to your wireless network, allowing the device to display not just the temperature in your home but also personal photo collections, the local weather forecast, and live weather radar maps, among other things.


Trane ComfortLink II thermostat.

Cisco researchers found that the ComfortLink devices allow attackers to gain remote access and also use these devices as a jumping off point to access the rest of a user’s network. Trane has not yet responded to requests for comment.

One big problem is that the ComfortLink thermostats come with credentials that have hardcoded passwords, Cisco found. By default, the accounts can be used to remotely log in to the system over “SSH,” an encrypted communications tunnel that many users allow through their firewall.

The two other bugs Cisco reported to Trane would allow attackers to install their own malicious software on vulnerable Trane devices, and use those systems to maintain a persistent presence on the victim’s local network.

On January 26, 2016, Trane patched the more serious of the flaws (the hardcoded credentials). According to Cisco, Trane patched the other two bugs as part of a standard update released back in May 2015, but apparently without providing customers any indication that the update was critical to their protection efforts.

What does this mean for the average user?

“Compromising IoT devices allow unfettered access through the network to any other devices on the network,” said Craig Williams, security outreach manager at Cisco. “To make matters worse almost no one has access to their thermostat at an [operating system] layer to notice that it has been compromised. No one wakes up and thinks, ‘Hey, it’s time to update my thermostats firmware.’ Typically once someone compromises these devices they will stay compromised until replaced. Basically it gives an attacker a perfect foothold to move laterally through a network.”

Hidden accounts and insecure defaults are not unusual for IoT devices. What’s more, patching vulnerable devices can be complicated, if not impossible, for the average user or for those who are not technically savvy. Trane’s instructions for applying the latest update are here.

“For organizations that maintain large amounts of IoT devices on their network, there may not be a way to update a device that scales, creating a nightmare scenario,” Williams wrote in an email explaining the research. “I suspect as we start seeing more IoT devices that require security updates this is going to become a common problem as the lifetime of IoT devices greatly exceed what would be thought of as the typical software lifetime (2 years vs 10 years).”

If these IoT vulnerabilities sound like something straight out of a Hollywood hacker movie script, that’s not far from the truth. In the first season of the outstanding television series Mr. Robot, the main character [SPOILER ALERT] plots to destroy data on backup tapes stored at an Iron Mountain facility by exploiting a vulnerability in an HVAC system to raise the ambient temperature at the targeted facility.

Cisco’s writeup on its findings is here; it includes a link to a new Metasploit module the researchers developed to help system administrators find and secure exploitable systems on a network. It also can be used by bad guys to exploit vulnerable systems, so if you use one of these ComfortLink systems, consider updating soon before this turns into a Trane wreck (sorry, couldn’t help it).

Racialicious: White entitlement is white supremacy

 

Some years ago I decided I was done with certain kinds of “race writing.” Specifically, the kinds that involved responding at length to arguments so transparently shallow, ignorant, and self-serving in their racism that refuting them would only wrongly imply that such arguments raise questions that legitimately merit debate/response.

White Hollywood, apparently smarting at the thought that the world doesn’t naturally revolve around white people, has given us multiple examples in recent weeks of just such arguments:

  • Charlotte Rampling (who is English) calling criticisms of the overwhelming whiteness of this year’s Oscar nominations, and subsequent calls for a boycott of the Oscars, “racism against white people”
  • Michael Caine (also English) telling Black actors to be more patient about decades of systematic exclusion from acting roles, from directing and producing opportunities, from industry accolades, and from the tables and boardrooms where it happens. After all, even Michael Caine—as Michael Caine helpfully reminds us—had to wait “years and years” for his Oscars. So Black actors, Michael Caine concludes, should hang tight and wait patiently, like Michael Caine did. (Fun trivia note: not only does Caine have two Oscars, he’s also one of two actors—the other being fellow white dude Jack Nicholson—to be nominated for an Oscar in every decade from the 1960s to the 2000s. But he had to wait years and YEARS to win, y’all.)
  • Rampling and Caine tilting at the windmills of nonexistent racial quotas, both objecting that it would be wrong to give mediocre performances by Black actors awards just because the actors are Black—an argument literally no one has made in their critique of the Oscars.
  • Julie Delpy (French) saying the hardest thing in Hollywood is being a woman and that she wishes she were “African American” because, apparently, “women” get heated backlash for speaking up that “African Americans” do not. No one in Julie Delpy’s world, which is apparently another planet, ever lashes out at Black people in the industry for calling out discrimination, which makes all those stories about white actors miffed over Black actors’ criticisms of industry racism kind of awkward, no? And apparently it’s also impossible to be a woman and an “African American” at the same time. Guess I and every other Black woman missed the memo that we’re unicorns.
  • I could go on: add Kristen Stewart and the Coen Brothers to the list of white Hollywood elites who have put their apparently racist feet in their mouths.

These comments have a few things in common. For one, as Racialicious’s own Kendra James notes, they’re all statements from white actors who are not American, on racism in Hollywood and the Academy of Motion Picture Sciences, both of which are, hmm, American. We can maybe extrapolate something from this about how global investment in whiteness and white supremacy transcends national borders, making white people who literally don’t know the culture they are talking about feel entitled—compelled, even—to weigh in in defense of their white brethren across the water, and to step in to impress on Black Americans our failure, even inability, to properly understand our own experiences and histories in our own country.

They’re also all blatantly solipsistic. Delpy, in a few words, and in the name of speaking up for the category of “woman,” renders Black women as mythological, nonexistent beings. In a few words, Caine shows he feels his individual experience as a white man qualifies him to lecture a whole race of people on what we should put up with. A whole race of people who, not merely in Caine’s lifetime but during his career as an actor, were by law barred from professional opportunities available to Caine and his fellow white actors, and were and still are excluded by custom from opportunities and accolades that white actors can reasonably hope for. Rampling complains of “racism” against her and her fellow white people even as she’s one of a sea of Oscar nominees who are almost nothing but white.

They’re blatantly, completely counterfactual: Delpy claims to wish to be Black in a Hollywood whose admittedly scant recognition of female excellence goes almost exclusively to white women. In an industry where the stakes for Black actresses who speak up about misogynoir, the toxic nexus of misogyny and anti-Black racism, are more dire than any white actress could imagine or fathom. It is the height of absurdity to complain about “reverse racism” when for two years only white actors have been nominated for some of the highest acting awards in the world, when barely anyone who doesn’t look like Charlotte Rampling has been nominated for, much less won, such recognition in the 87-year history of the Oscars. It is riiiiii-fucking-diculous for a white actor like Michael Caine to compare waiting in his individual professional career for accolades to whole categories of people being passed over for industry recognition for the better part of a century.

I won’t call these statements “stupid”; intelligence has nothing to do with this. I will call them laughable. They ought to be, at least. These are patently ridiculous statements utterly unhinged from anything resembling reality.

So I decided, rather than unpacking such statements at length, my response would be merely to point, and laugh. And laugh. And laugh.

I decided that rather than get into the nitty gritty of why these people are wrong as fuck, I would instead stress how their comments show how deeply entrenched white supremacy is. How much white people think they are *owed* cultural, structural, and social supremacy over other groups.

Because here’s the thing. The comments from Rampling, Caine, Delpy—just a few recent examples of widespread sentiment in the white Western world—are merely veiled ways of saying white people’s rightful place is at the top of the heap and no one should dare question it.

Really, can you get more arrogant and supremacist than getting offended just by the *idea* that white people shouldn’t get all the nominations?

Can you get more arrogant and supremacist than claiming the reason not only Black people, but people of color across the board are barely represented among Oscar nominees and similarly elite ranks, is that white actors and directors and producers and writers are just the best, pretty much all the time?

Think about it. These folks are upset, angry even, over statements so mild they don’t—shouldn’t—even warrant the label of critique. This should be common sense: professional and artistic awards shouldn’t go only to white people.

That’s it. Awards shouldn’t go only to white people. That should not be the regular state of things. This is what amounts to a controversial, questionable, even “racist” statement in a white supremacist world.

These folks are upset that other people DARE to even think this, much less voice it and demand that it be acted on.

These folks are upset that people DARE to say when a room (literal, figurative) is all white people, this is not the result of natural “meritocracy.” It’s artificial—just one result of a centuries-long process of deliberate, systematic exclusion and oppression of people of color. It shows that equal opportunity and access *does not exist* for the 33% of Americans who are not white.

So this is the deal. If you are mad, offended, feel DISCRIMINATED AGAINST simply by having to hear—not actually confront, not actually act on—the IDEA that white people shouldn’t own or get everything? If you feel something that is rightfully yours is being threatened or taken away simply by the expression of the thought that all the nice things shouldn’t go exclusively to white people (and perhaps the occasional token or exceptional brown person)?

If this is you? I will not try to persuade you to think or feel otherwise. I will not engage with your ridiculous “arguments” about why a status quo of white supremacy is just the way things should be.

I will not validate what is essentially an assumption that white people are the bar for excellence. I will not indulge demands that Black people and other people of color prove that we can be as good as white people.

I will not participate in a debate that asks that I “prove” that white people aren’t the cream of the crop, rather than questioning why, when the world has at least twice as many people of color in it for every white person, so much wealth and power—in the U.S. and globally—is hoarded by and for whiteness.

I will not pretend that you are questioning anything less than the humanity of Black people and other people of color—because let’s be real, that’s what we’re ultimately talking about, human capacity and potential and therefore our human-ness itself.

I will not argue with you about whether my humanity is equal to yours. Nope.

Instead, I will point and laugh.

Instead, I will note that you are catching feelings and pitching a fit because you think you and your fellow white people are entitled to everything.

Instead, I will point, laugh, and note the absurdity, the solipsism, the naked belief in white superiority that you are clumsily trying to cloak as logic and reason.

And then I will move on.

Because ridicule and contempt? Is all this mindset—that the natural state of things is white entitlement, for you and yours to have more than the rest of the world—deserves.

The post White entitlement is white supremacy appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesBefore Daybreak, Mardi Gras: The Skull & Bones Gang

On Mardi Gras mornings before dawn, members of the North Side Skull and Bones Gang prowl the streets. It's a 200-year-old tradition belonging to African American residents of the city; they first prowled in 1819.

Members of the gang dress up like ominous skeletons. At nola.com, Sharon Litwin writes:

Because the origins of the Gang were with working class folk who had little money for silks and satins, the skeleton suits are made from everyday items and simple fabrics. Baling wire (to construct the shape of the head) along with flour and water to bind together old newspapers, create the head itself.

Their message, says North Side Chief Bruce “Sunpie” Barnes, is to “warn [people] away from violence” — especially young people, and especially gun and domestic violence. He explains:

The bone gang represents people… waking people up about what they’re doing in life, if they don’t change their lifestyle. You know. We’re like the dead angels. We let you know, if you keep doing what you’re doing, you’re gonna be with us.

Up before most residents, members of the gang cause a ruckus. They sing songs, bang on doors, and play-threaten their neighbors.

Here’s some footage:

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

RacialiciousNo Hot Takes Here: Enjoy Beyoncé’s Super Bowl-Slaying Show One More Time

We’re all bracing for an onslaught of analyses surrounding Beyoncé’s “Formation” this week. But at least for right now, via Mic, here’s another chance to see her and Bruno Mars do their thing at Sunday’s Super Bowl halftime show with all of that Coldplay stuff cut out.

And of course, we’d be remiss if we didn’t include WNBC-TV’s … creative recap of the event:

The post No Hot Takes Here: Enjoy Beyoncé’s Super Bowl-Slaying Show One More Time appeared first on Racialicious - the intersection of race and pop culture.

CryptogramExploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google's crowdsourced system for identifying businesses on their maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company's version of the Yellow Pages, and Map Maker, which is Google's crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google's systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

"This is what you get when you rely on crowdsourcing for all your 'up to date' and 'relevant' local business content," Mr. Seely said. "You get people who contribute meaningful content, and you get people who abuse the system."

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What's interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a "Report a Problem" tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a "locksmith Phoenix" search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

"We're in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who've been a thorn in the Internet's side for over a decade," a Google spokesman wrote in an email. "As spammers change their techniques, we're continually working on new, better ways to keep them off Google Search and Maps. There's work to do, and we want to keep doing better."

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google's critics say, the change to local results will not rise even to the level of superficial.

And that's Google's best option, really. It's not the one losing money from these scammers, so it's not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it's just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Worse Than FailureJust Check The Notepad


As the last deployment script ran to completion, Michael sat back in his chair and let out a small sigh of relief.

He knew nothing could go wrong. After all, he'd rehearsed every step of the process hundreds of times during the last six months. But deploying his projects to production always made him as nervous as a high schooler before a drama club premiere.

However, the rollout was nearly complete, and Initech's Hyderabad branch was all set up and ready to enjoy their new, polished and improved inventory management system. All that was left was to log into the application for the last sanity check and celebrate the success with his team.

Michael opened the browser, typed in the server address and hit Enter, expecting to see the familiar portal.

Instead, a standard, white-and-red ASP.NET error page stared back at him.

A few minutes of digging around the Hyderabad servers exposed the problem. While all the local applications were configured to use Windows logins to connect to the database, the overseas branch used simple SQL Server authentication with a constant login and password. After a bit of back and forth, Michael was armed with contact information to Hyderabad's database administrator, and set off to resolve the issue.

Hi, it's Michael from Initech Seattle, he typed into his Lync window. Can I get a database username and password for the inventory management application?

A few minutes later, the reply arrived. Expecting just the username and password, Michael was ready to say thanks and get back to fixing his problem ... but the DBA had a different idea.

Sure, just check the Notepad on the server.

You mean a text file? Michael responded, somewhat confused.

Yes, it's open now, just check your screen.

Michael went through all his open Remote Desktop sessions, but none of them had a single text file open. The only programs running were the ones he'd started himself.

I don't see it, can you tell me the filename and the directory? He decided to play along. After all, maybe there was some security-related reason why the DBA couldn't send the credentials over IM.

It's in the Notepad, the DBA responded. Just check it.

No, it's not! Michael was growing a bit irritated. He took the screenshots of both the database and application servers' desktops and attached them to the message, hoping that would finally convince the DBA he wasn't going crazy.

After a long while, the DBA finally wrote back. You're on Remote Desktop. That's different. Can you just use TeamViewer?

"Are you serious?" Michael asked out loud, but the will to resolve the issue as soon as possible overcame the urge to chew out the clueless administrator. He downloaded the software, sat through the installation process, then typed in the TeamViewer ID.

A connection could not be established.

"Of course," Michael muttered.

A quick call to system administrators confirmed Michael's fears: the company firewall blocked all TeamViewer connections. Gaining access meant submitting an exemption request and letting the bureaucracy take its time to process it. The process usually required several days, and the rollout couldn't wait that long.

Facing a dead end, he tried to explain the situation to the DBA.

I can't get on TeamViewer. Can you please just tell me the username and password for the SQL Server? It would really save us a lot of time!!

He sat back in his chair and waited, eyes glued to the IM window. His hopes weren't high, but maybe he'd at least get an excuse to present to his boss.

Instead, the reply came:

Username: dba, password: rosebud. Let me know if I can be of any further help.

It was all Michael could do not to slam his head against his desk. After careful consideration, he decided to pass on the DBA's kind offer.


Planet Linux AustraliaMichael Still: Adolf Hitler: My Part in His Downfall




ISBN: 9780241958094
LibraryThing
This is another book I read as a teenager and decided to re-read. Frankly, it's great. Confused teenager signs up for the British Army (or is conscripted, it's not totally clear) and ends up as an artillery gunner. Has hilarious adventures while managing to still be a scrawny nerd. I loved it. A light-hearted look at a difficult topic.

Tags for this post: book spike_milligan combat ww2 biography
Related posts: Cryptonomicon; The Man in the Rubber Mask; Skimpy; The Crossroad; Don't Tell Mum I Work On The Rigs; Some Girls: My Life in a Harem

Planet DebianOrestis Ioannou: Debian - your patches and machine readable copyright files are available on Debsources

TL;DR All Debian license and patches are belong to us. Discover them here and here.

In case you hadn't already stumbled upon sources.debian.net in the past, Debsources is a simple web application that allows publishing an unpacked Debian source mirror on the Web. On the live instance you can browse the contents of Debian source packages with syntax highlighting, search files matching a SHA-256 hash or a ctag, query its API, highlight lines, and view accurate statistics and graphs. It was initially developed at IRILL by Stefano Zacchiroli and Matthieu Caneill.

During GSOC 2015 I helped introduce two new features.

License Tracker

Since Debsources has all the debian/copyright files, and many of them have adopted the DEP-5 format (machine-readable copyright files), it was interesting to exploit them for end users. You may find the following features interesting:

  • an API that allows users to find the license of file "foo" or the licenses for a bunch of packages, using filenames or SHA-256 hashes

  • a better looking interface for debian/copyright files

Have a look at the documentation to discover more!
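As a rough illustration, here is a minimal Python sketch of the checksum-based lookup. The endpoint URL and parameter names follow the API documentation mentioned above and should be double-checked there, and the requests library is assumed to be available; this is a sketch, not part of Debsources itself:

import hashlib

import requests  # assumed to be installed; any HTTP client would do

# Assumed endpoint shape for the Debsources copyright API; verify against the docs.
API = "https://sources.debian.net/copyright/api/sha256/"

def license_of(path, package=None, suite=None):
    """Ask Debsources which license covers a local file, looked up by SHA-256."""
    with open(path, "rb") as f:
        checksum = hashlib.sha256(f.read()).hexdigest()
    params = {"checksum": checksum}
    if package:
        params["packagename"] = package  # narrow the answer to one source package
    if suite:
        params["suite"] = suite          # e.g. "testing"
    reply = requests.get(API, params=params, timeout=30)
    reply.raise_for_status()             # the checksum may simply not be in the DB
    return reply.json()                  # license information per matching package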

Patch tracker

The old patch tracker unfortunately died a while ago. Since Debsources stores all the patches, it was probably natural for it to be able to exploit them and present them over the web. You can navigate through packages by prefix or by searching them here. Among the use cases:

  • a summary which contains all the patches of a package together with their diffs and summaries/subjects
  • links to view and download (quilt-3.0) patches.

Read more about the API!

Coming ...

  • In the future this information will be added to the DB. This will allow:

    • the license tracker to provide interesting statistics and graphs about licensing trends (What do Pythonistas usually choose as a license, how many GPL-3 files are in Jessie etc). Those are going to be quite accurate since they will take into account each file in a given package and not just the "general" license of the package.

    • the patch tracker to produce a list of packages that contain patches - this will enable providing links from PTS to the patch tracker.

  • Not far in the horizon there is also an initial work for exporting debian/copyright files into SPDX documents. You can have a look at a beta / testing version on debsources-dev. (Example)

I hope you find these new features useful. Don't hesitate to report any bugs or suggestions you come across.

Planet Linux AustraliaMichael Still: Halo: The Flood




ISBN: 076532833X
LibraryThing
The reviews online for this book aren't great, and frankly they're right. The plot is predictable, and there isn't much character development. Just lots and lots of blow-by-blow combat. It gets wearing after a while, and I found this book a bit of a slog. Not recommended.

Tags for this post: book william_c_dietz combat halo engineered_human cranial_computer personal_ai aliens
Related posts: Halo: The Fall of Reach; The Last Colony ; The End of All Things; The Human Division; Old Man's War ; The Ghost Brigades

Planet DebianRuss Allbery: Converted personal web sites to TLS

I've been in favor of using TLS and encryption for as much as possible for a while, but I never wanted to pay money to the certificate cartel. I'd been using certificates from CAcert, but they're not recognized by most browsers, so it felt rude to redirect everything to TLS with one of their certificates.

Finally, the EFF and others put together Let's Encrypt with free, browser-recognized certificates and even a really solid automatic renewal system. That's perfect, and also eliminated my last excuse to go do the work, so now all of my personal web sites use TLS and HTTPS by default and redirect to the encrypted version of the web site. And better yet, all the certificates should just renew themselves automatically, meaning one less thing I have to keep track of and deal with periodically.

Many thanks to Wouter Verhelst for his short summary of how to get the Let's Encrypt client to work properly from the command line without doing all the other stuff it wants to do in order to make things easier for less sophisticated users. Also useful was the SSL Labs server test to make sure I got the modern TLS configuration right. (All my sites should now be an A. I decided not to cut off support for Internet Explorer older than version 11 yet.)

For my own convenience, I imported into my personal Debian repository copies of the Debian packages needed to install the Let's Encrypt package on Debian jessie that weren't already in Debian backports; they're also there for anyone else.

Oh, that reminds me: this also affects the archives.eyrie.org APT repository (the one linked above), so if any of you were using that, you'll now need to install apt-transport-https and might want to change the URL to use HTTPS.

Planet Linux AustraliaStewart Smith: My linux.conf.au 2016 talk “Adventures in OpenPower Firmware” is up!

Thanks to the absolutely amazing efforts of the LCA video team, they’ve already (only a few days after I gave it) got the video from my linux.conf.au 2016 talk up!

Abstract

In mid 2014, IBM released the first POWER8 based systems with the new Free and Open Source OPAL firmware. Since then, several members of the OpenPower foundation have produced (or are currently producing) machines based on the POWER8 processor with the OPAL firmware.

This talk will cover the POWER8 chip with an open source firmware stack and how it all fits together.

We will walk through all of the firmware components and what they do, including the boot sequence from power being applied up to booting an operating system.

We’ll delve into:
– the time before you have RAM
– the time before you have thermal management
– the time before you have PCI
– runtime processor diagnostics and repair
– the bootloader (and extending it)
– building and flashing your own firmware
– using a simulator instead
– the firmware interface that Linux talks to
– device tree and OPAL calls
– fun in firmware QA and testing

View

Youtube: https://www.youtube.com/watch?v=a4XGvssR-ag

Download (webm): http://mirror.linux.org.au/linux.conf.au/2016/03_Wednesday/Costa_Hall/Adventures_in_OpenPower_Firmware.webm

Kelvin ThomsonIntergenerational Equity - Change in the Eighties

John Edwards (Australian Financial Review March 2015) said "Elected on a platform of opposition to the Campbell recommendations and of budget expansionism, the Hawke Government abruptly moved in the reverse direction".

And there is no shortage of people working for big corporations or their political or media cheer squads who are happy to regard the 1980s and 1990s as a halcyon era of political and economic reform which served Australia well. The argument goes that floating the dollar, pulling down tariff barriers, deregulating the financial markets, and implementing National Competition Policy, laid the foundation for economic growth and rising living standards in the years that followed. It is said that Australia's income per head rose during this period as a result of these changes.

But less simplistic and more detailed analysis suggests that the deregulation and destruction of industry support, and the ripping up of the Australian Settlement which occurred in these years and subsequently, has not been the claimed road to paradise. Much of Australia's increased income has been a consequence of exports to China. China's increased prosperity has led to demand for Australian commodities, particularly minerals. The mining boom could not go on forever, and it has not. It has come at the cost of narrowing our economy, when we need a broader, more resilient, one.

Secondly, Bob Birrell and Ernest Healy have pointed out that the achievements of the 1990s were not just attributable to the protection offered by the low Australian dollar, and therefore vulnerable to the currency rise that came with the mining boom; much of the elaborately transformed manufacture (ETM) exports of the 1990s can be attributed to foundations which had been established during the protectionist period, which the market liberal policies of the 1980s and 1990s dismantled.

Peter Sheehan and colleagues showed that ETM exports from the mid-1980s to the early 1990s were predominantly those which had benefited from "industry specific policies directed at increased outward orientation and export levels". (P. J. Sheehan, Nick Pappas and Enjiang Cheng, 1994, The Rebirth of Australian Industry, Centre for Strategic Economic Studies, Victoria University, p. 30). The industries included telecommunication equipment, cars, computers and pharmaceuticals.

Motor vehicles were a standout, with an average annual rate of export growth of 16 per cent over the decade to 2000-01. Pharmaceutical exports grew by 21.4 per cent per annum during the same period, to $2.4 billion. (Jonathon Coppel and Ben McLean, 2002, 'Trends in Australia's Exports', Reserve Bank of Australia Bulletin, April 2002, p.3).

New enterprises were not significant contributors. Bob Birrell and Ernest Healy present the heretical hypothesis that the tariff protection and industry policy support of the pre-reform era laid the foundation for Australia's ETM export successes in the 1990s, and that once this support was removed in the 1990s and 2000s that the success was short-lived.

Dennis Glover, a lecturer at the Graduate School of Humanities and Social Sciences at the University of Melbourne, draws a parallel between what happened to English workers in the first three decades of the 1800s and what happened in Australia from the mid-eighties to 2015. ("The unmaking of the Australian working class - and their right to resist", The Conversation, 3 August 2015).

Dr. Glover says that at the end of the eighteenth century the English working class of hand loom weavers, agricultural labourers, iron workers, miners and so on lived a largely rural existence, employed at home or in small workshops, with strong connections to village or parish life. But by the 1830s many had been agglomerated into large factories. Towns like Manchester, Liverpool and Leeds had been transformed into the "dark satanic mills" of Blake's poem. Crammed into dangerous slums, many died young and poor. The old world had been physically transformed: bricked over, blackened, cheapened, uglified.

Dr. Glover says something just as dramatic happened in Australia during the past three decades. He says the transformation from the industrial to the post-industrial era has been so total as to constitute the sociological equivalent of an extinction event. (Ibid). "The queues of workers' cars lining up each morning to get through the factory gate - gone. The publicly owned banks and utilities - gone, or about to go...... Secure, full-time employment, with its guarantee of holidays, sick pay and promotion - in many industries long gone. The working class dream of home ownership and upward mobility via cheap land, equal educational opportunities and cheap land - all are on the way out"(Ibid).

Dennis Glover describes this as the un-making of the Australian working class. "Just as 18th century England's green and pleasant fields were paved over with brick, its vocations replaced by the steam-powered machine, its pastoral life rent asunder by the regimentation of the Industrial Age, in just 30 years the world of the Australian working class, with its factories and unions and quality public services and the communities they supported, has been made all but extinct, wiped out, like the dinosaurs, by the fiery asteroid of creative destruction". (Ibid).

He describes this as the revolution the little people lost, and makes the astute observation that the little people, the losers, refuse to go away. They vote against more "economic reform" - they won't support a higher GST and they won't support more privatisations. One might add that they voted out the party of Workchoices and they don't support deregulated university fees or Medicare co-payments either.

Dr Glover asks why they don't thank Paul Keating for liberating them from their dull, monotonous, supposedly unskilled and unimportant jobs making cars. He answers by saying that Australia's working class won't easily give up without a fight, won't voluntarily accept poverty, and won't surrender its culture and traditions without a struggle. "Australia's wilful working class deserves to be rescued from the condescension of the economic reformers. Just like the members of the English working class who went through the Industrial Revolution, the people who have experienced the destruction of their industries and communities in places like Dandenong and Doveton in Melbourne's South-East, Norlane in Geelong, Broadmeadows in Melbourne's north, and Elizabeth outside Adelaide, where the car factories and canneries are still being closed and unemployment is still well above 20 per cent after 25 years of economic growth, have something important to say to us" (Ibid).

He concludes with the observation that we should try to make economic change work for everyone.

Right wing commentators and economists regularly say the world is changing, and changing rapidly, and nations must change in order to survive. This is a classic case of seeking to profit from their own wrongdoing. Much of the change that is happening is being driven by policies advocated and implemented by right wing commentators. Much of the change is not inevitable. The fact that it is making it tougher to survive should be cause to question our policy directions, not head even faster towards the cliff.

One of the defining features of modern political life is a pervasive loss of faith in government's ability to solve problems, or indeed do anything much at all. Sally Young, Associate Professor of Political Science at the University of Melbourne, says we are living through a lost era of policy making. She says that politicians of today are suffering a crisis of confidence about whether their policy making can make a big difference. (The Age, 1 April 2015, p.20 ).

She notes that the Prime Ministers of the seventies built things. Gough Whitlam left behind Medibank, women's health centres, the Family Court, single-parent pensions, public transport projects, sewerage systems, the Arts Council, the National Gallery, Triple J and Legal Aid. He gave us free tertiary education, the Trade Practices Act, no fault divorce, needs based schools funding, the abolition of conscription, the Heritage Commission, aboriginal land rights, voting at age 18 and fair electoral boundaries.

But the situation since the 1970s has deteriorated dramatically. Erik S. Reinert from The Other Canon Foundation set this deterioration out in detail in 2012 in his paper "Neo-classical economics: A trail of economic destruction since the 1970s". He says that three decades of applying neoclassical economics and neo-liberal policies have destroyed, rather than created, real wages and wealth.

Reinert starts by observing that after the Second World War, two institutions were established which provided the conditions for an unprecedented increase in human welfare. The 1947 Marshall Plan paved the way for the re-industrialisation of Europe and other nations all the way to Japan. The 1948 Havana Charter established rules of international trade that made this industrialisation possible. It allowed for "infant industry protection" where unemployment was present in a country. There was a tripartite political setting, with a balance of power between business, labor and government.

Countries with a diversified economy prospered. For example South Korea diversified away from agriculture and raw materials and into manufacturing industry. It did not continue to rely on its 'comparative advantage' in agriculture, instead using heavy-handed industry policy to break into manufacturing. On the other hand Somalia was richer than Korea until the mid 60s, but in Reinert's words "continued to specialise according to its comparative advantage in being poor".

Reinert says that the theory of "comparative advantage" advanced through David Ricardo's free trade theories was in practice only applied in the colonies. He says that the US Washington Consensus free trade theories were for a long time mainly intended for export, not for use at home. He says that "Unfortunately, in the end the West also started believing in the propaganda version of its own economic theory".

The world development record, expressed as a growth rate of GDP per capita, is described by Reinert as excellent from 1950 to 1973 but dismal from 1973 to 2001. He says that during this period Latin America experienced a string of 'lost decades'. Real wages in Peru were more than halved when the free trade shock and subsequent deindustrialisation hit Peru starting in the mid-1970s. Africa's beginning industrialisation was reversed, and the communist economies became poorer than they had been under a notoriously inefficient communist planned economy. Reinert concludes from this period that a nation with an inefficient manufacturing sector is much better off than a nation without any manufacturing sector at all.

Reinert says that even the United States finds that too much free trade has undermined its manufacturing base. The West has embarked on an attack on wage levels and purchasing power in the name of austerity. The results are likely to be just as harmful to real wages and purchasing power as they have been wherever they have been applied. A wave of neo-classical wealth destruction hit Latin America in the mid-seventies. It also hit the little industry Africa had managed to build. Another wave of destruction hit the centrally planned economies after the fall of the Berlin Wall. The fall of the Wall heralded a period of Western triumphalism, and in particular a failure of mainstream economics to distinguish between the financial sector and real wealth creation. This has now caught up with country after country in Europe and beyond.

Three economies which have done well during these lost decades have been Brazil, India and China. Reinert notes that they escaped the free market fundamentalism and free trade shock that accompanied the fall of the Berlin Wall. In these countries neoliberalism was met with resistance from a critical mass of economists.

Reinert says that what is needed is to recapture the middle ground. He supports the principles of the Havana Charter, unanimously approved by the members of the United Nations in 1948, as a blueprint for a world economic order that creates, rather than destroys, mass welfare. He says that of the three political systems which brought financial capital under control during the 1930s - communism, fascism, and the New Deal - there is little doubt what most people today would choose. But that needs to be kept as a live option, and neoclassical economics is failing to do this. It fails to distinguish between the real economy and the financial sector, with the risk that the financial sector stops adding value to the real economy and starts to parasitically destroy value. (This paper is online at http://mpra.ub.uni-muenchen.de/47910/).

,

Planet DebianMike Hommey: SSH through jump hosts, revisited

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
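# The first $(...) extracts the final hop's host:port for -W, appending :22
# when no port is given; the second drops that final hop and rewrites the last
# remaining "login%host:port" segment as "host -l login -p port", so any
# earlier hops match this same Host *+* rule again, recursively.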
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.

Update: Add missing port to -W flag when one is not given.

Planet DebianIain R. Learmonth: After FOSDEM 2016

FOSDEM was fun. It was great to see all these open source projects coming together in one place and it was really good to talk to people that were just as enthusiastic about the FOSS activities they do as I am about mine.

Thanks go to Saúl Corretgé who looked after the real-time communications dev room and made sure everything ran smoothly. I was very pleased to find that I had to stand for a couple of talks as the room was full with people eager to learn more about the world of RTC.

I was again pleased on the Sunday when I had such a great audience for my talk in the distributions dev room. Everyone was very welcoming and after the talk I had some corridor discussions with a few people that were really interesting and have given me a few new things to explore in the near future.

A few highlights from FOSDEM:

  • ReactOS: Since I last looked at this project it has really matured and is getting to be rather stable. It may be possible to start seriously considering replacing Windows XP/Vista machines with ReactOS where the applications being run just cannot be used with later versions of Windows.
  • Haiku: I used BeOS a long long time ago on my video/music PC. I can't say that I was using it over a Linux or BSD distribution for any particular reason but it worked well. I saw a talk that discussed how Haiku was keeping up-to-date with drivers and also there was a talk, that I didn't see, that talked about the new Haiku package management system. I think I may check out Haiku again in the near future, even if only for the sake of nostalgia.
  • Kolab: Continuing with the theme of things that have matured since I last looked at them, I visited the Kolab stand at FOSDEM and I was impressed with how far it has come. In fact, I was so impressed that I'm looking at using it for my primary email and calendaring in the near future.
  • picoTCP: When I did my Honours project at University, I was playing with Contiki. This looks a lot easier to get started with, even if it's perhaps missing parts of the stack that Contiki implements well. If I ever find time for doing some IoT hacking, this will be on the list of things to try out first.

This is just some of the highlights, and I know I'm missing out a lot here. One of the main things that FOSDEM has done for me is open my eyes as to how wide and diverse our community is and it has served as a reminder that there is tons of cool stuff out there if you take a moment to look around.

Also, thanks to my trip to FOSDEM, I now have four new t-shirts to add into the rotation: FOSDEM 2016, Debian, XMPP and twiki.org.

Planet Linux AustraliaCraige McWhirter: Dipping My Toe Into Federated Social Media

I've started dipping my toe into federated social media. During LCA2016 I stood up an instance of GNUSocial. You can find it here social.mcwhirter.io and if you're already in the federated social media universe, you can reach me as craige@social.mcwhirter.io.

GNUSocial

Planet DebianJoey Hess: letsencrypt support in propellor

I've integrated letsencrypt into propellor today.

I'm using the reference letsencrypt client. While I've seen complaints that it has a lot of dependencies and is too complicated, it seemed to only need to pull in a few packages, and use only a few megabytes of disk space, and it has fewer options than ls does. So seems fine. (Although it would be nice to have some alternatives packaged in Debian.)

I ended up implementing this:

letsEncrypt :: AgreeTOS -> Domain -> WebRoot -> Property NoInfo

This property just makes the certificate available, it does not configure the web server to use it. This avoids relying on the letsencrypt client's apache config munging, which is probably useful for many people, but not those of us using configuration management systems. And so avoids most of the complicated magic that the letsencrypt client has a reputation for.

Instead, any property that wants to use the certificate can just use letsencrypt to get it and set up the server when it makes a change to the certificate:

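-- Make the certificate for example.com available, using /var/www as the
-- webroot where the client's challenges are served; onChange re-runs the web
-- server setup only when the certificate actually changes.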
letsEncrypt (LetsEncrypt.AgreeTOS (Just "me@my.domain")) "example.com" "/var/www"
    `onChange` setupthewebserver

(Took me a while to notice I could use onChange like that, and so divorce the cert generation/renewal from the server setup. onChange is awesome! This blog post has been updated accordingly.)

In practice, the http site has to be brought up first, and then letsencrypt run, and then the cert installed and the https site brought up using it. That dance is automated by this property:

Apache.httpsVirtualHost "example.com" "/var/www"
    (LetsEncrypt.AgreeTOS (Just "me@my.domain"))

That's about as simple a configuration as I can imagine for such a website!


The two parts of letsencrypt that are complicated are not the fault of the client really. Those are renewal and rate limiting.

I'm currently rate limited for the next week because I asked letsencrypt for several certificates for a domain, as I was learning how to use it and integrating it into propellor. So I've not quite managed to fully test everything. That's annoying. I also worry that rate limiting could hit at an inopportune time once I'm relying on letsencrypt. It's especially problematic that it only allows 5 certs for subdomains of a given domain per week. What if I use a lot of subdomains?

Renewal is complicated mostly because there's no good way to test it. You set up your cron job, or whatever, and wait three months, and hopefully it worked. Just as likely, you got something wrong, and your website breaks. Maybe letsencrypt could offer certificates that will only last an hour, or a day, for use when testing renewal.

Also, what if something goes wrong with renewal? Perhaps letsencrypt.org is not available when your certificate needs to be renewed.

What I've done in propellor to handle renewal is, it runs letsencrypt every time, with the --keep-until-expiring option. If this fails, propellor will report a failure. As long as propellor is run periodically by a cron job, this should result in multiple failure reports being sent (for 30 days I think) before a cert expires without getting renewed. But, I have not been able to test this.

Planet DebianIustin Pop: mt-st project new homepage

A short public notice: the mt-st project has a new homepage at https://github.com/iustin/mt-st. Feel free to forward your distribution-specific patches for upstream integration!

Context: a while back I bought a tape unit to help me with backups. Yay, tape! All good, except that I later found out that the Debian package was orphaned, so I took over the maintenance.

All good once more, but there were a number of patches in the Debian package that were not Debian-specific, but rather valid for upstream. And there was no actual upstream project homepage, as this was quite an old project, with no (visible) recent activity; the canonical place for the project source code was an ftp site (ibiblio.org). I spoke with Kai Mäkisara, the original author, and he agreed to let me take over the maintenance of the project (and that's what I intend to do: maintenance mostly, merging of patches, etc. but not significant work). So now there's a github project for it.

There was no VCS history for the project, so I did my best to partially recreate the history: I took the debian releases from snapshots.debian.org and used the .orig.tar.gz as bulk import; the versions 0.7, 0.8, 0.9b and 1.1 have separate commits in the tree.

I also took the Debian and Fedora patches and applied them, and with a few other cleanups, I've just published the 1.2 release. I'll update the Debian packaging soon as well.

So, if you somehow read this and are the maintainer of mt-st in another distribution, feel free to send patches my way for integration; I know this might be late, as some distributions have dropped it (e.g. Arch Linux).

Planet DebianBen Armstrong: Bluff Trail icy dawn: Winter 2016

Before the rest of the family was up, I took a brief excursion to explore the first kilometre of the Bluff Trail and check out conditions. I turned at the ridge, satisfied I had seen enough to give an idea of what it’s like out there, and then walked back the four kilometres home on the BLT Trail.

I saw three joggers and their three dogs just before I exited the Bluff Trail on the way back, and later, two young men on the BLT with day packs approaching. The parking lot had gained two more cars for a total of three as I headed home. For anyone exercising appropriate caution and judgement, the first loop is beautiful and rewarding, and I'm not alone in feeling the draw of its delights this crisp morning.

Click the first photo below to start the slideshow.

Click to start the slideshow

Planet DebianSteve Kemp: Redesigning my clustered website

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple, and looks like this:

In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking, or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy it runs upon each of the webservers - one is the master, which is handled via ucarp. Logically though traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)

When I set up the site it all ran on one host; it was simpler, and it was less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences though, so when it came time to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server-count and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password and return 200 if valid. If bogus details return 403. If the user doesn't exist return 404.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and a HAProxy instance to route to multiple API-servers. Each of which will use local caching to cache articles, etc.
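To make the shape of those handlers concrete, here is a minimal sketch of the /users endpoints. It is purely illustrative: the site is not implemented this way, Flask is just a convenient stand-in, and the user store and helper functions are hypothetical.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory stand-ins for the real MySQL-backed lookups.
USERS = {"Steve": {"username": "Steve", "realname": "Steve Kemp"}}
PASSWORDS = {"Steve": "s3cret"}

def lookup_user(username):
    """Return the stored user record, or None if the user doesn't exist."""
    return USERS.get(username)

def check_password(username, password):
    """Return True only when the supplied credentials are valid."""
    return PASSWORDS.get(username) == password

@app.route("/users/login", methods=["POST"])
def login():
    # POST a username/password: 200 if valid, 403 if bogus, 404 if no such user.
    username = request.form.get("username", "")
    password = request.form.get("password", "")
    if lookup_user(username) is None:
        return "", 404
    if not check_password(username, password):
        return "", 403
    return "", 200

@app.route("/users/<username>", methods=["GET"])
def get_user(username):
    # GET /users/<name>: a JSON hash of user-information, or 404 on an invalid user.
    user = lookup_user(username)
    if user is None:
        return "", 404
    return jsonify(user)

if __name__ == "__main__":
    app.run()  # development server only; in this design it would sit behind HAProxy

Several such processes can sit behind the floating IP and HAProxy, each keeping its own local cache, which is what makes this layer easy to scale horizontally.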

This should turn the middle layer, running on Apache, into simpler things, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP-request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway that's what I'm slowly pondering and working on at the moment. I wrote a proof of concept API-server based CMS two years ago, and my recollection of that time is that it was fast to develop, and easy to scale.

Planet Linux AustraliaPeter Lieverdink: Social! Space! Western Australia!

A few weeks ago I noticed a retweet by ESA, asking for expressions of interest from space enthusiasts to attend and social-media (verb) the inauguration of a new antenna at their New Norcia deep space tracking site in Western Australia.

That site is used to communicate with deep space missions such as Rosetta and Gaia.

After some um-ing and ah-ing, I decided to apply. After all, when I'm on holiday elsewhere I try to visit observatories and other space related things and am always a bit disappointed when a fence keeps me at a distance.

Last week I got an email with the happy news that I was one of the fifteen lucky people selected to attend!

 

So, over the next week you'll probably see a lot of space tweets from me with impressive radio hardware, behind the scenes looks at things, and a lot of excited people.

You can read more about #SocialSpaceWA on the ESA Social Space blog.

 

,

Planet DebianDimitri John Ledkov: Blogging about Let's encrypt over HTTP

So the let's encrypt thing started. And it can do challenges over http (serving text files) and over dns (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts getting swamped with spam, and or companies going bust. So I got my own domain name surgut.co.uk. However, I don't have money or time to run my own services. So I've signed up for the Google Apps account for my domain to do email, blogging, etc.

Then later I got the libnih.la domain to host API docs for the mentioned library. In the world of .io startups, I thought it's an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and an IRC bouncer. My IRC bouncer is ZNC and I used a self-signed certificate there, thus I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose somewhat.

I run my VPS on i386 (to save on memory usage) and on Ubuntu 14.04 LTS managed with Landscape. And my little services are just configured by hand there (not using juju).

My first attempt at getting on the let's encrypt bandwagon was to use the official client, by fetching debs from xenial and installing them on the LTS. But the package/script there is huge, has support for things I don't need, and wants dependencies I don't have on 14.04 LTS.

However I found a minimalist implementation, letsencrypt.sh, written in shell with openssl and curl. It was trivial to get dependencies for and configure: I specified a domains text file, and that was it. Well, I also added symlinks in my NGINX config to serve the challenges directory, and a hook to deploy the certificate to ZNC and restart it. I've added a cronjob to renew the certs too. Thinking about it, it's not complete, as I'm not sure if NGINX will pick up the certificate change and/or if it will need to be reloaded. I shall test that once my cert expires.

Tweaking the config for NGINX was easy. And I was like, let's see how good it is. I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and I got a "C" rating. No forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and stuff like that. I went googling for all types of NGINX configs and eventually found a website with "best known practices", https://cipherli.st/. However, even that only got me to a "B" rating, as it still has Diffie-Hellman things that ssltest caps at a "B" rating. So I disabled those too. I've ended up with this gibberish:

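# Net effect: TLS only (no SSLv3), the server's cipher order wins, and every
# offered suite uses ephemeral ECDH key exchange (no DHE, no plain RSA),
# plus OCSP stapling and an HSTS header marked for preload.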
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish, because IMHO, I shouldn't need to specify any of the above... Anyway I got my A+ rating.

However, security is only as good as the weakest link. I'm still serving things over HTTP; maybe I should disable that. And I'm yet to check how "good" the TLS is on my ZNC, or if I need to further harden my sshd configuration.

This has filled a big gap in my infrastructure. However a few things remain served over HTTP only.

http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would want to be served over HTTPS.

http://libnih.la is hosted by GitHub Inc's service, which I would want to be served over HTTPS.

I do not want to manage those services, experience load / spammers / DDoS attacks etc. But I am happy to sign CSRs with let's encrypt and deploy certs over to those companies. Or allow them to self-obtain certificates from let's encrypt on my behalf. I use gandi.net as my domain name provider, which offers an RPC API to manage domains and their zone files, thus e.g. I can also generate an API token for those companies to respond with a dns-01 challenge from let's encrypt.

One step at a time I guess.

The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.

Planet DebianAndrew Shadura: Community time at Collabora

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I had an opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers do understand the value of the community and contributing back to it, and so does the customer for the project I’m working on. In fact, our customer insists we keep the number of locally applied patches to, for example, Linux kernel, to minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly for upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on the code or doing volunteering at free software events or even speaking at conferences.

Even though on this project I have already been paid for contributing to the free software project which I maintained in my free time previously (ifupdown), paid community time is a great opportunity to contribute to the projects I’m interested in, and if the projects I’m interested in coincide with the projects I’m working with, I effectively can spend even more time on them.

A bit unfortunately for me, I didn't spend enough time last year planning my community days, so I used most of them in the last weeks of the calendar year, and I used them (and some of my upstreaming hours) on something that benefitted both the free software community and Collabora. I'm talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy-to-use interface for Git, or, actually, it makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits stuff right after it changes. It automatically handles conflicts even for binary files, even though I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it's being used by users not familiar with command line interfaces too. Unfortunately, the version we recently had in Debian had a couple of very annoying bugs, making it a great pain to use: it would not notice edits in local files, or would not notice new commits being pushed to the server, and that sometimes led to individual users' edits being lost. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 (and the recently released 1.5), were reported as being much better and also fixing some crashes, but they also used GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these libraries (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I had never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in configure.ac, and nowhere else in the source); a bit more time went into adjusting and updating the patches to the current source code version. Then, of course, waiting for the packages to go through NEW. Fixing parallel build issues, waiting for buildds to build all the dependencies for at least one architecture… But then, finally, on the 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage require me to work on free software in my paid time :)

Planet DebianNeil Williams: lava.debian.net

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated.

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8). So if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways:

  • A wider range of tests including use of Debian artifacts as well as mainline Linux builds
  • A wider range of devices by allowing developers to offer devices for testing from their own desks.
  • Letting developers share local test results easily with the community without losing the benefits of having the board on your desk.

This instance relies on the latest changes in lava-server and lava-dispatcher. The 2016.2 release has now deprecated the old, complex dispatcher and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.

What can LAVA do for Debian?

ARMMP kernel testing

Unreleased builds, experimental initramfs testing – this is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.

U-Boot ARM testing

This is what fully automated LAVA labs have not been able to deliver in the past, at least without a usable SD Mux.

What’s next

LOTS. This post actually got published early (distracted by the rugby) – I’ll update things more in a later post. Contact me if you want to get involved, I’ll provide more information on how to use the instance and how to contribute to the testing in due course.

Planet DebianHideki Yamane: playing to update package (failed)


I thought I'd build the gnome-todo package from the 3.19 branch.

Once I tried to do that, it turned out to need gtk+3.0 (>= 3.19.5), which Debian doesn't have yet (of course, it's a development branch). Then I tried to build gtk+3.0, and it needs wayland 1.9.90, which is not in Debian yet either. So I updated the local package to wayland 1.9.91, found a tiny bug and sent a patch, and built it (the package diff was sent to the maintainer - and merged); an easy task.

Building again, gtk+3.0 needs "wayland-protocols", which has not been packaged in Debian yet. Okay... (20 min work...) done! I made a wayland-protocols package (not ITPed yet, since it's unclear who should maintain it - under the same umbrella as wayland?); not difficult.

I built the newest gtk+3.0 source, 3.19.8, in a cowbuilder chroot with those packages (cowbuilder --login --save-after-exec --inputfile foo.deb --inputfile bar.deb), ...and it failed in the testsuite ;) I don't have enough knowledge to investigate it.

Going back to older gtk+3.0 sources, 3.19.1 built fine (diff), but 3.19.2 failed to build, and 3.19.3 to 3.19.8 failed in the testsuite.


Time is up, "You lose!"... that's one of typical days.

Planet DebianJulien Danjou: FOSDEM 2016, recap

Last weekend, I was in Brussels, Belgium for FOSDEM, one of the greatest open source developer conferences. I was not sure I would go this year (I already skipped it in 2015), but it turned out I was asked to do a talk in the shared Lua & GNU Guile devroom.

As I am a long-time Lua user and developer, and have been following GNU Guile for several years, the organizer asked me to run a talk that would be a link between the two languages.

I've entitled my talk "How awesome ended up with Lua and not Guile" and gave it to a room full of interested users of the awesome window manager 🙂.

We continued with a panel discussion entitled "The future of small languages: Experience of Lua and Guile", composed of Andy Wingo, Christopher Webber, Ludovic Courtès, Etiene Dalcol, Hisham Muhammad and myself. It was a pretty interesting discussion, where both language communities shared their views on the state of their languages.

It was a bit awkward to talk about Lua & Guile when most of my knowledge was years old, but it turns out many things didn't change. I hope I was able to provide interesting insight to both communities. Finally, it was a pretty interesting FOSDEM for me, and it had been a long time since I last gave a talk here, so I really enjoyed it. See you next year!

Planet Linux AustraliaChris Neugebauer: linux.conf.au 2017 is coming to Hobart

Yesterday at linux.conf.au 2016 in Geelong, I had the privilege of being able to introduce our plans for linux.conf.au 2017, which my team and I are bringing to Hobart next year. We’ll be sharing more with you over the coming weeks and months, but until then, here’s some stuff you might like to know:

The Dates

16–20 January 2017.

The Venue

We’re hosting at the Wrest Point Convention Centre. I was involved in the organisation of PyCon Australia 2012 and 2013, which used Wrest Point, and I’m very confident that they deeply understand the needs of our community. Working out of a Convention Centre will reduce the amount of work we need to do as a team to organise the main part of the conference, and will let us focus on delivering an even better social programme for you.

We’ll have preferred rates at the adjoining hotels, which we’ll make available to attendees closer to the conference. We will also have the University of Tasmania apartments available, if you’d rather stay somewhere more affordable. The apartments are modern, have great common spaces, and were super-popular back when lca2009 was in Hobart.

The Theme

Our theme for linux.conf.au 2017 is The Future of Open Source. LCA has a long history as a place where people come to learn from people who actually build the world of Free and Open Source Software. We want to encourage presenters to share with us where they think their projects are heading over the coming years. These thoughts could be deeply technical: presenting emerging Open Source technology, or features of existing projects that are about to become part of every sysadmin’s toolbox.

Thinking about the future, though, also means thinking about where our community is going. Open Source has become massively successful in much of the world, but is this success making us become complacent in other areas? Are we working to meet the needs of end-users? How can we make sure we don’t completely miss the boat on Mobile platforms? LCA gets the best minds in Free Software to gather every year. Next year, we’ll be using that opportunity to help see where our world is heading.

 

So, that’s where our team has got so far. Hopefully you’re as excited to attend our conference as we are to put it on. We’ll be telling you more about it real soon now. In the meantime, why not visit lca2017.org and find out more about the city, or sign up to the linux.conf.au announcements list, so that you can find out more about the conference as we announce it!


Geek FeminismLinkspam On. Linkspam Off. (5 February 2016)

  • Fathers: maybe stop mentioning your daughters to earn credibility on women’s issues | Medium: “We have to take our time and earn trust. We have to show up to those women’s meetings — and listen. We have to volunteer to do the busy work it takes to make diversity initiatives run. We’ve got to apologize when we mess up. We have to make our workplaces more hospitable to all kinds of people. We have to hire marginalized people. And we’ve got to read, read, read all we can to make sure we know what we are talking about and never stop because we probably still don’t. Our daughters are awesome. But at work, lets make things better for everyone.”
  • Dear White Women in Tech: Here’s a Thought — Follow Your Own Advice by Riley H | Model View Culture: “Instead of being useful to us, all I see is that white women are quite happy to talk at all-white panels and call it diversity in tech and gaming. You’re happy to use the means afforded to you for being white to play a good game and make a good face while doing nothing meaningful for women of color. You’re screaming and shouting all day about your own shallow versions of feminism while the women of color you claim to represent are trying to simultaneously hold their heads up to stay above water, and down to avoid choking on smoke.”
  • How startups can create a culture of inclusiveness | The Globe and Mail: “As a young female in a leadership position at a successful tech startup, who also happens to be visibly religious, I know a thing or two about representing minorities in the workplace. After years of hearing and reading about the lack of diversity in startups and personally encountering what seem like isolated incidents, I’ve noticed a very real pattern of exclusivity. Here are a few things I’ve learned during my career at several Toronto startups on building a workplace culture that is collaborative, inclusive, and one that can help accelerate the growth of your company.”
  • This 2014 Sci-Fi Novel Eerily Anticipated the Zika Virus | Slate: “There is a better science fiction analog to the Zika crisis: The Book of the Unnamed Midwife, by Meg Elison, which was published in 2014. In Children of Men, abortion and birth control are rendered moot; in The Book of the Unnamed Midwife, birth control and a woman’s right to bodily autonomy are central to the plot.”
  • Let’s Talk About The Other Atheist Movement | Godlessness in Theory: “Over the last twenty-four hours, with media fixated on Dawkins’ absence from one upcoming convention, atheists have been gathered at another in Houston. The Secular Social Justice conference, sponsored jointly by half a dozen orgs, highlights ‘the lived experiences, cultural context, shared struggle and social history of secular humanist people of color’. Sessions address the humanist history of hip hop, the new atheism’s imperialist mission and the lack of secular scaffolds for communities of colour in the working class US, whether for black single mothers or recently released incarcerees. Perhaps we could talk about this?”
  • Computer Science, Meet Humanities: in New Majors, Opposites Attract | Chronicle of Higher Education: “She chose Stanford University, where she became one of the first students in a new major there called CS+Music, part of a pilot program informally known as CS+X. Its goal is to put students in a middle ground, between computer science and any of 14 disciplines in the humanities, including history, art, and classics. And it reduces the number of required hours that students would normally take in a double major in those subjects.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

,

Planet DebianDaniel Pocock: Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

CryptogramNSA Reorganizing

The NSA is undergoing a major reorganization, combining its attack and defense sides into a single organization:

In place of the Signals Intelligence and Information Assurance directorates -- the organizations that historically have spied on foreign targets and defended classified networks against spying, respectively -- the NSA is creating a Directorate of Operations that combines the operational elements of each.

It's going to be difficult, since their missions and culture are so different.

The Information Assurance Directorate (IAD) seeks to build relationships with private-sector companies and help find vulnerabilities in software -- most of which officials say wind up being disclosed. It issues software guidance and tests the security of systems to help strengthen their defenses.

But the other side of the NSA house, which looks for vulnerabilities that can be exploited to hack a foreign network, is much more secretive.

"You have this kind of clash between the closed environment of the sigint mission and the need of the information-assurance team to be out there in the public and be seen as part of the solution," said a second former official. "I think that's going to be a hard trick to pull off."

I think this will make it even harder to trust the NSA. In my book Data and Goliath, I recommended separating the attack and defense missions of the NSA even further, breaking up the agency. (I also wrote about that idea here.)

And missing in their reorg is how US Cyber Command's offensive and defensive capabilities relate to the NSA's. That seems pretty important, too.

Planet DebianBernd Zeimetz: bzed-letsencrypt puppet module

With the announcement of the Let’s Encrypt dns-01 challenge support we finally had a way to retrieve certificates for those hosts where http challenges won’t work. Also it allows to centralize the signing procedure to avoid the installation and maintenance of letsencrypt clients on all hosts.

For an implementation I had the following requirements in my mind:

  • Handling of key/csr generation and certificate signing by puppet.
  • Private keys don’t leave the host they were generated on. If they need to (for HA setups and similar cases), handling needs to be done outside of the letsencrypt puppet module.
  • Deployment and cleanup of tokens in our DNS infrastructure should be easy to implement and maintain.

After reading through the source code of various letsencrypt client implementations I decided to use letsencrypt.sh. Mainly because its dependencies are available pretty much everywhere and adding the necessary hook is as simple as writing some lines of code in your favourite (scripting) language. My second favourite was lego, but I wanted to avoid shipping binaries with puppet, so golang was not an option.
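
To give a rough idea of what such a hook involves, here is a minimal sketch of a dns-01 hook for letsencrypt.sh. It assumes the usual hook calling convention (handler name, domain, token filename and token value passed as arguments) and a hypothetical nsupdate-based DNS setup; ns.example.org and the key file are placeholders, and a real hook would also have to wait for the TXT record to propagate before returning:

#!/bin/bash
# called by letsencrypt.sh as: hook.sh <handler> <domain> <token_filename> <token_value>
HANDLER="$1"; DOMAIN="$2"; TOKEN_VALUE="$4"

case "$HANDLER" in
    deploy_challenge)
        # publish the challenge token as a TXT record (hypothetical DNS server and key)
        printf 'server ns.example.org\nupdate add _acme-challenge.%s. 300 TXT "%s"\nsend\n' \
            "$DOMAIN" "$TOKEN_VALUE" | nsupdate -k /etc/bind/acme.key
        ;;
    clean_challenge)
        # remove the record again once the challenge has been validated
        printf 'server ns.example.org\nupdate delete _acme-challenge.%s. TXT\nsend\n' \
            "$DOMAIN" | nsupdate -k /etc/bind/acme.key
        ;;
esac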

It took me some days to find enough spare time to write the necessary puppet code, but finally I managed to release a working module today. It is still not perfect, but the basic tasks are implemented and the whole key/csr/signing chain works pretty well.

And if your hook can handle it, http-01 challenges are possible, too!

Please give the module a try and send patches if you would like to help to improve it!

Planet DebianJose M. Calhariz: Preview of amanda 3.3.8-1

While I sort out a sponsor (my sponsor is very busy), here is a preview of the new packages, so anyone can install and test them on jessie.

The source of the packages is in collab-maint. The deb files for jessie are here:

amanda-common_3.3.8-1_cal0_i386.deb

amanda-server_3.3.8-1_cal0_i386.deb

amanda-client_3.3.8-1_cal0_i386.deb

Here comes the changelog:

amanda (1:3.3.8-1~cal0) unstable; urgency=low

  * New Upstream version
    * Changes for 3.3.8
      * s3 devices
          New NEARLINE S3-STORAGE-CLASS for Google storage.
          New AWS4 STORAGE-API
      * amcryptsimple
          Works with newer gpg2.
      * amgtar
          Default SPARSE value is NO if tar < 1.28.
          Because a bug in tar with some filesystem.
      * amstar
          support include in backup mode.
      * ampgsql
          Add FULL-WAL property.
      * Many bugs fix.
    * Changes for 3.3.7p1
      * Fix build in 3.3.7
    * Changes for 3.3.7
      * amvault
          new --no-interactivity argument.
          new --src-labelstr argument.
      * amdump
          compute crc32 of the streams and write them to the debug files.
      * chg-robot
          Add a BROKEN-DRIVE-LOADED-SLOT property.
      * Many bugs fix.
  * Refreshed patches.
  * Dropped patches that were applied by the upstream: fix-misc-typos,
    automake-add-missing, fix-amcheck-M.patch,
    fix-device-src_rait-device.c, fix-amreport-perl_Amanda_Report_human.pm
  * Change the email of the maintainer.
  * "wrap-and-sort -at" all control files.
  * swig is a new build depend.
  * Bump standard version to 3.9.6, no changes needed.
  * Replace deprecated dependency perl5 by perl, (Closes: #808209), thank
    you Gregor Herrmann for the NMU.

 -- Jose M Calhariz <jose@calhariz.com>  Tue, 02 Feb 2016 19:56:12 +0000

Planet DebianBen Hutchings: Debian LTS work, January 2016

In January I carried over 10 hours from December and was assigned another 15 hours of work by Freexian's Debian LTS initiative. I worked a total of 15 hours. I had a few days on 'front desk' at the start of the month, as my week in that role spanned the new year.

I fixed a regression in the kernel that was introduced to all stable suites in December. I uploaded this along with some minor security fixes, and issued DLA 378-1.

I finished backporting and testing fixes to sudo for CVE-2015-5602. I uploaded an update and issued DLA 382-1, which was followed by DSA 3440-1 for wheezy and jessie.

I finished backporting and testing fixes to Claws Mail for CVE-2015-8614 and CVE-2015-8708. I uploaded an update and issued DLA 383-1. This was followed by DSA 3452-1 for wheezy and jessie, although the issues are less serious there.

I also spent a little time on InspIRCd, though this isn't a package that Freexian's customers care about and it seems to have been broken in squeeze for several years due to a latent bug in the build system. I had already backported the security fix by the time I discovered this, so I went ahead with an update fixing that regression as well, and issued DLA 384-1.

Finally, I diagnosed the regression in the update to isc-dhcp in DLA 385-1.

Planet DebianEnrico Zini: debtags-cleanup

debtags.debian.org cleaned up

Since the Debtags consolidation announcement there is some more news:

No more anonymous submissions

  • I have disabled anonymous tagging. Anyone is still able to tag via Debian Single Sign-On. SSO-enabling the site was as simple as this.
  • Tags need no review anymore to be sent to ftp-master. I have removed all the distinction in the code between reviewed and unreviewed tags, and all the code for the tag review interface.
  • The site now has an audit log for each user, that any person logged in via SSO can access via the "history" link in the top right of the tag editor page.

Official recognition as Debian Contributors

  • Tag contributions are sent to contributors.debian.org. There is no historical data for them because all submissions until now have been anonymous, but from now on if you tag packages you are finally recognised as a Debian Contributor!

Mailing lists closed

  • I closed the debtags-devel and debtags-commits mailing lists; the archives are still online.
  • I have updated the workflow for suggesting new tags in the FAQ to "submit a bug to debtags and Cc debian-devel"

We can just use debian-devel instead of debtags-devel.

Autotagging of trivial packages

  • I have introduced the concept of "trivial" packages to currently be any package in the libs, oldlibs and debug sections. They are tagged automatically by the site maintenance and are excluded from the site todo lists and tag editor. We do not need to bother about trivial packages anymore, all 13239 of them.

Miscellaneous other changes

  • I have moved the debtags vocabulary from subversion to git
  • I have renamed the tag used to mark packages not yet reviewed by humans from special::not-yet-tagged to special::unreviewed
  • At the end of every nightly maintenance, some statistics are saved into a database table. I have collected 10 years of historical data by crunching big tarballs of site backups, and fed them to the historical stats table.
  • The workflow for getting tags from the site to ftp-master is now far, far simpler. It is almost simple enough that I should manage to explain it without needing to dig through code to see what it is actually doing.

CryptogramTracking Anonymous Web Users

This research shows how to track e-commerce users better across multiple sessions, even when they do not provide unique identifiers such as user IDs or cookies.

Abstract: Targeting individual consumers has become a hallmark of direct and digital marketing, particularly as it has become easier to identify customers as they interact repeatedly with a company. However, across a wide variety of contexts and tracking technologies, companies find that customers can not be consistently identified which leads to a substantial fraction of anonymous visits in any CRM database. We develop a Bayesian imputation approach that allows us to probabilistically assign anonymous sessions to users, while accounting for a customer's demographic information, frequency of interaction with the firm, and activities the customer engages in. Our approach simultaneously estimates a hierarchical model of customer behavior while probabilistically imputing which customers made the anonymous visits. We present both synthetic and real data studies that demonstrate our approach makes more accurate inference about individual customers' preferences and responsiveness to marketing, relative to common approaches to anonymous visits: nearest-neighbor matching or ignoring the anonymous visits. We show how companies who use the proposed method will be better able to target individual customers, as well as infer how many of the anonymous visits are made by new customers.

Worse Than FailureError'd: An Unusually Childish Debate

"Hilarity ensued when, during a recent political debate, the subtitles used by the Swedish state television came, not from authorized subtitlers of the debate, but rather the neighboring children's channel," Jonas writes.

 

"By providing the option to 'Give up', I sense they are at least being honest with me about the state of their web site," writes John Z.

 

"While I don't doubt that it's portable, it does seem inconvenient to have to gas up a signal generator," wrote Nathaniel M.

 

Alaks M. wrote, "Thanks Apple for your amazing error report. I didn't even have anything open."

 

"I mean, sure, music tastes vary, but come on!" writes Bruce R.

 

Chris M. wrote, "Well ReSharper, the test is green, but I'm not so sure if I should check it in."

 

Chris D. wrote, "The Perimeter Institute for Theoretical Physics is in denial about the 100+ seminars that they just returned matching my search criteria."

 


Planet DebianMichal Čihař: Bug squashing in Gammu

I've not really spent much time on Gammu in the past months and it was about time to do some basic housekeeping.

It's not that there would be too much new development; I rather wanted to go through the issue tracker, properly tag issues, close questions without response and resolve the ones which are simple to fix. This led to a few code and documentation improvements.

Overall the list of closed issues is quite huge:

Do you want more development to happen on Gammu? You can support it by money.


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Friday – Session 3

Lighting talks

  • New Zealand Open Source Society
    • nzoss.org.nz
  • LCA 2015 give-aways of ARM chromebooks
    • Linux on ARM challenge
    • github/steven-ellis
  • Call to Arms
    • x86 != Linux
    • Please consider other architectures
  • StackPtr
    • Open Source GPS and MAP sharing
    • Android client and IOS to come
    • Create a group, Add placemaps, Share location with a group
    • Also run OpenStreetmaps tileserver
    • stackptr.com/registration  – Invite code LCA2016
  • Hat Rack
    • code is in github, but what about everything else?
    • How to ack stuff that isn’t code?
    • bit.do/LABHR    #LABHR
    • Recommend people, especially people not like you
    • github.com/LABHR/octohatrack
  • Pycon
    • Melbourne 12-16 August
    • DjangoCon Au, Science and Data Miniconf, Python in Education plus more on 1st day
    • CFP opens in mid-March
    • Financial assistance programme
    • pycon-au.org
  • Kiwi PyCon
    • 2016 in Dunedin
    • Town Hall
    • 9-11 September
    • kiwi.pycon.org
  • GovHack
    • Have fun
    • Open up the government data
    • 29-31 July across Aus and NZ
  • JMAP: a better way to email
    • Lots of email standards, all awful
    • $Company API
    • json over https
    • Single API for email/cal/contacts
    • Mobile/battery/network friendly
    • Working now at fastmail
    • Support friendly (only uses http, just one port for everything).
    • Batches commands, uses OOB notification
    • Efficient
    • Upgrade path – JMAP proxy
    • http://jmap.io  , https://proxy.jmap.io/
  • Tools
    • “Devops is just a name for a Sysadmin without any experience”
    • Let's get back to unix principles with tools
  • Machine Learning Demo
  • Filk of technical – Lied about being technical/gadget type.
  • ChaosKey
    • Randomness at 1MB/s
    • Copied from OneRNG
    • 4x4mm QFN package attached to USB key
    • Driver in Linux 4.1 (good in 4.3)
    • Just works!
    • Building up smaller batches to test
    • Hoping around $30

Closing

  • Thanks to Speakers
  • Clarification about the Speaker Gifts
  • Thanks to Sponsors
  • Raffle – $9680 raised
  • SFC donations with “lcabythebay” in the comment field will be matched (twice) in next week or two.
  • Thanks to Main Organisers from LCA President
  • Linux.conf.au 2017
    • Hobart
    • January 16th-20th 2017
    • At the Wrest Point casino convention centre. Accommodation on site and at Student accommodation
    • hobart.lca2017.org
  • Thanks to various people
  • hdmi2usb.tv is the video setup


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Friday – Session 2

Free as in cheap gadgets: the ESP8266 by Angus Gratton

  • I missed the start of the talk but he was giving a history of the release and getting software support for it.
  • Arduino for ESP8266 very popular
  • 2015-2016 maturing
  • Lots of development boards
    • SparkFun ESP8266 Thing, Adafruit HUZZAH, WeMos D1
  • Common Projects
    • Lots of lighting projects, addressable LED strips
    • Wireless power monitoring projects
    • Copy of common projects. Smoke alarm project
    • ESPlant – speakers project built in Open Hardware Miniconf – solar powered gardening sensor
    • Moodlight kickstarter
  • Shortcomings
    • Not a lot of documentation compared to other micro-controllers. 1/10 that of similar products
    • Weird hardware behaviour. Unusual output
    • Default baud rate 74880 bps
    • Bad TLS – TLS v1.0, 1.1 only; RSA 512/1024, 2048 might work
    • Other examples
  • FOSS in ESP8266
    • GCC , Lua , Arduino, Micro Python
    • axTLS , LWIP, max80211, wpa_supplicant
    • Wrapped APIs, almost no source, mostly missing attribution
    • Weird licenses on stuff
  • Does this source matter?
    • Anecdote: TLS random key same every time due to bad random function (later fixed). But still didn’t initially use the built-in random number generator.
  • Reverse Engineering
    • Wiki, Tools: foogod/xtobjdis, ScratchABit, radare2 (soon)
    • esp-open-rtos – based on the old version that was under MIT
    • mbedTLS – TLS 1.2 (and older) , RSA to 4096 and other stuff. Audited and maintained
    • Working on a testing setup for regression tests
  • For beginners
    • Start with Arduino
    • Look at dev board
  • Future
    • Hopefully other companies will see success and will bring their own products out
    • but with a more open licenses
    • ESP32 is coming, probably 1y away from being good and ready

secretd – another take on securely storing credentials by Tollef Fog Heen

  • Works for Fastly
  • What is the problem?
    • Code can be secret
    • Configuration can be secret
    • Credentials are secret
  • Secrets start in the following and move to the next..
    • directly code
    • then a configuration file
    • then an pre-encrypted store
    • then an online store
  • Problems with stores
    • Complex or insecure
    • Manual work to re-encrypt
    • Updating is hard
    • Not support for dev/prod split
  • Requirements for a fix
    • Dynamic environment support
    • Central storage
    • Policy based access controls, live
    • APIs for updating
  • Use Case
    • Hardware (re)bootstrapping
    • Hands-off/live handling
    • PCI: auditing
    • Machine might have no persistent storage
  • Options
    • pwstore – pre-encrypted
    • chef-vault – pre-encrypted
    • Hashicorp Vault – distributed, complex, TTL on secrets
    • etcd – x509
  • Secretd
    • go
    • SQL
    • ssh
    • tree structure, keys are just strings
    • positive ACLs
    • PostgreSQL backend
    • Apache Licensed
  • Client -> json over ssh -> secret-shell -> unix socket -> secretd -> PostgreSQL
  • Missing
    • Encrypting secrets on disk
    • Admin tools/other UIs
    • Auditing
    • Tool integration
    • Enrolment key support
  • Demo
  • Questions:
    • Why not sqlite? – Because I wanted a database. Postgres more directly supported the data structure I wanted, and also has type support
    • Why not just use the built-in Postgres security stuff? – Those features didn't exist a year ago, and it also requires that all users exist as DB users.

 


LongNowIs the Great Auk a Candidate for De-Extinction?

Great auk
On June 25 and 26, 2015, a meeting was held at the International Centre for Life in Newcastle, England, to discuss whether the extinct Great Auk–a once-common flightless pelagic bird known as “the penguin of the north”–might be a realistic candidate for bringing back to life using recent breakthroughs in genetic technology.  Twenty-two scientists and other interested parties gathered for the event.

Until its final extinction in 1844, the Great Auk (Pinguinus impennis) ranged across the entire north Atlantic ocean, fishing the waters off the northern US, Canada, Iceland, and northern Europe, including the coast near Newcastle in northern England.  The size of a medium penguin, it lived in the open ocean except for when it waddled ashore for breeding on just a few islands.  There its flightlessness made it vulnerable to human hunting and to exploitation for its down, which reached industrial scale.  Attempts to regulate the hunting as early as the 16th Century were fruitless.  The last birds, on an island off Iceland, were gone by 1844.

The host of the Great Auk meeting was Matt Ridley, member of the House of Lords, former science editor of The Economist, author of The Rational Optimist and Genome.  He opened the meeting by noting that de-extinction comes in four stages, which he described as: 1) In silico (the sequencing of the full genome of the extinct animal into digital data); 2) in vitro (editing the important genes of the extinct animal into living reproductive cells of its nearest living relative); 3) in vivo (using the edited reproductive cells to create living proxies of the extinct animal); and 4) in the wild (growing the proxy animal population with captive breeding and eventually releasing them to take up their old ecological role in the wild).

Razorbill
The nearest relative of the Great Auk is the Razorbill (Alca torda, above).  It looks similar and has the same trans-Atlantic range and feeding habits, but it can fly and is about 1/8th the size of the Great Auk.  At the meeting, Tom Gilbert, an ancient-DNA expert at the Centre for Geogenetics, University of Copenhagen, reported on his preliminary sequencing of the Great Auk and Razorbill genomes, confirming that they are genetically close.  There are a number of Great Auk museum specimens to work with—71 skins, 24 skeletons, 75 eggs, and even some preserved internal organs and ancient fossil remains.

Three participants from Revive & Restore—Ben Novak, Ryan Phelan, and Stewart Brand—spelled out the current state of de-extinction projects involving the Passenger Pigeon, Heath Hen, and Woolly Mammoth. In each case the extinct genomes have been thoroughly sequenced, along with the genomes of their closest living relatives, and some of the important genes to edit have been identified.  With the woolly mammoth, 16 genes governing three important traits have been edited into a living elephant cell line by George Church’s team at Harvard Medical School.  For birds, the crucial in vivo stage of being able to create living chicks with edited genomes utilizing a primordial germ cell (PGC) approach has yet to be fully proven, though work has begun on the process, working with a private company in California.

Michael McGrew from Roslin Institute in Edinburgh described the current state of play using primordial germ cell techniques with chickens.  Progress may go best by introducing the edited PGCs into the embryos of chickens adjusted to have no endogenous germ cells of their own.  Because so much work has been done on chicken genetics, working with a bird in the same family, the extinct Heath Hen, may offer the most practical first case to pursue.  For the Great Auk eventually, a flock of captive-bred Razorbills will be needed to supply embryos for the PGC process.

Oliver Haddrath (Ornithologist at the Royal Ontario Museum) and Richard Bevan (Ecologist at Newcastle University) described what is known of the lifestyle, ecology, and history of Great Auks compared with Razorbills, and Andrew Torrance (Law professor at the University of Kansas) examined potential legal hurdles for resurrected Great Auks and found them not explicitly prohibitive and potentially navigable.

The meeting concluded with a sense that the project can and should be pursued.  Next steps include funding the completion of Tom Gilbert’s genetic study of Great Auks and Razorbills and scheduling a follow-up meeting in the summer of 2016, perhaps at another location (Canada, Iceland, Denmark) in the once-and-perhaps-future range of the Great Auk.

Meeting
Following the formal meeting, many of the 22 participants were taken by Matt Ridley on a boat tour of the nearby Farne Islands, where thousands of Razorbills and other seabirds are gathered in dense breeding colonies.  One of the islands has a low beach that would be a perfect land base for Great Auks.

Meeting

Participants:  Alastair Balls, Richard Bevan, Stewart Brand, Sir John Burn, Linda Conlon, Federica DiPalma, Graham Etherington, Fiona Fell, Tom Gilbert, Dan Gordon, Oliver Haddrath, James Haile, Jeremy Herrmann, Noel Jackson, Mike McGrew, Ben Novak, Ryan Phelan, Chris Redfern, Matt Ridley, Ryan Rothman, Jimmy Steele, Andrew Torrance.

Matt Ridley added the following about the Farne Islands:

The Farnes are one of the very few island groups on the east coast of Britain, so they are very attractive to island-nesting seabirds. They host about 90,000 breeding pairs of seabirds each summer: mainly puffins, guillemots, razorbills, kittiwakes, black-headed gulls, herring gulls, lesser black-back gulls, Arctic, Sandwich and Common terns, shags, cormorants, eider ducks and fulmars. About 2000 grey seal pups are born on the islands each autumn.

The number of breeding birds has approximately doubled in 40 years, largely thanks to protection from human disturbance, control of the predatory large gull population, and habitat management. Most of the birds depend on just one species of fish to feed their young: the sand eel. The eels live in vast shoals over the sandy sea floor close to the islands. Some studies have suggested that nutrients from human activities on land — including agricultural fertiliser and human sewage — have contributed to the productivity of this part of the North Sea, though treated sewage effluent outflow to the sea has now ceased. It is likely that there are more breeding birds on the Farne islands today than for many centuries, because in the past people lived on the islands (as farmers or religious hermits), and visited them to collect eggs and chicks for food. There is archeological evidence that great auks lived here in the distant past, but they would have been quickly exterminated by people on islands so close to the shore, being so easy to catch.

Valerie AuroraThe Ally Skills Workshop returns, Impostor Syndrome book, public speaking and more

After taking three months off work, I naturally decided to found another company! Allow me to introduce Frame Shift Consulting, my new consulting firm. I’m continuing to do what I loved from the Ada Initiative – teaching Ally Skills Workshops, advising companies and conference organizers, speaking – and leaving out what I hated – fundraising, line management, and non-profit paperwork. I’ve also expanded the Ally Skills Workshop to teach people in a position of privilege how to support members of any marginalized group (formerly, it focused on teaching men to support women). I already have enough paying work that I’m behind on filling in my company web site, but I’ll be adding more content in between contracts over the next few months.

Woman holding microphone and raising arm in front of a photo of lightning
Calling down the lightning in a lightning talk
(Credit David Balliol, Thomas Bresson)

One of my goals for 2016 is to do more public speaking. I love speaking and people seem to enjoy my talks, but speaking was rarely a good use of my time when I was at the Ada Initiative. I regretfully had to turn down a lot of speaking engagements over the last 5 years. Now speaking is both fun and aligned with my work, so let me know if you’d like me to come to speak at your event! I’m especially interested in opportunities to speak to tech companies in the San Francisco Bay Area and paid speaking engagements anywhere in the world.

I’m also working on a book about fighting Impostor Syndrome, based on our work on Impostor Syndrome at the Ada Initiative. The approach I’m taking is that Impostor Syndrome isn’t a mysterious production of unfathomable personality quirks, it’s the intended result of a system of oppression designed to reinforce existing hierarchies. Once you understand where that nagging internal voice doubting your accomplishments is coming from, it’s easier to take action to reverse it. I’m looking for an agent who does traditional paper books for traditional publishers and knows the self-help market – let me know if you have a recommendation for someone!

I ended a lot of things in 2015 and I’m pretty happy about that. After 5 years of successful advocacy for women in open technology and culture, Mary Gardiner and I shut down the Ada Initiative (Mary is now working for Stripe, the lucky ducks). I stepped down from the board of the feminist makerspace I co-founded, Double Union, which is still going strong. With the shutdown of Magic Vibes, I am no longer involved in any joint projects with Amelia Greenhall and won’t be in the future. I stopped drinking alcohol entirely; I never drank that much in the first place but it turns out I’m allergic (!!!) to alcohol. After 5 enjoyable years of single-tude, I started dating again and am, to my pleasant surprise, in a long-term relationship with a great guy.

I’m really looking forward to 2016: teaching workshops, writing books, and speaking (and not fundraising!!). If you’d like to talk to me about teaching an Ally Skills Workshop, consulting with your organization, or speaking at your event, shoot me an email at contact@frameshiftconsulting.com. Here’s wishing you a great 2016 too!



,

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Friday – Session 1

Keynote – Genevieve Bell

  • Building the Future
  • Lots of roles as an anthropologist at Intel over the last 15 years or so
  • Vision of future from 1957 shows what the problems are in 1957 that the future would solve
  • Visions of the future seem very clean and linear, in reality it is messy and myriad.
  • ATM machine told her “Happy Birthday”
  • Imagining “Have you tried turning it off and on again?” at smart city scale is kind of terrifying.
  • Connectivity
    • Many people function well when they are offline, some people used to holiday in places with no cell reception
    • Social structures like Sabbath to give people time offline, but devices want us to be always online
    • Don’t want to always have seamless between devices, context matters. Want work/home/etc split
  • IOT
    • Technology lays bare domestic habits that were previously hidden
    • Who else knows what your household habits are -> Gossip
  • Big Data
    • Messy , incomplete, inaccurate
    • Average human tells 6-200 lies per day
    • 100% of Americans lie in online profiles
      • Men lie about height, Women lie about weight
    • More data does not equal more truth. More data just means more data
  • Algorithms
    • May optimise for the wrong things (from the user's point of view)
  • Security and Privacy
    • Conversation entwined with conversation about National Security
    • Concepts different from around the world
    • What is it like to release data under one circumstance and then to realise you have released it under several others
  • Memory
    • Cost of memory down to zero, we should just store everything
    • What are the usage models
    • What if everything you ever did and said was just there, what if you can never get away from it. There are mental illnesses based on this problem
  • Innovation
    • What is changing? to whose advantage and disadvantage? what does this mean to related areas?
    • Our solutions need to be human
    • We are the architects of our future
  • Question
    • Explain engineers to the world? – Treated first year at Intel like it was Anthropology fieldwork. Disconnect between what people imagine technologists think/do and what they really do. Need to explain what we do better

Helicopters and rocket-planes by Andrew Tridgell

  • The wonderful and crazy world of Open Autopilots
  • Outback Challenge
    • 90km/h for 45 minutes
    • Search pattern for a lost bushwalker with UAV
    • Drop them a rescue package
    • 2016 is much harder VTOL, get blood sample. Most do takeoff and landing remotely (30km from team).
    • “Not allowed to get blood sample using a propeller”
  • VTOL solutions – Helicopters and Quadplanes – tried both solutions
    • Communication 15km away, 2nd aircraft as a relay
    • Pure electric doesn’t have range. 100km/h for 1h
  • Helicopters
    • “Flying vibration generators with rotating swords at the top”
    • Hard to scale up which is needed in this case. 15cc motor, 2m blades, 12-14kg loaded
    • Petrol engines efficient VTOL and high energy density
    • Very precise control, good in high wind (competition can have ground wind up to 25 knots)
    • Normal stable flight vibrates at 6G; showed an example where flight goes bad and starts vibrating at 30+ G in a few seconds due to a control problem (when the pitch controller was adjusted and then started a feedback loop)
  • Quadplanes
    • Normal plane with wings but with 4 vertically pointing propellers added
    • Long range, less vibration
    • initially two autopilots plus one more co-ordinating
    • electric for takeoff, petrol engine for long range forward flight.
    • Hard to scale
    • crashed
  • Quadplane v2
    • Single auto-pilot
    • avoid turning off quad motors before enough speed from forward motor
    • Pure electric for all motors
    • Forward flight with wings much more efficient.
    • Options with scale-up to have forward motor as petrol
  • Rockets
    • Lohan rocket plane – Offshoot of The Register website
    • Mission hasn’t happened yet
    • Balloon takes plane to 20km, drops rocket and goes to Mach 2 in 8 seconds. Rocket glides back to earth under autopilot and lands at SpacePort USA
    • 3d printed rocket. Needs to wiggle controls during ascent to stop them freezing up.
    • This will be it’s first flight so has autotune mode to hopefully learn how to fly for the first time on the way down
  • Hardware running Ardupilot
    • Bebop drone and 3DR solo runs open autopilot software
    • BBBmini fully open source kit
    • Qualcom flight more locked down
    • PXFMini for smaller ones
  • Sites
    • ardupilot.com
    • dronecode.org
    • canberrauav.org.au

The world of 100G networking by Christopher Lameter

  • Why not?
    • Capacity needed
    • Machines are pushing 100G to memory
    • Everything requires more bandwidth
  • Technologies
    • Was 10 * 10G standards CFP Cxx
    • New standard is 4 * 28G QSFP28. Compact and designed to replace 10G and 40G networking
    • Infiniband (EDR)
      • Most mature to date, switches and NICs available
    • Ethernet
      • Hopefully available in 2016
      • NICS under dev, can reuse EDR adapter
    • OmniPath
      • Redesigned to try to replace Infiniband
    • Comparison connectors
      • QSFP28 smaller
    • QSFP idea with splitter into 4 * 25G links for some places
      • Standard complete in 2016 , 50G out there but standard doesn’t exist yet.
      • QSFP is 4 cables
  • 100G switches
    • 100G x 32 or 50G x64 or 25G x 128
    • Models being released this year, hopefully
    • Keeping up
  • 100G is just 0.01ns per bit , 150ns for 1500MTU packet, 100M packets/second, 50 packets per 10 us
  • Hardware distributes packets between cores. Will need 60 cores to handle 100G in the CPU; need to offload
  • Having multiple servers (say 4) sharing a NIC using PCIe!
  • How do you interface with these?
    • Socket API
  • Looking Ahead
    • 100G is going to be a major link speed in data centers soon
    • Software needs to mature, especially the OS stack, to handle bottlenecks

 


TEDOn “60 Minutes,” TED Prize winner Charmian Gooch shows how easy it can be to move questionable funds into the US

Charmian Gooch’s message on 60 Minutes: “America came up as the easiest place to set up an anonymous company after Kenya. We thought it can’t be this bad. And unfortunately, it is.”

A corruption investigator walks into a law firm and says he wants to buy a brownstone, a jet and a yacht.

This isn’t the start of a joke. It’s the premise of the latest investigation by Global Witness, the anti-corruption organization co-founded by 2014 TED Prize winner Charmian Gooch.

In an investigation revealed on 60 Minutes as well as in The New York Times, the nonprofit sent an undercover investigator into 13 prestigious law firms in New York City, posing as an adviser to a government minister in Africa and wearing a hidden camera. The investigator said that his client came from “one of the most mineral-rich countries in West Africa,” and that he wanted to buy a plane, a yacht and a brownstone. The investigator also added that “if his name would appear in connection with buying real estate here, it would look at least very, very embarrassing.”

The story was designed to raise red flags. And yet, lawyers from 12 of these law firms offered advice on how to hide the questionable money and make these purchases. One suggested setting up a series of shell companies: “Company A owns Company B, which is owned by Company C and D.” Another suggested setting up a bank account in either the Isle of Man, Liechtenstein or Switzerland, then coming to his firm to make the purchases. Another suggested wiring money into the escrow account of the law firm itself, to hide the minister’s identity. Another said the minister needed “a straw man,” and that if the money left his country, “his name should not be attached. Someone else’s should be.” Yet another suggested doing a test transfer of $1 million, just to make sure the plan would work. Only one lawyer flat-out refused to offer advice, and asked the sketchy investigator to leave his office.

Global Witness contacted 50 law firms in total. Thirteen scheduled face-to-face meetings with the investigator. And twelve gave actual advice. Global Witness stresses that these meetings with the lawyers were preliminary; the firms didn’t actually take on the investigator as a client. And these lawyers didn’t actually break the law, as no money changed hands. But to Global Witness, that’s the point: should lawyers be so willing to answer these types of questions? And should moving questionable funds into the US really be that easy?

Global Witness began this investigation in 2014, just a few weeks after Charmian Gooch revealed her wish at TED2014 to end anonymous companies. Since then, the issue has gotten considerable attention around the world, but it’s still very easy to set up a company anonymously in the US. “In some states, you need less identification to set up a company than to get a library card,” says Gooch. “America comes up as the easiest place to set up an anonymous company after Kenya. We thought it can’t be this bad. And unfortunately, it is.”

Watch the 60 Minutes segment, or read The New York Times‘ article to hear what some of the lawyers had to say about their actions. Below, check out Gooch’s TED Talk and a TED-Ed lesson that further explains the issue of anonymous companies. And if you’d like to read the full case study on this investigation, head to Global Witness’ website.

Just days after the investigation was shown on 60 Minutes, a bipartisan group in both houses of the US Congress introduced a bill that would require companies in the US to disclose their ultimate owners, and keep this information up to date. Global Witness says this bill would “make life much harder” for those hoping to hide money.



Planet DebianVincent Fourmond: Making oprofile work again with recent kernels

I've been using oprofile for profiling programs for a while now (and especially QSoas, because it doesn't require specific compilation options, and doesn't make your program run much more slowly, like valgrind does - which can also be used to some extent for profiling). It's a pity the Debian package was dropped long ago, but the Ubuntu packages work out of the box on Debian. But, today, while trying to see what takes so long in some fits I'm running, here's what I get:
~ operf QSoas
Unexpected error running operf: Permission denied

Looking further using strace, I could see that what was not working was the first call to perf_event_open.
It took me quite a long time to understand why it stopped working and how to get it working again, so here it is for those of you who googled the error and couldn't find any answer (including me, who will probably have forgotten the answer in a couple of months). The reason behind the change is that, for security reasons, non-privileged users no longer have the necessary privileges since Debian kernel 4.1.3-1; here's the relevant bit from the changelog:

  * security: Apply and enable GRKERNSEC_PERF_HARDEN feature from Grsecurity,
    disabling use of perf_event_open() by unprivileged users by default
    (sysctl: kernel.perf_event_paranoid)

The solution is simple, just run as root:
~ sysctl kernel.perf_event_paranoid=1

(the default value seems to be 3, for now). Hope it helps!
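
If you want the setting to survive a reboot, the usual Debian way - a small sketch, assuming a stock sysctl setup and a file name of your own choosing - is to drop the setting into a file under /etc/sysctl.d and load it, as root:

~ echo 'kernel.perf_event_paranoid = 1' > /etc/sysctl.d/local-perf.conf
~ sysctl -p /etc/sysctl.d/local-perf.conf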

Rondam RamblingsNot with a bang...

Jeb Bush's campaign for the presidency is literally ending with a whimper: “I will not trash talk. I will not be a divider in chief or an agitator in chief. I won’t be out there blowharding, talking a big game without backing it up. I think the next president needs to be a lot quieter but send a signal that we’re prepared to act in the national security interests of this country — to get back in

Planet DebianPetter Reinholdtsen: Using appstream in Debian to locate packages with firmware and mime type support

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package with a given firmware file, in this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the mime type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.
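
Putting the two steps together, a small sketch (with a hypothetical drawing.svg as the input file) could look like this:

% appstreamcli what-provides mimetype "$(file -b --mime-type drawing.svg)" | \
  awk '/Package:/ {print $2}'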

Planet DebianRitesh Raj Sarraf: Lenovo Yoga 2 13 running Debian with GNOME Converged Interface

I've wanted to blog about this for a while. So, though I'm terrible at creating video reviews, I'm still going to do it, rather than procrastinate every day.

 

In this video, the emphasis is on using Free Software (GNOME in particular) tools, with which you should soon be able to serve the needs of a Desktop/Laptop as well as a Tablet.

The video also touches a bit on Touchpad Gestures.

 


Planet DebianMartin-Éric Racine: xf86-video-geode 2.11.18

Yesterday, I pushed out version 2.11.18 of the Geode X.Org driver. This is the driver used by the OLPC XO-1 and by a plethora of low-power desktops, micro notebooks and thin clients. This release mostly includes maintenance fixes of all sorts. Of noticeable interest is a fix for the long-standing issue that switching between X and a VT would result in a blank screen (this should probably be cherry-picked for distributions running earlier releases of this driver). Many thanks to Connor Behan for the fix!


Unfortunately, this driver still doesn't work with GNOME. On my testing host, launching GDM produces a blank screen. 'ps' and other tools show that GDM is running but there's no screen content; the screen remains pitch black. This issue doesn't happen with other display managers e.g. LightDM. Bug reports have been filed, additional information was provided, but the issue still hasn't been resolved.


Additionally, X server flat out crashes on Geode hosts running Linux kernels 4.2 or newer. 'xkbcomp' repeatedly fails to launch and X exits with a fatal error. Bug reports have been filed, but not reacted to. However, interestingly enough, X launches fine if my testing host is booted with earlier kernels, which might suggest what the actual cause of this particular bug might be:


Since kernel 4.2 entered Debian, the base level i386 kernel on Debian is now compiled for i686 (without PAE). Until now, the base level was i586. This essentially makes it pointless to build the Geode driver with GX2 support. It also means that older GX1 hardware won't be able to run Debian either, starting with the next stable release.

CryptogramThe Internet of Things Will Be the World's Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These "things" will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about­ -- and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They'll affect our environment. Our smart thermostats aren't collecting information about ambient temperature and who's in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they're linked to driverless cars, they'll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system's smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.

We're building a world-sized robot, and we don't even realize it.

I've started calling this robot the World-Sized Web.

The World-Sized Web -- can I call it WSW? -- is more than just the Internet of Things. Much of the WSW's brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It's mobile, of course, because many of these things will move around with us, like our smartphones. And it's persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they're all becoming more prevalent. I believe that we're at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That's the robot that is the WSW.

This robot will increasingly be autonomous, at first simply and increasingly using the capabilities of artificial intelligence. Drones with sensors will fly to places that the WSW needs to collect data. Vehicles with actuators will drive to places that the WSW needs to affect. Other parts of the robots will "decide" where to go, what data to collect, and what to do.

We're already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that's why we're building it. But it will change our society in ways we can't predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they're based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it's easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won't have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated -- and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but there is no existing government entity that has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, it's about how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it'll be here before we know it. Whatever changes it means for society, we don't want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot "Holos."

EDITED TO ADD: Commentary.

Worse Than FailureCodeSOD: Log of String

Hauling timber ("zrywka drewna")

The English language contains words with multiple and often contradictory meanings. A dress, for example, is only one of many items you could put on while dressing yourself. Meanwhile, if you want to wear pants instead, you should avoid pantsing yourself, as that would be counter-productive.

These shades of meaning come into play frequently in programming. Writing a .NET application to manage school schedules requires some contortions to avoid using the reserved word class, and writing one to hold Dungeons and Dragons character sheets requires you to spell out the attribute Intelligence instead of using the far more common abbreviation int.

Dimitris, however, wasn't thinking about the many interesting tidbits to be found in the English language as he traced an error message up through the stack of his formula engine application. Instead, he was thinking about logging. Not the act of felling trees and preparing their corpses for use by humans, but the act of writing messages to a central location so that debugging an application would be easier.

Logging has a fascinating etymology, by the way. It originally had to do with a bit of wood that was designed to float upward when a ship sank, making it easier for someone to discover what had befallen the ship by reading its log. In the same manner, Dimitris wanted to discover what had happened to the application—but he got distracted along the way.


/**
 * override xxLog as it writes unnecessarily to database syslog
 */
function xxLog( tag, s )
{  
        if (sbDebugEnabled) {  
                text_write( prx_debug_file, str(tag) + "|" + (( is_na(s)) ? "NA" : str(s) ));
        }  
        if (sbDebugPrintEnabled) {  
                print( str(tag) + " : " + (( is_na(s)) ? "NA" : str(s) ));
        }
}

Dimitris found the exact same snippet at the top of every system class, across hundreds of files. Why had the code's previous maintainer continually overridden the base log functionality like this? What use case was the base logger written for? Why hadn't the original function xxLog(), whatever it was, simply been rewritten to meet that more common need?

Curiosity piqued, he pulled up the base xxLog() function in the standard library the system files were using:


/**
 * Calculate a natural logarithm
 */
  function xxLog(float num) {
    if (num == $NA) {
      return $NA;
    }
    return first(first(sql("select log("+string(num)+")")));
  }

Logarithm, from the Greek "Logos" meaning ratio or reckoning and "Arithmos" meaning number, is a numerical operation performed on a number to determine the exponent to which a given base (e, for a "natural logarithm") must be raised to produce that number. It's certainly not an operation that can be performed on a string, and because the formula engine used the backing database's mathematical library to calculate the result, it would log errors in the database, probably along the lines of "Invalid input."
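For contrast, a general-purpose language surfaces the same confusion immediately. A minimal Python illustration (not taken from Dimitris's system, just a demonstration of the type mismatch):

import math

# A numeric argument behaves the way the mathematical function intends.
print(math.log(math.e))   # prints 1.0, the natural logarithm of e

# A string argument fails loudly at the call site instead of quietly
# producing "Invalid input" errors in a database syslog.
try:
    math.log("TAG|something went wrong")
except TypeError as err:
    print("not a number:", err)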

Which is all a roundabout way of saying: Words are hard.


Planet Linux AustraliaStewart Smith: Where Matthew was born.

So, for tedious reasons, I was talking to Matthew Garrett about how he was born in Galway, the Republic of Ireland.

Planet DebianDaniel Pocock: Australians stuck abroad and alleged sex crimes

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Inquisition hunted witches in the middle ages and beyond.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The Royal Commission into child sex abuse by priests has heard serious allegations claiming the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

Planet DebianRussell Coker: Unikernels

At LCA I attended a talk about Unikernels. Here are the reasons why I think that they are a bad idea:

Single Address Space

According to the Unikernel Wikipedia page [1], a significant criterion for a Unikernel system is that it has a single address space. This gives performance benefits as there is no need to change CPU memory mappings when making system calls. But the disadvantage is that any code in the application/kernel can access any other code directly.

In a typical modern OS (Linux, BSD, Windows, etc) every application has a separate address space and there are separate memory regions for code and data. While an application can request the ability to modify its own executable code in some situations (if the OS is configured to allow that) it won’t happen by default. In MS-DOS and in a Unikernel system all code has read/write/execute access to all memory. MS-DOS was the least reliable OS that I ever used. It was unreliable because it performed tasks that were more complex than CP/M but had no memory protection so any bug in any code was likely to cause a system crash. The crash could be delayed by some time (EG corrupting data structures that are only rarely accessed) which would make it very difficult to fix. It would be possible to have a Unikernel system with non-modifiable executable areas and non-executable data areas and it is conceivable that a virtual machine system like Xen could enforce that. But that still wouldn’t solve the problem of all code being able to write to all data.

On a Linux system when an application writes to the wrong address there is a reasonable probability that it will not have write access and you will immediately get a SEGV which is logged and informs the sysadmin of the address of the crash.

When Linux applications have bugs that are difficult to diagnose (EG buffer overruns that happen in production and can’t be reproduced in a test environment) there are a variety of ways of debugging them. Tools such as Valgrind can analyse memory access and tell the developers which code had a bug and what the bug does. It’s theoretically possible to link something like Valgrind into a Unikernel, but the lack of multiple processes would make it difficult to manage.

Debugging

A full Unix environment has a rich array of debugging tools: strace, ltrace, gdb, valgrind and more. If there are performance problems then there are tools like sysstat, sar, iostat, top, iotop, and more. I don’t know which of those tools I might need to debug problems at some future time.

I don’t think that any Internet facing service can be expected to be reliable enough that it will never need any sort of debugging.

Service Complexity

It’s very rare for a server to have only a single process performing the essential tasks. It’s not uncommon to have a web server running CGI-BIN scripts or calling shell scripts from PHP code as part of the essential service. Also many Unix daemons are not written to run as a single process, at least threading is required and many daemons require multiple processes.

It’s also very common for the design of a daemon to rely on a cron job to clean up temporary files etc. It is possible to build the functionality of cron into a Unikernel, but that means more potential bugs and more time spent not actually developing the core application.

One could argue that there are design benefits to writing simple servers that don’t require multiple programs. But most programmers aren’t used to doing that and in many cases it would result in a less efficient result.

One can also argue that a Finite State Machine design is the best way to deal with many problems that are usually solved by multi-threading or multiple processes. But most programmers are better at writing threaded code so forcing programmers to use a FSM design doesn’t seem like a good idea for security.

Management

The typical server programs rely on cron jobs to rotate log files and monitoring software to inspect the state of the system for the purposes of graphing performance and flagging potential problems.

It would be possible to compile the functionality of something like the Nagios NRPE into a Unikernel if you want to have your monitoring code running in the kernel. I’ve seen something very similar implemented in the past, the CA Unicenter monitoring system on Solaris used to have a kernel module for monitoring (I don’t know why). My experience was that Unicenter caused many kernel panics and more downtime than all other problems combined. It would not be difficult to write better code than the typical CA employee, but writing code that is good enough to have a monitoring system running in the kernel on a single-threaded system is asking a lot.

One of the claimed benefits of a Unikernel is that it avoids the supposed risk of allowing ssh access. The recent ssh security issue was an attack against the ssh client if it connected to a hostile server. If you had a ssh server only accepting connections from management workstations (a reasonably common configuration for running servers) and only allowed the ssh clients to connect to servers related to work (an uncommon configuration that’s not difficult to implement) then there wouldn’t be any problems in this regard.

I think that I’m a good programmer, but I don’t think that I can write server code that’s likely to be more secure than sshd.

On Designing It Yourself

One thing that everyone who has any experience in security has witnessed is that people who design their own encryption inevitably do it badly. The people who are experts in cryptology don’t design their own custom algorithm because they know that encryption algorithms need significant review before they can be trusted. The people who know how to do it well know that they can’t do it well on their own. The people who know little just go ahead and do it.

I think that the same thing applies to operating systems. I’ve contributed a few patches to the Linux kernel and spent a lot of time working on SE Linux (including maintaining out of tree kernel patches) and know how hard it is to do it properly. Even though I’m a good programmer I know better than to think I could just build my own kernel and expect it to be secure.

I think that the Unikernel people haven’t learned this.

Planet DebianIustin Pop: X cursor theme

There's not much to talk about X cursor themes, except when they change behind your back :)

A while back, after a firefox upgrade, it—and only it—showed a different cursor theme: basically double the size, and (IMHO) uglier. Searched for a while, but couldn't figure out what makes firefox special, except that it is a GTK application.

After another round of dist-upgrades, now everything except xterms was showing the big cursors. This annoyed me to no end—as I don't use a high-DPI display, the new cursors are just too damn big. Only to find out two things:

  • thankfully, under Debian, the x-cursor-theme is an alternatives entry, so it can be easily configured (see the sketch below)
  • sadly, the adwaita-icon-theme package (whose description says "default icon theme of GNOME") installs itself as a very high priority alternatives entry (90), which means it takes over my default X cursor

Sigh, Gnome.
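A minimal sketch of inspecting and switching that alternatives entry (assuming a Debian system; these are the standard update-alternatives commands, wrapped in Python purely for illustration):

import subprocess

# Show which theme the x-cursor-theme alternative currently points at,
# plus the priorities of all installed candidates (adwaita's is 90).
subprocess.run(["update-alternatives", "--display", "x-cursor-theme"], check=True)

# Interactively pick a different (smaller) cursor theme; requires root.
subprocess.run(["sudo", "update-alternatives", "--config", "x-cursor-theme"], check=True)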

Planet DebianBenjamin Mako Hill: Welcome Back Poster

My office door is on the second floor in front of the major staircase in my building. I work with my door open so that my colleagues and my students know when I’m in. The only time I consider deviating from this policy is the first week of the quarter, when I’m faced with a stream of students who are usually lost on their way to class and whom, embarrassingly, I am usually unable to help.

I made this poster so that these conversations can, in a way, continue even when I am not in the office.

early_quarter_doors_sign

 

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Thursday – Session 3

Law and technology: impedance mismatch by Michael Cordover

  • IP lawyer
  • Known as the EasyCount guy
  • Lawyers and Politicians don’t get it
    • Governing behaviour that is not well understood (especially by lawyers) is hard
    • Some laws are passed under assumption that they won’t always be enforced (eg Jaywalking, Speeding limits). Pervasive monitoring may make this assumption obsolete
  • Technology people don’t get the law either
    • Good reasons for complexity of the law
    • Technology isn’t neutral
  • Areas where the law is already detailed and programmatically specific:
    • Construction
    • Food
    • Civil aviation
    • Broadcasting
  • Anonymous Data
    • Personal information – info from which identity can be worked out
  • 100s of examples where law is vague and doesn’t well map to technology
    • Encryption
    • Unauthorised access
    • Copyright
    • Evidence
  • The obvious, easy solution:
    • Everybody must know about technology
    • NEVER going to happen
  • Just make a lot of contracts
    • Copyright – works fairly well, eg copyleft
    • TOS – works to restrict liability of service providers so services can actually be safely provided
    • EULAs
    • P3P – Privacy protection protocol
    • But doesn’t work well in multiple jurisdictions, small ppl against big companies, etc
  • Laws that are fit for purpose
    • An ISP is not an IRC server
    • VOIP isn’t PSTN
    • Focus on the outcome, sometimes
  • A somewhat radical shift in legal approach
    • It turns out the Internet is (sometimes) different
    • United States vs Causby – 1946 case that said people don’t own the air above their property to infinity. Airplanes could fly above it.
  • You can help
    • Don’t ignore the law
    • Don’t be too technical
    • Don’t expect a technical solution
    • Think about policy solutions
    • Talk to everybody

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Thursday – Session 2

Machine Ethics and Emerging Technologies by Paul ‘@pjf’ Fenwick

  • Arrived late
  • Autonomous cars
    • Little private ownership of autonomous vehicles
    • 250k driving Taxis
    • 3.5 million truck drivers + plus more that depend on them
    • Most of the cost is the end-to-end on a highway. Humans could do the hard last-mile
  • Industrial revolution
    • Lots of people put out of jobs
    • Capital offence to harm machines
    • We still have tailors
    • But some jobs have been eliminated – eg Water bearer in cities
  • Replacing humans with small amounts of code
  • White collar jobs now being replaced
  • If more and more people are getting put out of jobs and we live in a society that expects people to have jobs what can we do?
    • Education to retrain
  • We *are* working less 1870=70h work week , 1988=40h work week
  • Leisure has much increased 44k hours -> 122k hours (shorter week + live longer)
  • What do people do with more leisure?
    • Pictures of cats!
    • Increase in innovation
  • How would the future work if machines are doing the vast majority of jobs?
    • Technological dividend
    • Basic income
  • Drones
    • “Drones have really taken off in the last few years”
    • Delivery drones
    • Disaster relief
    • Military drones – If autonomous then radio silent
    • Solar powered drones with multi-day/week duration
      • Good for environmental monitoring
      • Have anonymous warfare, somebody launches it, and it kills some people, but you don’t know who to blame
  • Machine Intelligence
    • Watson getting better at cancer diagnosis and treatments plan than many doctors
  • Questions:
    • Please focus on the upsides of lethal autonomous robots – Okay with robots, less happy with taking the human out of the loop.
    • Why work week at 40 hours – Conjecture by Paul – Culture says humans must work and work gives you value and part time work is seen as much less important

Open Source Tools for Distributed Systems Administration by Elizabeth K. Joseph

  • Tools that enable distributed teams to work
  • Works day to day on Openstack
  • How most projects do infrastructure
    • A team or company manages it, or they just use github
    • Requests via mailing list or bug/ticketing system
    • Priority determined by the core team
  • Is there a better way – How Openstack is different – Openstack infrastructure team
    • Host own git, wiki, ircbots, mailing lists, web servers and run them themselves
    • All configs are open source and tracked in git
    • Anyone can submit changes to our project.
    • We all work remotely
  • Openstack CI system
    • 800+ projects
    • All projects must work together
    • changes can’t break master branch
    • code must be clean
    • testing must be completely automated
  • Tools for CI (* = their own tools)
    • Launchpad for Auth
    • git
    • gerrit
    • zuul* – gatekeeper
    • Gearman
    • jenkins
    • nodepool*
  • Automated Test for infrastructure
    • flake8
    • puppet parser validate, puppet lint, puppet application tests
    • XML checkers
    • Alphabetized files (because people forget the alphabet)
    • Permissions on IRC channels
  • Peer review means
    • Multiple eyes on changes prior to merging
    • Good infrastructure for developing new solutions
    • No special process to go through commit access
    • Trains us to be collaborative by default
    • Since anyone can contribute, anyone can devote resources to it
  • Gerrit in-line comments
  • Automated deployments. Either puppet directly or via vcsrepo
  • Can you really manage infrastructure via git commits
    • Cacti – cacti.openstack.org
      • Cacti are public so anybody can check them
      • No active monitoring
    • Puppetboard
      • so you can watch changes happening
      • Had to change a little so secret stuff not public
    • Documentation
      • Fairly good since distributed team
    • Not quite everything
      • Need to look at logs
      • Some stuff is manual
      • Passwords need to be privately managed (but in private git repo)
      • Some complicated migrations are manual
  • Maintenance collaboration on Etherpad
  • Collaboration
    • Via IRC various channels
    • main + incident + sprint + weekly meetings
    • channel/meeting logs
    • pastebin
    • In-person collaboration at Openstack design summit every 6 months
  • And then there are timezones
    • The first/root member in a particular region struggles to feel cohesion with the team
    • Increased reluctance to land changes into production
    • makes slower on-boarding
    • Only solved by increasing coverage in that time-zone so they’re not alone
  • Questions
    • Reason why no audio/video? – Not recorded or even hard to access if they are
    • How to develop a “write documentation” culture – Make that person responsible for writing docs so others can still handle it. Helps if it is really easy to do. Wikis never seem to work in practice; docs should go through the same process as everything else (common workflow)
    • Task visibility – was bugzilla + launchpad – trying storyboard but not working well.


Planet Linux AustraliaCraige McWhirter: Machine Ethics and Emerging Technologies - Paul Fenwick - LCA2016

Paul Fenwick took us on a journey of questioning what the future might look like in 10,000 years' time and whether what we're doing today is good for humanity.

  • More and more white collar jobs are being automated.
  • What are all these masses going to do with their leisure time?
  • More leisure time means more innovation.
  • Covered the benefits of drones.
  • Covered the dark side of drone use.
  • LARs (Lethal Autonomous Robots) are a significant issue.
    • Enables anonymous warfare
    • Long term target monitoring and execution
  • Can be used for long term environmental monitoring.

Another excellent, informative and entertaining talk by Paul.

Paul Fenwick

Planet Linux AustraliaCraige McWhirter: The Machine - Keith Packard - LCA2016

Keith Packard

  • Switching from Processor centric computing to memory driven computing
  • Described how the memory fabric works.
  • Will be able to connect any computing node to the shared memory.
  • Illustrated node assembly.
  • Next prototype will interconnect 320 terabytes of memory-accessible storage.
  • Planning to build larger machines.
  • Putting in facilities to protect the hardware from a compromised operating system.
  • Showed how fabric attached memory connects.
  • Linux is being ported to the machine.
    • Linux with HPE changes.
    • All work is being open sourced.
  • Creating a new file system that allocates memory in 8GB units.
    • Library File System (LFS)
  • Currently focussing on Librarian, machine-wide shared memory allocator.
  • Trying to provide a two level allocation scheme
  • POSIX API.
  • No sparse files.
  • Locking is not global.
  • Fabric attached memory is not cache coherent
  • Read errors are signalled synchronously.
  • Write errors are asynchronous and require a barrier.
  • Went through all the areas where they're working on Free Software.

LCA by the Bay

,

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Thursday – Session 1

Jono Bacon Keynote

  • Community 1.0 (ca 1998)
    • Observational – Now book on how to do it
    • Organic – people just created them
    • Technical Environment – Had to know C (or LaTeX)
  • Community 2.0 (ca 2004, 2005)
    • Wikipedia, Redhat, Openstack, Github
    • Renaissance – Stuff got written down on how to do it
    • Self Organising groups – Gnome, Kde, Apache foundation – push creation of tech and community
    • Diversity – including of skills , non-technical people had a seat at the table and a role.
    • Company Engagement – Starting hiring community managers, sometimes didn’t work very well
  • Community 3.0 ?
  • Why?
    • “Thoughtful and productive communities make us as a species better”
  • Access and power is growing exponentially
  • But stuff around is changing
    • Cellphones are access method for most
    • Cloud computing
    • 3D printers, drones, cloud, crowdfunding, Arduino
    • Lots of channels to get things to everybody and everybody can participate
  • “We need to empower diversity of both people and talent”
  • Human brain has not had a upgrade in a long time
  • Bold and Audacious Goals
    • Openness is at the heart of all of these
    • Open source in the middle of many
  • Eg Drone
    • Runs linux
    • Open API
  • “Open Source is where Society innovates”
  • “Need to make great community leadership accessible to everybody”
  • “Predictable collaboration – an aspirational goal where we won’t *need* community managers”
  • Not just about technology
    • We are all human.
  • Tangible value vs Intangible value
    • Tangible can be measured and driven to fix the numbers
    • Intangible – trust, dignity
  • System 1 thinking vs System 2 thinking
    • Instant vs considered
  • SCARF Model of thinking
    • Status – clarity of relative importance, need people to be able to flow between them
    • Certainty – Security and predictability
    • Autonomy – People really want choices
    • R – Relatedness (I got distracted by twitter, I’m sure it was important)
    • Fair – fairness
  • Two Golden Rules
    • We accomplish our goals indirectly
    • We influence behaviour with small actions
  • We need to concentrate on building an experience for people who join the community
  • Community Workflow
    • Communication – formal, informal? CoC? Tech to use?
    • Release schedule, support?
    • How to participate, tech, hackathons
    • Governance structure
  • Paths for different people
    • New developers
    • Core Developers
    • Consumers
    • Downstream Customers
    • Organizations
  • Opportunity vs Belonging
  • Questions
    • Increasing Signal to Noise ratio – Trolls are easy[er]; it's harder for people who are just not deft in communication. Mentorship can help
    • Destructive communities (like 4chan), how can technology be used to work against these – Leaders need to set examples. Make clear that abusive behaviour towards others is not acceptable. Won’t be able to build tools that will completely remove bad behaviour. Hard to tell destructive vs direct criticism automatically, but humans can be augmented.
    • What about Linus type people? – View is that even though it works for him and it is okay with the people he knows, viewed from the outside by others it sets a bad example.

Using Persistent Memory for Fun and Profit by Matthew Wilcox

  • What is it?
    • Retains data without power
    • NV-DIMMs available – often copy DRAM to flash when power lost
    • Intel 3D XPoint shipping in 2017; will become more of a standard feature
  • How do we could use it
    • Total System persistence
      • But the CPU cache is not backed up, so pending writes vanish
    • Application level persistence
      • Boot a new kernel but keep the running apps
      • CPU cache is still not backed up
    • Completely redesigned operating system to use
      • But we want to use in 2017
    • A special purpose filesystem
      • Implementation not that great
    • A very fast block device
      • Used as a very fast cache for apps that really need it. Not really general purpose
    • Small modifications to existing file systems
      • On top of ext2 (xip)
      • DAX (see the sketch after these notes)
  • How do we actually use it
    • New CPU instructions ( mostly to make sure encourage that things are flushed from the CPU cache)
    • Special-purpose programming language support shouldn’t be needed for interpreted languages, but libraries might be needed for compiled code
  • NVML library
  • Stuff built on NVML library so far.
    • Red-Black tree, B-tree, other data-structures
    • Key-value store
    • Fuse file system
    • Example MySQL storage engine
  • Resources
  • Questions
    • In 2017 will we have mix of persistent and non-persistent RAM? – Yes . New Layer in the storage hierarchy
    • Performance of 3D XPoint will be a little slower than DRAM but within the ballpark; various trade-offs with other characteristics
    • Probably won’t have native crypto
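As referenced above, a minimal sketch of the DAX-style, application-level usage: map a file that lives on a persistent-memory-backed filesystem and store data through ordinary memory writes. The mount point /mnt/pmem is hypothetical, and Python's mmap.flush() only issues an msync; production code would use the new cache-flush instructions via a library such as NVML/libpmem.

import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"   # hypothetical file on a DAX-mounted filesystem
SIZE = 4096

# Create (or reuse) a fixed-size file backing the persistent region.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# With DAX the page cache is bypassed, so loads and stores on this
# mapping hit the persistent media more or less directly.
buf = mmap.mmap(fd, SIZE)
buf[0:5] = b"hello"

# Ask the kernel to make the writes durable. On real hardware the CPU
# caches also need flushing (CLWB/CLFLUSHOPT), which libpmem handles.
buf.flush()

buf.close()
os.close(fd)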

Dropbox Database Infrastructure by Tammy Butow

  • Dropbox for last 4 months, previously Digital Ocean, prev National Australia Bank
  • Using MySQL for last 10 years. Now doing it FT.
  • 400 Million customers
  • Petabytes of data across thousands of servers
  • In 2012 Dropbox just had 1 DBA, but was huge then.
  • In 2016 it has grown to 9 people
  • 6000 DB servers -> DB Proxy -> DB as a service (edgestore) -> memcache -> Web Servers (nginx)
  • Talk – Go at Dropbox, Zviad Metreveli on Youtube
  • Applications talk directly to edgestore not directly to database
  • vitess is mysql proxy (by youtube) similar to what dropbox wrote. Might move to that
  • Details
    • Percona 5.6
    • Constantly upgrading (4 times in last year)
    • DBmanager – service we manage mysql via
  • Each cluster is primary + 2 replicas
  • Use xtrabackup ( to hdfs locally and s3)
  • Tools
    • Tasks grow and take time
    • DBmanager
      • Automating DB operations
      • Web interface with standard operations and status of servers
      • Cloning Screen
      • Promotion Screen
      • Create and restore backups
      • WebUI gives you feedback and you can see how things are going. Don’t need magic command lines. Good for other teams to see stuff and do stuff (options right in front of them).
      • Benchmarking
      • Database job scheduling and prioritization. Promotion will take priority over anything else.
      • Common logging, centralized server and nice gui that everyone can see
    • HERMES
      • Available on Dropbox's GitHub
      • Makes visible all requests and actions that need to be done by the team
    • Monitoring
      • Grafana
  • Performance
    • Improving backup and restore speed.
      • LZOP
      • xtrabackup
  • Auto-remediation (naoru) – up on github at some point
  • Inventory Management
    • Machine Database (MDB)
    • Has tags for things like kernel versions
  • Diagnostics
    • Automated periodic tcpdump
    • Tools to kill long running transactions
    • List current queries running
    • atop
  • The Future
    • Reliability, performance and cost improvements
    • Config management
    • Love the “Go Programming Language” by Kernighan
    • List of Papers they love
  • Questions
    • Using percona not mariadb. They also shard not cluster DBs
    • Big culture change from Bank to Dropbox – At the Bank they tried to decom old systems and reduce risk. At Dropbox everyone is very brave and pushing boundaries
    • machine database automatically built largely
    • Predictive Analysis on hardware – Do some , lots of dashboards for hardware team, lifecycle management of hardware. Don’t hug servers. Hug the hardware class instead.
    • Rollbacks are okay and should be easy. Always be able to roll back a change to get back to a good stack.


Planet Linux AustraliaCraige McWhirter: LCA2016 Thursday Keynote - Jono Bacon

Jono Bacon spoke about how open communities are changing the world and how they may be improved in the future.

Community 1.0

  • Early Free Software communities were built from observing other groups around them and figuring things out as they went along.
  • Very high technical barrier of entry

Community 2.0

The Renaissance

  • Allowed broader participation, with Wikipedia as an example.
  • Knowledge had been built to allow people to start in the community from a common point
  • Self organising groups
  • Enabled greater diversity
  • Companies began engaging with communities.

What Does 3.0 Look Like?

  • How do we build effective reproducible communities?
    • Thoughtful and productive communities advance the human race,
  • Sharing the knowledge on how to build effective communities is going to be key.
  • Covered ubiquitous computing growth, 3D printing, Arduino etc
  • Crowd funding as one method of empowering consumers.
  • Not just consumption but empowering people to have better lives, key.
  • We need to empower diversity in all its forms.
  • Openness is the greatest enabler.
  • The principles of openness are flowing through all forms of technology, life and work.
  • In a world worried about AI, we the people should be ensuring that it's open and taking control.

"Open Source is where society innovates" - Jono Bacon

  • We need to crack predictable collaboration. Making great community leadership available to everyone.
  • We can do better, we've only scratched the surface with our success thus far.

How do we do this?

  • For self respect we need to contribute. To contribute we need access.
  • Jono realised that his role as community manager was to help other contributors be as effective as possible with their time when they're contributing.
  • Discussed the difference between system 1 and system 2 thinking.
  • However behavioural economics is hard to apply in practice.
  • The principles can be pulled out and used though.
  • Discussed SCARF model of social threats and rewards.
  • From this model we can figure out how to put this into practice.
    • We accomplish goals indirectly. Gave Boeing as an example.
    • We influence behaviour with small actions. Recommended the book Lunch.
  • Build comprehensive rewarding experiences.
  • Need to make building a successfully structured community easy.
  • Described experiences from different stakeholder perspectives.
community_3.0 = {
    system 1 and 2 thinking +
    behavioural patterns +
    workflow +
    experiences +
    packaged guidance
}

The most important feeling we can create is a sense of belonging.

Jono Bacon

Planet DebianMichal Čihař: Gammu 1.37.0

Today, Gammu 1.37.0 has been released. As usual it collects bug fixes. This time there is another important change as well - improved error reporting from SMSD.

This means that when SMSD fails to connect to the database, you should get a bit more detailed error than "Unknown error".

Full list of changes:

  • Improved compatibility with ZTE MF190.
  • Improved compatibility with Huawei E1750.
  • Improved compatibility with Huawei E1752.
  • Increased detail of reported errors from SMSD.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.


Krebs on SecuritySafeway Self-Checkout Skimmer Close Up

In Dec. 2015, KrebsOnSecurity warned that security experts had discovered skimming devices attached to credit and debit card terminals at self-checkout lanes at Safeway stores in Colorado and possibly other states. Safeway hasn’t disclosed what those skimmers looked like, but images from a recent skimming attack allegedly launched against self-checkout shoppers at a Safeway in Maryland offer a closer look at one such device.

Safeway Store, Germantown, Maryland

A skimming device made for self-checkout lanes that was removed from a Safeway Store in Germantown, Maryland

The image above shows a simple but effective “overlay” skimmer that banking industry sources say was retrieved from a Safeway store in Germantown, Md. The device is designed to fit directly over top of the Verifone terminals in use at many Safeways and other retailers. It has a PIN pad overlay to capture the user’s PIN, and a mechanism for recording the data stored on a card’s magnetic stripe when customers swipe their cards at self-checkout aisles.

Safeway officials did not respond to repeated requests for comment about this incident.

My local Safeway in Northern Virginia uses this exact model of Verifone terminals, and after seeing this picture for the first time I couldn’t help but pull on the terminal facing me in the self-checkout line on a recent store visit, just to be sure.

Many banks are now issuing newer, more secure chip-based credit and debit cards that are more expensive and difficult for thieves to steal and to counterfeit. As long as retailers continue to allow customers to avoid “dipping the chip” and instead allow “swipe the stripe” these skimming attacks on self-checkout lanes will continue to proliferate across the retail industry.

It may be worth noting that this skimming device looks remarkably similar to a point-of-sale skimmer designed for Verifone terminals that I wrote about in 2013.

Here’s a simple how-to video made by a fraudster who is selling very similar-looking overlay skimmers for Verifone point-of-sale devices; he calls them “Verifone condoms.” As we can see, the device could be attached in the blink of an eye (and removed quickly as well). The device in the video is just a shell, and does not include the POS PIN pad reader or card reader.

Planet DebianSven Hoexter: Moby

Maybe my favourite song of Moby - "That's when I reach for my revolver" - is one of the more unusual ones, slightly more rooted in his Punk years and a cover version. Great artist anyway.

Worse Than FailureAnnouncements: The Abstractions Conference: Pittsburgh

Back when we were setting up The Daily WTF: Live, I gave a shout-out to the Pittsburgh tech community group, Code & Supply. They’ve been a great way to network with local developers, dev-opsers, designers, and more, ranging from the seasoned vets to those just cutting their teeth on IT. I’m a huge fan of their events, and I only wish I could make it to more of them.

But there’s one event I’m not going to miss, and if you can get to Pittsburgh, you shouldn’t miss it either. Code & Supply is launching their new conference, Abstractions. Abstractions, like Code & Supply, is a cross-technology, multi-skillset event, dedicated to bringing some of the best speakers in a variety of technologies together for one of the best conference lineups I’ve seen.


I’m just going to give you a taste of some of the big names:

  • Mitchell Hashimoto, CEO of Hashicorp (makers of Vagrant, Otto, and other automation tools).
  • Allison Randall, leader on projects including Perl, Python, Ubuntu, and more.
  • Oh, did I mention Perl? Larry Wall will be there.
  • Speaking of programming language inventors, Joe Armstrong (Erlang), and José Valim (Elixir).
  • Make sure to bring a laptop running Free-as-in-Speech software, or suffer the wrath of Richard Stallman
  • Douglas Crockford, author of Javascript: The Good Parts and inventor of JSON
  • Scott Hanselman, from Microsoft’s Web Platform Team
  • Eileen Uchitelle, from the Ruby on Rails Core Team

Maybe that was more of a nibble than a taste, but it’s a great list of top-tier presenters. I am excited to see this kind of crowd coming into Pittsburgh, and I’m looking forward to attending this conference - this incredibly affordable conference. This is a three day conference which costs $250, and they’ll also be offering scholarships and sending out a call-for-proposals. The conference runs from August 18–20, at the David L. Lawrence Convention Center in Pittsburgh, and tickets can be purchased through Abstractions.io. I will be there as an attendee (and possibly a few other members of our TDWTF staff as well) with a backpack full of buttons and stickers for site fans. If you plan to attend, drop a line to our inbox. If enough readers are in the area that weekend, we’ll arrange a reader meetup outside of conference hours one evening.


Planet DebianJonathan Dowland: Comparing Docker images

I haven't written much yet about what I've been up to at work. Right now, I'm making changes to the sources of a set of Docker images. The changes I'm making should not result in any changes to the actual images: it's just a re-organisation of the way in which they are built.

I've been using the btrfs storage driver for Docker which makes comparing image filesystems very easy from the host machine, as all the image filesystems are subvolumes. I use a bash script like the following to make sure I haven't broken anything:

#!/bin/bash
# Compare two Docker images by diffing their btrfs subvolumes directly.
oldid="$1"; newid="$2";
id_in_canonical_form() {
    echo "$1" | grep -qE '^[a-f0-9]{64}$'
}
canonicalize_id() {
    docker inspect --format '{{ .Id }}' "$1"
}
# Accept either a full 64-character image ID or anything docker inspect resolves.
id_in_canonical_form "$oldid" || oldid="$(canonicalize_id "$oldid")"
id_in_canonical_form "$newid" || newid="$(canonicalize_id "$newid")"
cd "/var/lib/docker/btrfs/subvolumes"
# List every file with its mode, owner, group and size (but no timestamps).
sumpath() {
    cd "$1" && find . -printf "%M %4U %4G %16s %h/%f\n" | sort
}
# Compare file contents, then the metadata listings.
diff -ruN "$oldid" "$newid"
diff -u <(sumpath "$oldid") <(sumpath "$newid")

Using -printf means I can ignore changes in the timestamps on files which is something I am not interested in.

If it is available in your environment, Lars Wirzenius' tool Summain generates manifests that include a file checksum and could be very useful for this use-case.
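When the btrfs driver isn't available, a storage-driver-agnostic variant of the same idea is to flatten each image with docker create plus docker export and compare the tar metadata. This is my own rough sketch, not part of the original script; like the -printf listing above it deliberately ignores timestamps:

#!/usr/bin/env python3
"""Diff the flattened filesystems of two Docker images via docker export."""
import os
import subprocess
import sys
import tarfile
import tempfile

def manifest(image):
    """Return a set of (name, mode, uid, gid, size) tuples for every entry in the image."""
    container = subprocess.run(
        ["docker", "create", image],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    try:
        with tempfile.TemporaryDirectory() as tmpdir:
            tarball = os.path.join(tmpdir, "rootfs.tar")
            subprocess.run(["docker", "export", "-o", tarball, container], check=True)
            with tarfile.open(tarball) as tar:
                return {(m.name, m.mode, m.uid, m.gid, m.size) for m in tar.getmembers()}
    finally:
        subprocess.run(["docker", "rm", container], check=True, capture_output=True)

def main(old, new):
    old_entries, new_entries = manifest(old), manifest(new)
    for entry in sorted(old_entries - new_entries):
        print("-", entry)
    for entry in sorted(new_entries - old_entries):
        print("+", entry)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

It is slower than diffing subvolumes in place, since it writes a full tarball per image, but it needs no access to /var/lib/docker internals.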

Planet DebianWouter Verhelst: OMFG, ls

alias ls='ls --color=auto -N'

Unfortunately it doesn't actually revert to the previous behaviour, but it's close enough.

CryptogramSecurity vs. Surveillance

Both the "going dark" metaphor of FBI Director James Comey and the contrasting "golden age of surveillance" metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It's a myopic framing that focuses only on one threat -- criminals, including domestic terrorists -- and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It's a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers -- probably the Chinese government -- stole 21.5 million personal files of U.S. government employees and others. They wouldn't have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn't encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.

Adding backdoors will only exacerbate the risks. As technologists, we can't build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer's hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece's cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We're not being asked to choose between security and privacy. We're being asked to choose between less security and more security.

This trade-off isn't new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2013, cybersecurity researcher Susan Landau published her excellent book Surveillance or Security?, which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they're in. If they're not, it's because you're not high enough on their priority list to bother with. Widespread encryption forces the listener -- whether a foreign government, criminal, or terrorist -- to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society's capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we'll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

This essay previously appeared as part of the paper "Don't Panic: Making Progress on the 'Going Dark' Debate." It was reprinted on Lawfare. A modified version was reprinted by the MIT Technology Review.

Worse Than FailureFreelanced

Being a freelancer is hard. Being a freelancer during the downturn after the Dot-Com bust was even harder. Jorge was in that position, scrambling from small job to small job, fighting to make ends meet, when one of his freelance clients offered him a full-time gig.

Carol, the customer, said “Jorge, we’re really short-handed and need help. We’d like you to start on Monday. You know PHP, right?”

Jorge didn’t know PHP, but he knew plenty of other languages. He said yes, crash-coursed over the weekend, and was confident he could learn the rest on the job. When he showed up on Monday, Carol introduced him to Luke- “who will mentor you on our application.”

The box art of the video game 'Freelancer'

“Hey!” Luke grabbed Jorge’s hand, started shaking, and kept at it for far longer than comfortable. “It’s great to have you here, really great, you’re really going to like our code, it’s really really great. We’ve got a lot of great customers, and they’re really really happy with our great software. Do you like encryption? I built our encryption layer. It’s really really great. And I hope you like getting things done, because we’ve got a really really great environment with no obstacles.”

Jorge recovered his hand, wiped it on his pants, and tried to smile to cover the internal panic that was taking over his thought processes. That internal panic got louder and louder as Luke showed him the ropes.

They had a few dozen tiny applications, and the code for those applications lived in one place: the production server. Server, singular. There was no dev environment, there was no source control server. Their issue tracking was, “When there’s an issue, a customer will call you, and you’ll fix it.” Luke explained, “I like to work on it while I’m on the phone with them, so I can just edit the code and have them refresh the page right there.”

Jorge nearly quit, but Carol had been a great customer in the past, and he really wanted a steady gig. He ignored his gut, and instead tried to convince himself, “This is an opportunity. I can help them get really up to speed.”

He found an ancient Cobalt RaQ in a closet, with a 366MHz processor (with MMX!) and 64MB of RAM. Jorge hammered on that whenever he had a spare moment, setting it up as a dev environment, a CVS server and Bugzilla. This took weeks, because Jorge didn’t have a lot of spare moments. Luke kept him busy on a “deep dive” into the code.

Jorge was largely ignorant of PHP’s details and nuances, but Luke was massively ignorant. Luke’s indentation was so chaotic it could double as a cryptographically secure random number generator. Wherever possible, Luke reinvented wheels. Instead of using a server-side redirect, he instead injected a <script> block into the page to send the browser to a different page. When PHP changed their register_globals behavior for security reasons, Luke didn’t think about why that happened or what that meant. He didn’t even bother to flip the PHP.ini flag which would revert to the old behavior. Instead, he just pasted this block into every PHP file:

while(list($GET_Key,$GET_Val)=each($HTTP_GET_VARS)){
$Var_Key_VAR = $GET_Key;
$$Var_Key_VAR = $GET_Val;
    }
    while(list($POST_Key,$POST_Val)=each($HTTP_POST_VARS))
        {
            $Var_Key_VAR = $POST_Key;
    $$Var_Key_VAR = $POST_Val;
    }
    while(list($SERVER_Key,$SERVER_Val)=each($HTTP_SERVER_VARS)){
$Var_Key_VAR = $SERVER_Key;
        $$Var_Key_VAR = $SERVER_Val;
                        }

Jorge didn’t know enough about PHP at the time to recognize how horrible this was, or how horrible PHP’s register_globals behavior was. He knew it was bad, though. What he didn’t realize was that the entire situation was actually worse than that.

“Luke,” Jorge said, “why do I see your name peppered everywhere in the code?” Everywhere. Luke had tagged the code with his name like a graffiti artist trying desperately to get arrested. His name was in the comments, he was given credit in the meta tags of most pages, he named variables after himself, and even a few page titles actually said “by Luke S…”

“Oh,” Luke said, “well the work’s really, really great, right? Like art, and you’ve got to sign your great art. Let everybody know who the great developer behind it was. I’m sure you’ll get the chance to sign your name in a few places soon, right? It’ll be great. Really really great.”

The whole thing made Jorge suspicious. He removed the obvious signatures and started throwing the code into Google. Luke had barely written a single line of the code- 90% of what Jorge found had been copied-and-pasted from tutorials or other sites. Even some of the copy on pages had been stolen from other sites. It all came from somewhere else, and had been thrown together with no sense of what any of it actually meant.

Jorge was about to bring this up to Carol when Luke added: “You should really check out the encryption layer I built for security. It’s really, really great.”

It was really really something. It took a little while for Jorge to understand the purpose of the encryption until he dug past it into the underlying application. This application was a blog-style platform. Different users could manage posts in several feeds. Luke didn’t understand how to verify what user was logged in, so at one point, a little URL tampering would have allowed users to tamper with other users’ posts. Luke “solved” this by “encrypting” the URL params so that they couldn’t be edited. His cutting edge encryption algorithm was the most secure solution since ROT13: encode_base64.
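Base64 is an encoding, not encryption, so any visitor could strip it off, tamper with the parameters and re-encode them. A small Python illustration (the parameter names are invented for the example):

import base64

# Roughly what the "encryption layer" shipped in the URL.
token = base64.b64encode(b"post_id=42&user=jorge").decode()
print(token)   # an opaque-looking but trivially reversible string

# Anyone can reverse it, edit it, and re-encode it.
params = base64.b64decode(token).decode()
forged = params.replace("post_id=42", "post_id=43")
print(base64.b64encode(forged.encode()).decode())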

This was Jorge’s final straw. Luke was a plagiarist hack, and there was no way Jorge could hope to maintain this application. He went to Carol to give his notice. “Oh, no,” Carol said, “you can’t quit! Luke just gave his notice today.”

“Wait, Luke is leaving?”

“Yes,” Carol said. “He thinks he can make more money as a freelancer, like you were. He’s already got several clients lined up, he says.”

“Well, that’s really, really great.”

Jorge is still in that job, but Luke’s creation has long since been entirely junked and rewritten. Luke is still out there somewhere, freelancing away.


Planet Linux AustraliaCraige McWhirter: Introduction to monitoring with Prometheus - Jamie Wilkinson - LCA2016

Jamie Wilkinson gave an overview of the Prometheus monitoring tool, based on the Borgmon white paper released by Google.

  • Monitoring complexity was becoming expensive.
  • Borgmon inverted the monitoring process
    • Was heavily relied upon at Google.
  • Prometheus, Bosun and Riemann are stream-based monitoring tools like Borgmon.
  • Prometheus scrapes /varz
  • Sends alerts as key value pairs
  • Using shards for scaling.
  • Defines targets in a YAML file.
  • Data storage is in a global database in memory
  • Use "higher level abstractions to lower cost of maintenance
  • Use metrics, not checks
  • Design alerts based on service objectives.

Another brilliant monitoring talk from Jamie.

Prometheus

Planet DebianThomas Goirand: Moby

Just a quick reply to Rhonda about Moby. You can’t introduce him without mentioning Go, the title that made him famous very early in the age of electronic music (November 1990, according to Wikipedia). Many have attempted to remix this song (Moby himself included), but nothing’s as good as the original version.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Wednesday – Session 3

The future belongs to unikernels. Linux will soon no longer be used in Internet facing production systems. by Andrew Stuart

  • Stripped down OS running a single application
  • Startup time only a few milli-seconds
  • Many of the current ones are language specific
  • The Unikernel Zoo
    • MirageOS – Must be written in OCaml
    • Rump –  Able to run general purpose software, run compiled posix applications, largely unmodified. Can have threading but not forking
    • HalVM – Must be coded in Haskell
    • Ling – Erlang
    • Drawbridge – Microsoft research project
    • OSv – More general purpose
    • “Something about Unikernels seems to attract the fans of the ‘less common’ languages”
    • plus a bunch more..
  • Unikernels and security
  • A bunch of people pointed out problems with, and alternative solutions to, what unikernels are trying to solve.

 

An introduction to monitoring and alerting with timeseries at scale, with Prometheus by Jamie Wilkinson

  • prometheus.io
  • SRE is ultimately responsible for the reliability of google.com, less than 50% of time on ops
  • History of monitoring, Nagios doesn’t scale, hard to configure
  • Black-box monitoring for alerts
  • White-box monitoring for charts
  • Borgmon at Google, same tool used by many teams at Google
  • Borgmon not Open Source, but instead we’ll look at Prometheus
  • Several alternatives
  • Borgmon
  • Alert design
    • SLI – a measurement
    • SLO – a goal
    • SLA – economic incentives
  • Philosophy
    • Every time you get paged you should react with a sense of urgency
    • Alerts that are not important shouldn’t page anyone; perhaps just send them to the console
  • Instrumentation
    • Client exports an interface, usually HTTP; Prometheus polls /metrics on this server and gets a plain page with numbers (see the sketch after these notes)
    • Metrics are numbers not strings
    • Don’t need timestamps in the data
  • Tell prometheus where the targets are in the “scrape_configs”
    • All sorts of ways to find targets (DNS, etc)
  • Variables all have labels: a name, plus things like locations
  • Rule evaluation
    • recording rules
    • tasks run built-in functions, like summing up data by label (eg all machines with the same region label), finding the rate of change, etc
  • Pretty graphs shown in demo
  • https://github.com/jaqx0r/blts
  • Questions
    • Prometheus exporting daemon/proxy
    • Language ability to support things like flapping detection/ignore
    • Grafana support for Prometheus exists
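
As a rough illustration of the exposition model in these notes (not code from the talk), here is a minimal sketch using the official Python client, prometheus_client; the metric name, label and port are invented for the example:

from prometheus_client import Counter, start_http_server
import random
import time

# A counter with one label; Prometheus scrapes it from /metrics
REQUESTS = Counter('myapp_requests_total', 'Requests handled', ['handler'])

start_http_server(8000)   # exposes http://localhost:8000/metrics
while True:
    REQUESTS.labels(handler='search').inc()   # bump the counter per "request"
    time.sleep(random.random())

A scrape_configs entry in Prometheus would then point a target at port 8000 of this process.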


Planet Linux AustraliaCraige McWhirter: The future belongs to unikernels - Andrew Stuart - LCA2016

Andrew Stuart gave an overview of the current state of unikernels:

Overview

  • Unikernel zoo is increasing.
    • MirageOS is the most mature at present and requires code written in OCaml.
    • HalVM requires you code to be written in Haskell
    • Ling requires your code to be written in Erlang.
    • runtime.js is the same thing as the above but in JavaScript.
    • OSv is not language specific and very minimalist.
    • rump kernels is essentially a very stripped down version of NetBSD and will run some other unikernels.
  • Threading, not forking.
  • Might be a Linux based unikernel coming.

Unikernels and Security

  • Suggests machines with user sign-in capabilities will become less common due to security risks.
  • Unikernels are not invulnerable.
  • MirageOS has a Bitcoin piñata.

LCA by the Bay

Planet DebianMichal Čihař: New projects on Hosted Weblate

I had a pile of hosting requests in the queue since mid-January, and my recent talk at FOSDEM had some impact on requests for hosting translations as well, so it's about time to process them.

New kids on the block are:

Unfortunately I had to reject some projects as well, mostly due to lack of file format support. This is still the same topic - when translating a project, please stick with a standard format, preferably what is usual on your platform.

If you like this service, you can support it on Bountysource salt or Gratipay. There is also an option for hosting translations of commercial products.

Filed under: English Weblate | 0 comments

Planet Linux AustraliaCraige McWhirter: Sentrifarm - open hardware telemetry system for Australian farming conditions - Andrew McDonnell - LCA2016

Andrew McDonnell created Sentrifarm in 2015.

Requirements

  • Low power
  • Distributed
  • Using radio for communication
  • Local storage
  • Cheap

Background

  • They entered the Hackaday Prize - actual entry page.
    • Wanted to learn new skills
    • Have fun
    • Experiment
    • Perhaps produce something useful
  • There were lots of discarded prototypes
  • So many cheap devices facilitating experimentation.
  • Radio links were not quite as open as he would have liked.
  • Used LoRa-based ISM-band radio
  • Learned how much easier it is to have PCBs fabricated these days.
  • Fabrication lead times can be about 6 months.

Open Hardware Components

  • 8 devices based on the Carambola2 - a Linux OpenWRT board

Firmware

  • platformio.org
  • Replaces need for Arduino IDE
  • Open Source
  • IDE agnostic

MQTT for communication

  • Specifically MQTT-SN for low bandwidth (see the publish sketch below)
  • Packages
    • mosquitto
    • mqtt_sn_tools
    • arduino-mqtt-sn
  • Gateway runs OpenWRT

Andrew provided an overview of how the gateway processing model worked.
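
To make that telemetry path concrete, here is a minimal sketch of publishing a single sensor reading over plain MQTT with the paho-mqtt Python client. The broker address, topic and payload fields are invented for illustration, and the real sensor-to-gateway link speaks MQTT-SN as noted above.

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("gateway.local", 1883)   # hypothetical gateway broker

reading = {"node": "paddock-3", "temp_c": 38.5, "humidity": 12.0, "wind_kmh": 42.0}
client.publish("sentrifarm/telemetry", json.dumps(reading))
client.disconnect()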

Backend

  • Ubuntu 14.04
  • Docker 1.8.3
  • Carbon + Whisper + Graphite
  • Grafana
  • Custom Python scripts
  • Millions of lines of code and Andrew only had to write 7.

  • 3D printed some components.

    • Made a custom holder for the PCB
  • Used OpenSCAD to design the component.
  • Made the antenna himself with plans off the Internet.
    • Got range up to 9km.

Andrew's project is an ingenious solution to a serious problem. I need one of these for myself!

LCA by the Bay

Don MartiWorld's Simplest Privacy Tool

Here's the world's simplest Firefox add-on, which just turns on Tracking Protection (ordinarily buried somewhere in about:config) and sets third-party cookie policy to a sane value.

install pq from addons.mozilla.org

So far it has 15 users and one review -- five stars. It doesn't do much, or for very many people, but what it does do it does with five-star quality.

Bonus link: How do I turn on Tracking Protection? Let me count the ways.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Wednesday – Session 2

Welcoming Everyone: Five Years of Inclusion and Outreach Programmes at PyCon Australia by Christopher Neugebauer

  • How to bring more people to community run events
  • Talk is not about diversity in tech
  • Talk is about “Outreach and Inclusion in Events”
  • Outreach = getting them in , Inclusion = making them feel welcome
  • About funding programmes for events
  • FOSS happens over the Internet , face-to-face is less common than in other areas/communities
  • Events are where you can see the community
  • BUT: Going to a conference costs money – travel, rego, parking, leave from job
  • Events have equality of access problem
  • Inequity of access is  a problem with diversity
  • Solution: Run outreach programmes
  • Money can reduce the barriers, just spending money can help solve the problem
  • Pycon Australia has had outreach for last 5 years
  • FOSS vs other outreach programmes
    • Events have easy goals, define ppl/numbers to target, exact things to spend on, time period defined
    • Similar every year, similar result each year
    • Long-term results are ill-defined
    • Engagement is hard to track
  • Pycon Australia
    • Fairly independent of Python software foundation
    • Biggest Pycon within 9 hours of flying
    • Pycon US – 2500 attendees, $200k on financial assistance
    • Pycon Aus 2015 – 450 attendees , 5-8% of budget on funding
  • 2011
    • Harassment and Codes of Conduct were a big thing
    • Gender diversity policy, code of conduct, 20% of speakers were women, First Gender diversity grants
    • 2 Grants, – 1 ticket and 1 Ticket + $500 funded out of general conf budget
    • 7 strong applicants at time when numbers were looking low (later picked up)
    • Sponsor found and funded all 7 applicants
  • 2012
    • 1st of 2 years running conf in Hobart
    • Moving from Sydney is hard. Australia big and people have to fly between cities (especially to Hobart)
    • Hobart long way away for many people and small number of locals
    • Sponsor increased funding to $700, funded 10 people for $500 + ticket
    • Previous grant recipient from 2011 was speaking in 2012
  • 2013
    • Finding more speakers from more places
    • Outreach and Speaker support run out of the same budget, cap removed on grants so International travel possible.
    • Anyone could apply – the purely gender-based limit was removed, so other people who needed funding could apply, eg students, teachers, geographic minorities
    • $12,500 allocated
    • As more signups and more money came in more could go to the assistance budget
    • If you remove gender targeting, then what happens to diversity?
    • Got groups like GeekGirlDinners to target people that needed grants rather than directly chasing people to apply.
    • Over half aid budget going to women
    • Teachers are good force multipliers
  • 2014
    • Lost previous diversity Sponsor
    • Previously $5k from Sponsor + $7k from general fund.
    • Pycon US – Everybody pays to attend ( See Essay by Jesse Noller – Everybody Pays )
    • Most speakers have FOSS-friendly employers or can claim money
    • Argument: Some confs make everybody pay no matter their ability.
    • Told speakers that by default they would be charged, but they could waive the charge by just asking. Also said where the money was going and prioritised speakers for assistance. All organisers paid too
    • Extra money raised: about $7000
    • Simplified structure of grants, less paperwork, just gave people a budget. Worked well since many people went with good deals.
    • Caters better for diverse needs
    • Also had Education Miniconf, covered under teacher training budget. Offered to underwrite costs of substitute teachers for schools since that is not covered by normal school professional-dev budget
  •  Results
    • Every time at least one funding recipient has spoken at next conference
    • Many fundees come back when get professional jobs
    • Evangelize to the friends
  • Discovery
    • expanding the fund gets people you might not expect
    • Diverse people have diverse needs
    • Avoid making people do paperwork, just give them money
    • Sponsors can make boot-strapping starting a programme easier
    • Don’t expect 100% success
    • Budget liberally, disburse conservatively
    • Watch out for immigration scams
    • Decline requests compassionately
  • Questions
    • Weekend hard for Childcare – Not heavily targeted
    • Targeting Speakers for funding rather than giving all of them means it gets to go a lot further. Better Bang for buck

Sentrifarm – open hardware telemetry system for Australian farming conditions by Andrew McDonnell

  • Great time to be a maker, everybody is able to make something
  • Neighbour had problem with having to measure grass fire danger in each paddock before going out with machinery during summer
  • Needs Wind Speed, temperature, humidity
  • Sentrifarm
    • Low power, solar
    • distributed
    • Works in area with slow internet, sim card expense adds up however
    • Easy to use for farmer, access via their farm.
    • Data should not be owned by cloud provider
  • Hackaday Prize
    • Build “something that matters”
    • Prizes just for participating
    • Document progress, produce a video
  • Our Goals
    • Cheap and Cheerful
    • Aussie “bush mechanic” ethos
    • Enjoy the adventure
  • Used stuff from 24+ other opensource projects
  • Prototyping
    • Tried out various micro-controllers and other equipment
    • Most you could only buy for a few dollars
    • Tools – Bus Pirate
  • Radio links
    • ISM-band radio module “Lora” technology
    • SPI interface, well documented SX1276
    • $20 for the module
    • Proprietary radio protocol, long range, low power, but an open interface on top of it
  • Eagle used (alt is KiCAD) to design circuit
    • Build own shields to plug sensors and various controllers into
  • platformio.org – one command creates an Arduino project and builds it for multiple micro-controllers
  • MQTT-SN – communications protocol for low-bw links.
  • Breakdown of his stack, see his slides for details
  • Backend Software
    • Ubuntu
    • Docker
    • Carbon + Whisper + Graphite, Grafana
    • “Great time to be a hacker, using who knows how many lines of code and only had to write 7 to get it to work together”
  • Grafana hard to setup but found a nice docker container
  • Data kept separately from the container
  • Goal to get power down
  • Used 3D-printer to create some parts from mounting bits.
    • OpenSCAD – Language to design the parts
  • Range of LoRa of 5km unelevated, 9km up a tower with a simple home-built antenna
  • Won a top-100 prize at Hackaday of a t-shirt
  • You can do it
  • Questions
    • Asked how it survives weather? – Not a lot of experience yet, some options
    • How likely are others to use it? – Maybe, but the main goal was building it


Planet Linux AustraliaCraige McWhirter: Usable formal methods - are we there yet? - Stefan Götz - LCa2016

Stefan Götz

  • Software reliability is often defined by industry standards.
  • Software analysis can be divided into three parts:
    • Static analysis
      • Examines code, no compilation or execution.
      • Share input with compilers
      • Example static analysers:
        • BLAST, Cppcheck, Eclipse, Frama-C
        • LLVM/CLang
        • Sparse
        • Splint - used by eChronos
    • Proof systems
    • Model checking

Splint

  • Performs pattern matching
  • Understands c-types
  • Language model and rule matching
  • Control and data flow analysis
  • Similar to compiler setup
  • Run against entire application code.
  • Improved auto generated code and readability.
  • Found incorrect character conversion.
  • Discovered signal sets unintentionally returning a boolean.
  • Some false positives with unused code.
  • Some macros were not picked up.
  • Some variable initialisation not picked up.
  • Works very well over all.

Felt the time invested in using Splint was well spent and brings a lot of peace of mind to the project.

Model Checking

  • Uses CBMC
  • Requires a little plumbing code and training.
  • Made them reconsider and improve execution timing.
  • Scalability requires improvements.
  • Being integrated with eChronos.

Are we there yet?

  • Static analysis Open Source tools need improving and established best practices.
  • Model checking is not yet out of the box

LCA by the Bay

Planet Linux AustraliaCraige McWhirter: CloudABI - Ed Schouten - LCa2016

Ed Schouten provided a detailed tour of Capsicum and CloudABI.

  • AppArmor is an afterthought
  • Puts the burden back on users
  • Not linked to security policies.
  • Capsicum is a FreeBSD method that sandboxes software
  • Works well with small applications but doesn't scale.
  • Questions why UNIX can't run third party binaries safely.

What is CloudABI?

  • CloudABI is a POSIX-like runtime environment based on Capsicum.
  • Capability based security with less foot shooting.
  • Global namespaces are entirely absent
    • By default can only perform actions with no global impact.
  • Symbiosis, not assimilation as it can run side by side with traditional applications.
  • File descriptors are used to provide additional rights.
  • Provided an example of using CloudABI to provide a secure web service.
  • You can use wrappers to provide features missing from CloudABI.
  • Only has 58 system calls. Incredibly compact.
  • Working towards having support for more POSIX operating systems.
  • Allows reuse of binaries without compilation.

Conclusions:

  • Provided an example of a simple CloudABI ls program.
  • How to execute it via the shell
  • Feels there are scalability problems with CloudABI.
  • Wrote cloudabi-run to make it feel less clunky to run.
  • Replace CLI arguments with a YAML file.
  • Easy to configure.
  • Impossible to invoke programs with the wrong file
  • Reduces start-up complexity.
  • Gave an example of CloudABI as the basis of a cluster management suite.
  • Provides a 100% accurate dependency graph.
  • Gave an example of "CloudABI as a Service".

LCA by the Bay

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Wednesday – Session 1

Going Faster: Continuous Delivery for Firefox by Laura Thomson

  • Works for Cloud services web operations team
  • Web Dev and Continuous delivery lover
  • “Continuous delivery is for webapps” – Maybe not just Webapps? Maybe Firefox too
  • But Firefox is complicated
  • Process very complicated – “down from 5 source control systems to 3”
  • But plenty of web apps are very complicated (eg Netflix)
  • How do we continuous deliver Firefox
  • How it works currently
    • Release every 6 weeks
    • 4 channels – Nightly -> Aurora -> Beta -> release
    • Mercurial Repo for each channel
  • Release Models
    • Critical Mass – When enough is done and it is stable
    • Single Hard deadline – eg for games being mass released
    • Train Model – fixed intervals
    • Continuous Delivery
  • Deployment Maturity Model
  • Updates
    • New Build -> Generate  a diff -> FF calls back -> downloads and updates
    • Hotfixes
    • Addons automatically updated
  • Currently pipeline around 12 hours long, lots of tests and gatekeeping
  • “Go Faster”
    • System add-ons
    • Test Pilot
    • Data Separate from code
    • Downloadable content
    • Features delivered as web apps
  • System addons
    • Part of core FF, modularized into an add-on
    • Build/test against existing FF build, a lot smaller test
    • Updated up to daily(for now) on any release channel
    • signed and trusted
    • Restartless updates
      • install or update without a browser restart
      • Restarts suck
      • Restartless coming soon for system add-ons
    • Good for rapid iteration, particularly on the front-end
    • Wrappers for services
    • Replacing hotfixes
  • Problems with add-ons
    • Localisation
    • Optimizing UX: a better browser faster vs update fatigue
    • Upfront telemetry requirements
    • Dependency management on Firefox
    • Dependency management between system add-ons (coming soon)
  • Add-ons in flights
    • Firefox hello is already an add-on
    • Currently in beta in 45
    • First beta updates before 46
  • Test Pilot
    • Release channel users opt in to new features
    • Release channel users different from pre-release ones
    • Developed as regular add-ons (not system add-ons)
    • Can graduate to system add-ons by flipping a bit
  • Data should be separate from code
    • Sec policy
    • blocklists
    • tracking protection list
    • dictionaries
    • fonts
  • Many times a data update == a release; this is broken
  • Also some have their own updaters
  • Kinto
    • Lightweight JSON storage with sync, sharing, signing
    • Native JSON over HTTP (see the sketch after these notes)
    • Niceties of CouchDB backed by PostgreSQL
  • How Kinto Works
    • pings for updates
    • balrog supplies link to kinto
    • signed data downloaded, checked, applied
  • Kinto good for
    • Add-ons block list
  • Downloadable Content
    • Some parts of the browser may not need frequently
    • May not be needed on startup
    • eg languages packs, fonts for Firefox on Android
  • Features delivered remotely
    • Browser features delivered as web apps
    • Pull in content from the server
    • in a early stage
  • Futures
    • Easy for projects to implement
    • Better “knobs and dials” (canaries, A/B, data viz)
    • Pushed based updates
    • Simpler localisation
  • Questions
    • They support rollbacks
    • Worst case: Firefox has a startup crash
    • Not sure Iceweasel would fit in.
    • How will effect ESR channel? – Won’t change, they will stay security-only
    • Bad add-ons – hates ones that report user data, crashers (eg the Skype toolbar at one point), and ones that hijack your browser and change settings
    • There is much collaboration between [open source] browsers
    • You are avoiding the release cycle rather than planning to speed it up? – There are lots of tests they can’t get rid of; working on it, but not a simple thing to solve.
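
Since Kinto is described above as native JSON over HTTP, fetching a record set is a single GET. A minimal sketch with Python's requests; the server URL, bucket and collection names are invented, and the real Firefox client additionally verifies content signatures:

import requests

# Hypothetical Kinto server, bucket and collection
url = "https://kinto.example.net/v1/buckets/blocklists/collections/addons/records"
resp = requests.get(url)
resp.raise_for_status()

for record in resp.json()["data"]:   # Kinto wraps records in a "data" list
    print(record["id"])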


,

Planet Linux AustraliaCraige McWhirter: LCA2016 Wednesday Keynote - Catarina Mota

Catarina Mota spoke about how open source software changed her life.

  • Discussed the spilling over of online communities into the real world, with 3D printing as an example.
  • In particular covered the RepRap printer and community.
  • RepRap printer provided the ability to short circuit traditional manufacturing.

"think of RepRap as a China on your desktop" - Chris de Bona

  • Open Source (GNU GPL) is core to the RepRap mission and its success.
  • Catarina describes her house as an Open Source house.
  • Followed Open Source principles to design and build her home.
  • The goal was more affordable and sustainable housing.

Why Does it Matter?

  • Machines are not neutral.
  • Talked about obsolescence and e-waste.
  • Phones are a big contributor.
  • Phonebloks is a phone designed to be phone worth keeping with entirely replaceable components.
  • Users are co-creators.
  • Meant to be repaired, transformed, adapted and appropriated.

Catarina gave a wonderful talk on open source software, hardware and the way it's changing the world. Great talk.

Catarina Mota

Planet Linux AustraliaLev Lafayette: Reviving a Downed Compute Node in TORQUE/MOAB

The following describes a procedure for bringing up a compute node in TORQUE that's marked as 'Down'. Whilst the procedure, once known, is relatively simple, getting to this stage required some research, so to save others time this document may help.

1. Determine whether the node is really down.

Following an almighty NFS outage quite a number of compute nodes were marked as "down". However the two standard tools, `mdiag -n | grep "Down"` and `pbsnodes -ln` gave significantly different results.

read more

Planet DebianRhonda D'Vine: Moby

Today is one of these moods. And sometimes one needs certain artists/music to foster it. Music is powerful. There are certain bands I know that I have to stay away from when feeling down to not get too deep into it. Knowing that already helps a lot. The following is an artist that is not completely in that area, but he got powerful songs and powerful messages nevertheless; and there was this situation today that one of his songs came to my mind. That's the reason why I present you today Moby. These are the songs:

  • Why Does My Heart Feel So Bad?: The song for certain moods. And lovely at that, not dragging me too much down. Hope you like the song too. :)
  • Extreme Ways: The ending tune from the movie The Bourne Ultimatum, and I fell immediately in love with the song. I used it for a while as morning alarm, a good start into the day.
  • Disco Lies: If you consider the video disturbing you might be shutting your eyes from what animals are facing on a daily basis.

Hope you like the selection; and like always: enjoy!

/music | permanent link | Comments: 2 | Flattr this

CryptogramPaper on the Going Dark Debate

I am pleased to have been a part of this report, part of the Berkman Center's Berklett Cybersecurity project:

Don't Panic: Making Progress on the "Going Dark" Debate

From the report:

In this report, we question whether the "going dark" metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not. The question we explore is the significance of this lack of access to communications for legitimate government interests. We argue that communications in the future will neither be eclipsed into darkness nor illuminated without shadow.

In short our findings are:

  • End-to-end encryption and other technological architectures for obscuring user data are unlikely to be adopted ubiquitously by companies, because the majority of businesses that provide communications services rely on access to user data for revenue streams and product functionality, including user data recovery should a password be forgotten.
  • Software ecosystems tend to be fragmented. In order for encryption to become both widespread and comprehensive, far more coordination and standardization than currently exists would be required.
  • Networked sensors and the Internet of Things are projected to grow substantially, and this has the potential to drastically change surveillance. The still images, video, and audio captured by these devices may enable real-time intercept and recording with after-the-fact access. Thus an inability to monitor an encrypted channel could be mitigated by the ability to monitor from afar a person through a different channel.
  • Metadata is not encrypted, and the vast majority is likely to remain so. This is data that needs to stay unencrypted in order for the systems to operate: location data from cell phones and other devices, telephone calling records, header information in e-mail, and so on. This information provides an enormous amount of surveillance data that was unavailable before these systems became widespread.
  • These trends raise novel questions about how we will protect individual privacy and security in the future. Today's debate is important, but for all its efforts to take account of technological trends, it is largely taking place without reference to the full picture.

New York Times coverage. Lots more news coverage here. Slashdot thread. BoingBoing post.

Planet DebianSune Vuorela: Compilers and error messages

So. I typo’ed up some template code the other day. And once again I learned the importance of using several c++ compilers.

Here is a very reduced version of my code:

#include <utility>
template <typename T> auto foo(const T& t) -> decltype(x.first)
{
return t.first;
}
int main()
{
foo(std::make_pair(1,2));
return 0;
}

And let’s start with the compiler I was testing with first.

MSVC (2013 and 2015)

main.cpp(8): error C2672: ‘foo’: no matching overloaded function found
main.cpp(8): error C2893: Failed to specialize function template ‘unknown-type foo(const T &)’

It is not completely clear from that error message what’s going on, so let’s try some other compilers:

GCC (4.9-5.3)

2 : error: ‘x’ was not declared in this scope
template <typename T> auto foo(const T& t) -> decltype(x.first)

That’s pretty clear. More compilers:

Clang (3.3-3.7)

2 : error: use of undeclared identifier ‘x’
template <typename T> auto foo(const T& t) -> decltype(x.first)

ICC (13)

example.cpp(2): error: identifier “x” is undefined
template <typename T> auto foo(const T& t) -> decltype(x.first)

(Yes. I mistyped the variable name used for decltype. Replacing the x with t makes it build).

Thanks to http://gcc.godbolt.org/ and http://webcompiler.cloudapp.net/ for testing with various compilers.

Planet DebianSteve Kemp: Best practice - Don't serve writeable PHP files

I deal with compromises of PHP-based websites often enough that I wish to improve their hardening.

One obvious way to improve things is to not serve PHP files which are writeable by the webserver-user. This would ensure that things like wp-content/uploads didn't get served as PHP if a compromise wrote valid PHP there.

In the past using php5-suhosin would have allowed this via the suhosin.executor.include.allow_writable_files flag.

Since suhosin is no longer supported under Debian Jessie I wonder if there is a simple way to achieve this?

I've written a toy-module which allows me to call stat on every request, and return a 403 on access to writeable files/directories. But it seems like I shouldn't need to write my own code for this functionality.

Any pointers welcome; happy to post my code if that is useful but suspect not - it just shouldn't exist.

TEDReading list: Big ideas in books from our TED2016 speakers


 

With TED2016 quickly approaching, what better way to get ready than with a good book (or two, or three)? Before the conference starts on February 15, explore these reads by some of our speakers.

Life in the future:

Exegesis by Astro Teller. A science-fiction tale of a self-aware, artificially intelligent machine and its creator grappling with questions of existence and responsibility. Teller, who heads up X (the former Google X), kicks off TED, speaking first in Session 1.

The Girl in the Road: A Novel by Monica Byrne. A fantastical novel set in a future world, chronicling the fates of two women fleeing for freedom. Byrne, a science-fiction writer and trained scientist, speaks in Session 11.

Love of language:

The Language Hoax by John McWhorter. Does language shape how we think and perceive the world? Linguist McWhorter takes on this thesis and proposes something new: it is language that reflects culture and thought — not the other way around. He speaks in Session 4; watch his previous TED Talk.

Chineasy: The New Way to Read Chinese by ShaoLan Hsueh. An innovative language guide that turns Chinese characters into pictograms, facilitating easy recall and understanding. ShaoLan speaks during TED University. Watch her TED Talk.

Between You & Me: Confessions of a Comma Queen by Mary Norris. The longtime New Yorker copy editor shares her three-decades-long accumulation of grammar knowledge, helping readers dodge common — and perplexing — errors. She speaks in Session 6.

Inside a mind:

Birth of a Theorem: a Mathematical Adventure by Cédric Villani. The acclaimed mathematician takes readers on a journey inside cracking physics’ Landau Damping theorem, work that led him to win the 2010 Fields Medal. Villani will speak in Session 2.

Me, Myself, and Us: The Science of Personality and the Art of Well-Being by Brian Little. Psychology professor Little unpacks decades of research in a quest to answer this key question: is personality fixed or is it ever-evolving? He will speak in Session 4.

Just for Fun: The Story of an Accidental Revolutionary by Linus Torvalds. A dedicated coder shares how — and why — he built the groundbreaking free software Linux. Torvalds speaks in Session 6.


Science and Tech:

Satellite Remote Sensing for Archeology by Sarah Parcak. This year’s TED Prize winner literally wrote the textbook on how satellite remote sensing can be used in global field projects, looting prevention and site preservation. Parcak will make her bold wish in Session 5. Watch her TED Talk from TED2012.

Moonshots in Education: Launching Blended Learning in the Classroom by Esther Wojcicki. A teacher’s comprehensive guide for integrating digital and online learning into the classroom. Wojcicki speaks during TED University.

Between Earth and Sky: Our Intimate Connections to Trees by Nalini Nadkarni. A brilliant mix of photography, personal narrative and information fused into a guide about the deep connection between humanity, trees and their rich canopies. Nadkarni will speak at TED University; you can watch her previous TED Talks.

Re-think:

Don’t Go Back to School: A Handbook for Learning Anything by Kio Stark. A guide on learning, minus the school. This survey of more than 100 independent learners shows how to forgo higher education while gaining the skills you need. Stark, a novelist and writer, will speak in Session 4.

Women Who Don’t Wait in Line: Break the Mold, Lead the Way by Reshma Saujani. An honest look into why so many women are still not in top levels of corporations and government — and what we can do about it. Saujani, founder of Girls Who Code, speaks in Session 6.

Originals: How Non-Conformists Move the World by Adam Grant. Packed with studies from politics, business, sports and entertainment, this guide urges readers to go against the grain to make lasting change. Grant, a psychology professor at Wharton, will speak in Session 4.

The Conservative Heart: How to Build a Fairer, Happier, and More Prosperous America by Arthur Brooks. A provocative case for a new brand of conservatism — one that adds meaning to life through combating poverty, promoting equality and celebrating success. Brooks will speak in Session 9.

Project Rebirth: Survival and the Strength of the Human Spirit from 9/11 Survivors by Courtney E. Martin. Martin, a writer, partners with psychoanalyst Dr. Robin Stern to document how nine people, whose lives were forever changed by 9/11, move towards resilience and peace in the face of grief. This book was written in conjunction with the documentary Rebirth.  Courtney E. Martin speaks in Session 12. Watch her previous TED Talk.

Across the globe:

How to Run the World: Charting a Course to the Next Renaissance by Parag Khanna. It’s time for a new kind of global diplomacy, one that dismantles rigid dichotomies of the past while tackling today’s most pressing problems. Khanna will speak in Session 7; you can watch his previous TED Talk.

Who Speaks For Islam?: What a Billion Muslims Really Think by Dalia Mogahed. This six-year Gallup Poll of the Muslim world gives voice to people left out of the conversation on increased discrimination, Islamophobia and terrorist attacks. Mogahed speaks in Session 7; you can watch her previous TED Talk.

The Future: Six Drivers of Global Change by Al Gore. A radical, comprehensive look at six forces, including economic globalization and digital communication, that are ushering in unprecedented change worldwide. Gore will speak in Session 8; watch his previous TED Talks.

Art and narrative:

Hello World: Where Design Meets Life by Alice Rawsthorn. A celebrated design critic walks through the history and impact of visionary contemporary design. Rawsthorn speaks in Session 11.

The Small Backs of Children: A Novel by Lidia Yuknavitch. A gripping novel about how an iconic photograph of a girl in war-torn Eastern Europe changes lives in unexpected ways. Yuknavitch speaks in Session 9.

Ellis Island: Ghosts of Freedom by Stephen Wilkes. A photographer’s five-year study of an abandoned Ellis Island hospital, where overgrown vines and trees burst through the floorboards. Wilkes will speak in Session 12.

Moving memoirs:

Year of Yes: How to Dance it out, Stand in the Sun and Be Your Own Person by Shonda Rhimes. In a memoir, this TV powerhouse shares how a year of saying “yes” to everything that scared her allowed her to fully embrace life — and how you can do the same. Shonda Rhimes will speak in Session 1.

Last Night on Earth by Bill T. Jones. A dance icon shares a meditative, stunning memoir that tracks his signature blend of art and activism throughout the decades. Bill T. Jones speaks in Session 1; watch his breathtaking improvisation from TED2015.

Even This I Get to Experience by Norman Lear. The Hollywood heavyweight who created All in the Family, Good Times and The Jeffersons shares his memoir, capturing the life lessons that shaped his legendary career. Lear will speak in Session 5.

 

TED2016 happens February 15-19, 2016, in Vancouver, Canada. Check the TED Blog and the @TEDNews twitter feed for live coverage. And in the meantime, read about the speaker program.



TEDIntroducing the TED Residency


Are you working on something that deserves wider exposure? Do you draw insights and inspiration from people in other fields? And have you dreamed of giving a TED Talk but don’t know how to get it—or yourself—onto our radar? Well, here’s your chance.

With TED moving into new headquarters in New York’s SoHo, there will be room to add new members to our in-house community, courtesy of a brand-new program: the TED Residency. Could one of these Residents be you?

If you are chosen as a TED Resident, you will spend four months at TED HQ developing your idea, supported by the TED team plus its extended family of amazing humans and, of course, your fellow Residents. We’ll provide office space and technical assistance; you’re responsible for your own room and board, travel, and living expenses.

You’ll present your idea worth spreading in TED’s brand-new theater—and yes, you can invite friends to cheer you on. Also, your talk will be considered for use on TED.com, and the contacts you’ll make just might change your life.

Apply today for this remarkable opportunity!


Krebs on SecurityGood Riddance to Oracle’s Java Plugin

Good news: Oracle says the next major version of its Java software will no longer plug directly into the user’s Web browser. This long overdue step should cut down dramatically on the number of computers infected with malicious software via opportunistic, so-called “drive-by” download attacks that exploit outdated Java plugins across countless browsers and multiple operating systems.

According to Oracle, some 97 percent of enterprise computers and a whopping 89 percent of desktop systems in the U.S. run some form of Java. This has made Java JRE (the form of Java that runs most commonly on end-user systems) a prime target of malware authors.

“Exploit kits,” crimeware made to be stitched into the fabric of hacked and malicious sites, lie in wait for visitors who browse the booby-trapped sites. The kits can silently install malicious software on computers of anyone visiting or forcibly redirected to booby-trapped sites without the latest version of the Java plugin installed. In addition, crooks are constantly trying to inject scripts that invoke exploit kits via tainted advertisements submitted to the major ad networks.

These exploit kits — using names like “Angler,” “Blackhole,” “Nuclear” and “Rig” — are equipped to try a kitchen sink full of exploits for various browser plugins, but historically most of those exploits have been attacks on outdated Java and Adobe Flash plugins. As a result, KrebsOnSecurity has long warned users to remove Java altogether, or at least unplug it from the browser unless and until it is needed.

On Jan. 27, 2016, Oracle took a major step toward reducing the effectiveness of exploit kits and other crimeware when the company announced it was pulling the browser plugin from the next desktop version of Java – Java JRE 9.

“By late 2015, many browser vendors have either removed or announced timelines for the removal of standards based plugin support, eliminating the ability to embed Flash, Silverlight, Java and other plugin based technologies,” wrote Dalibor Topic, principal product manager for Open Java Development Kit (OpenJDK).

“With modern browser vendors working to restrict and reduce plugin support in their products, developers of applications that rely on the Java browser plugin need to consider alternative options such as migrating from Java Applets (which rely on a browser plugin) to the plugin-free Java Web Start technology,” Topic continued. “Oracle plans to deprecate the Java browser plugin in JDK 9. This technology will be removed from the Oracle JDK and JRE in a future Java SE release.”

Crooks have used Java flaws to attack a broad range of systems, and not just Windows PCs: In 2012, the Flashback Trojan used a Java flaw to ensnare more than 600,000 Mac OS X systems in a massive botnet.

I look forward to a world without the Java plugin (and to not having to remind readers about quarterly patch updates) but it will probably be years before various versions of this plugin are mostly removed from end-user systems worldwide. And some businesses still reliant on very old versions of Java will continue to use outdated versions of the program.

But for most users, there is no better time like the present to determine whether you have Java installed and decide whether it’s time to give it the boot once and for all. Hopefully, this is the last time I will have to include these boilerplate instructions on how to do that:

Windows users can check for the program in the Add/Remove Programs listing in Windows, or visit Java.com and click the “Do I have Java?” link on the homepage. Oracle’s instructions for removing Java from Mac OS X systems are available here.

If you have a specific use or need for Java, make sure you have the latest version. Also, know that there is a way to have this program installed while minimizing the chance that crooks will exploit unknown or unpatched flaws in the program: Unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play, which can block Web sites from displaying both Java and Flash content by default). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel.

Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

Many people confuse Java with JavaScript, a powerful scripting language that helps make sites interactive. Unfortunately, a huge percentage of Web-based attacks use JavaScript tricks to foist malicious software and exploits onto site visitors. For more about ways to manage JavaScript in the browser, check out my tutorial Tools for a Safer PC.

CryptogramMore Details on the NSA Switching to Quantum-Resistant Cryptography

The NSA is publicly moving away from cryptographic algorithms vulnerable to cryptanalysis using a quantum computer. It just published a FAQ about the process:

Q: Is there a quantum resistant public-key algorithm that commercial vendors should adopt?

A: While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. Once these algorithms have been standardized, NSA will require vendors selling to NSS operators to provide FIPS validated implementations in their products. Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. NSA does not recommend implementing or using non-standard algorithms, and the field of quantum resistant cryptography is no exception.

[...]

Q: When will quantum resistant cryptography be available?

A: For systems that will use unclassified cryptographic algorithms it is vital that NSA use cryptography that is widely accepted and widely available as part of standard commercial offerings vetted through NIST's cryptographic standards development process. NSA will continue to support NIST in the standardization process and will also encourage work in the vendor and larger standards communities to help produce standards with broad support for deployment in NSS. NSA believes that NIST can lead a robust and transparent process for the standardization of publicly developed and vetted algorithms, and we encourage this process to begin soon. NSA believes that the external cryptographic community can develop quantum resistant algorithms and reach broad agreement for standardization within a few years.

Lots of other interesting stuff in the Q&A.

Planet DebianDirk Eddelbuettel: Like peanut butter and jelly: x13binary and seasonal

This post was written by Dirk Eddelbuettel and Christoph Sax and will be posted on both authors' respective blogs.

The seasonal package by Christoph Sax brings a very featureful and expressive interface for working with seasonal data to the R environment. It uses the standard tool of the trade: X-13ARIMA-SEATS. This powerful program is provided by the statisticians of the US Census Bureau based on their earlier work (named X-11 and X-12-ARIMA) as well as the TRAMO/SEATS program by the Bank of Spain. X-13ARIMA-SEATS is probably the best known tool for de-seasonalization of timeseries, and used by statistical offices around the world.

Sadly, it also has a steep learning curve. One interacts with a basic command-line tool which users have to download, install and properly reference (by environment variables or related means). Each model specification has to be prepared in a special 'spec' file that uses its own, cumbersome syntax.

While seasonal provides all the required functionality to use X-13ARIMA-SEATS from R --- see the very nice seasonal demo site --- it still required the user to manually deal with the X-13ARIMA-SEATS installation.

So we decided to do something about this. A pair of GitHub repositories provide both the underlying binary in per-operating-system form (see x13prebuilt) as well as a ready-to-use R package (see x13binary) which uses the former to provide binaries for R. And the latter is now on CRAN as package x13binary ready to be used on Windows, OS-X or Linux. And the seasonal package (in version 1.2.0 -- now on CRAN -- or later) automatically makes use of it. Installing seasonal and x13binary in R is now as easy as:

install.packages("seasonal")

which opens the door for effortless deployment of powerful deseasonalization. By default, the principal function of the package employs a number of automated techniques that work well in most circumstances. For example, the following code produces a seasonal adjustment of the latest data of US retail sales (by the Census Bureau) downloaded from Quandl:

library(seasonal) 
library(Quandl)   ## not needed for seasonal but has some niceties for Quandl data

rs <- Quandl(code="USCENSUS/BI_MARTS_44000_SM", type="ts")/1e3

m1 <- seas(rs)

plot(m1, main = "Retail Trade: U.S. Total Sales", ylab = "USD (in Billions)")

This tests for log-transformation, performs an automated ARIMA model search, applies outlier detection, tests and adjusts for trading day and easter effects, and invokes the SEATS method to perform seasonal adjustment. And this is what the adjusted series looks like:

Of course, you can access all available options of X-13ARIMA-SEATS as well. Here is an example where we adjust the latest data for Chinese exports (as tallied by the US FED), taking into account the different effects of Chinese New Year before, during and after the holiday:

xp <- Quandl(code="FRED/VALEXPCNM052N", type="ts")/1e9

m2 <- seas(window(xp, start = 2000),
          xreg = cbind(genhol(cny, start = -7, end = -1, center = "calendar"), 
                       genhol(cny, start = 0, end = 7, center = "calendar"), 
                       genhol(cny, start = 8, end = 21, center = "calendar")
          ),
          regression.aictest = c("td", "user"),
          regression.usertype = "holiday")

plot(m2, main = "Goods, Value of Exports for China", ylab = "USD (in Billions)")

which generates the following chart demonstrating a recent flattening in export activity measured in USD.

We hope this simple example illustrates both how powerful a tool X-13ARIMA-SEATS is and just how easy it is to use X-13ARIMA-SEATS from R now that we provide the x13binary package automating its installation.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: Gaming: The Talos Principle – Road to Gehenna

After finishing the Talos Principle I immediately started to play the extension Road to Gehenna, but was derailed near completion by the incredible Portal Stories: Mel. Now that I finally managed to escape from the test chambers my attention returned to the Road to Gehenna. As with the pair Portal 2 and Portal Stories: Mel, the challenges go up considerably from the original Talos Principle to the Road to Gehenna. Checking the hours of game play, it took me about 24h to get through all the riddles in Road to Gehenna, but I have to admit there were some riddles where I needed to cheat.

road-to-gehenna.jpg

The Road to Gehenna does not bring many new game play elements, but loads of new riddles. And best of all, it is playable on Linux! And as with the original game, the graphics are really well done, while still being playable on my Vaio Pro laptop with its Intel integrated graphics card – a plus that is rare in the world of computer games, where everyone is expected to have a high-end nVidia or Radeon card. Ok, there is not much action going on where quick graphics computations are necessary, but still the impression of the game is great.
gehenna1

The riddles contain the well known elements (connectors, boxes, jammer, etc), but the settings are often spectacular, sometimes very small and narrow, just a few moves if done in the right order, sometimes like wide open fields with lots of space to explore. Transportation between various islands suspended in the air is with vents, giving you a lot of nice flight time!
gehenna2

If one searches a lot, or uses a bit of cheating, one can find good old friends from the Portal series, buried in the sand in one of the worlds. This is not the only easter egg hidden in the game, there are actually a lot, some of which I have not seen but only read about afterwards. Guess I need to replay the whole game.
gehenna3

Coming back to the riddles, I really believe that the makers have been ingenious in using the few items at hand to create challenging and surprising riddles. As is so often the case, many of the riddles look completely impossible at first glance, and often even after staring at them for tens and tens of minutes. Until (and if) one has the a-ha effect and understands the trick. This often still needs a lot of handwork and trial-error rounds, but all in all the game is well balanced. What is a bit of a pain – similar to the original game – is collecting the stars to reach the hidden world and free the admin. There the developers overdid it in my opinion, with some rather absurd and complicated stars.
gehenna4

The end of the game, ascension of the messengers, is rather unspectacular. A short discussion on who remains and then a big closing scene with the messenger being beamed up a la Starship Enterprise, and a closing black screen. But well, the fun was with the riddles.
gehenna5

All in all an extension that is well worth the investment if one enjoyed the original Talos and is looking for rather challenging riddles. Now that I have finished all the Portal and Talos titles, I am thinking hard about what comes next … looking into Braid …

Enjoy!

Worse Than FailureCodeSOD: High Performance Memory Allocation

Jamie has a co-worker who subscribes to the “malloc is slow” school of thought. Now, for most programs, it’s fine, but Jamie works on a high-performance computing system operating in a massively parallel configuration, so there are portions of their application where that philosophy is completely valid.

In this case, however, the code Jamie’s co-worker wrote is in their message handling layer. There’s really no reason to pool buffers there, as the performance gain is practically non-existent based on the frequency of buffer allocation. That doesn’t change Jamie’s co-worker’s opinion though- malloc is slow.

void *buf_mem;
int *big_int_buf;
double *big_double_buf;
double **double_in, **double_out;
int **int_in, **int_out;
int num_connections, max_num_in, max_num_out;
/**
* Names have been changed to protect the guilty.
*/
int reallocate_buffers( void )
{
    free( buf_mem );
    buf_mem = malloc( (max_num_in + max_num_out) * 64 * 10000 * num_connections );
    big_int_buf = buf_mem;
    big_double_buf = buf_mem;
    for( int i = 0; i < num_connections; i++ )
    {
        int_in[i] = ((int *)buf_mem) [max_num_in * i ];
        int_out[i] = ((int *)buf_mem) [max_num_in * num_connections + i * max_num_out ];
        double_in[i] = ((double *)buf_mem) [max_num_in * i ];
        double_out[i] = ((double *)buf_mem) [max_num_in * num_connections + i * max_num_out ];
        long_in[i] = ((double *)buf_mem) [max_num_in * i ];
        long_out[i] = ((double *)buf_mem) [max_num_in * num_connections + i * max_num_out ];
    }
    return 0;
}

Now, there are a few treats in here. First, this buffer implementation is a header included in every C file in their code base, even files that never need a buffer. The number 10,000 is a “magic” number: magic in that the developer had no idea how big the buffer needed to be, and just kept throwing successively larger numbers at it until it stopped crashing.

But what makes this a pièce de résistance are the variables big_int_buf and big_double_buf – both of which point to the same buf_mem memory address. I hope you didn’t try and use both of those variables to store different types of data at the same time, because if you did, you’re going to have one heck of a time figuring out the resulting bugs. Sadly, Jamie did – and spent more time than is healthy untangling strange and inexplicable bugs.

[Advertisement] BuildMaster is more than just an automation tool: it brings together the people, process, and practices that allow teams to deliver software rapidly, reliably, and responsibly. And it's incredibly easy to get started; download now and use the built-in tutorials and wizards to get your builds and/or deploys automated!

Planet DebianMichal Čihař: Weekly phpMyAdmin contributions 2016-W04

As I've already mentioned in a separate blog post, we mostly had some fun with security issues in past weeks, but besides that some other work has been done as well.

I've still focused on code cleanups and identified several pieces of code which are no longer needed (given our required PHP version). Another issue related to the security updates was to set up testing of the 4.0 branch using PHP 5.2, as this is what we've messed up in the security release (which is quite bad, as this is the only branch supporting PHP 5.2).

In addition to this, I've updated phpMyAdmin packages in both Debian and Ubuntu PPA.

All handled issues:

Filed under: Debian English phpMyAdmin | 0 comments

Planet Linux AustraliaColin Charles: Donating to an opensource project when you download it

Apparently I’ve always thought that donating to opensource software that you use would be a good idea — I found this about Firefox add-ons. I suggested that the MariaDB Foundation do this for downloads to the MariaDB Server, and it looks like most people seem to think that it is an OK thing to do.

I see it being done with Ubuntu, LibreOffice, and more recently: elementary OS. The reasoning seems sound, though there was some controversy before they changed the language of the post. Though I’m not sure that I’d force the $0 figure. 

For something like MariaDB Server, this is mostly going to probably hit Microsoft Windows users; Linux users have repositories configured or use it from their distribution of choice! 

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Sysadmin Miniconf – Session 3

The life of a Sysadmin in a research environment – Eric Burgueno

  • Everything must be reproducible
  • Keeping systems up as long as possible, not having an overall uptime percentage
  • One person needs to cover lots of roles rather than specialise
  • 2 Servers with 2TB of RAM. Others smaller according to need
  • Lots of varied tools mostly bioinformatics software
  • 90TB to over 200TB of data over 2 years. Lots of large files. Big files, big servers.
  • Big job using 2TB of RAM taking 8 days to run.
  • The 2*2TB servers can be joined together to create a single 4TB server
  • Have to customize the environment for each tool, which is hard when there are lots of tools and you also want to compare/collaborate with other places where the software is being run.
  • Reproducible(?) Research

Creating bespoke logging systems and dashboards with Grafana, in fifteen minutes – Andrew McDonnell

Live Demo

Order in the chaos: or lessons learnt on planning in operations – Peter Hall

  • Lead of an Ops team at REA group. Looks after dev teams for 10-15 applications
  • Ops is not a project, but works with many projects
  • Many sources of work, dev, security, incidents, infrastructure improvement
  • Understand the work
    • Document your work
    • Talk about it, 15min standup
  • Schedule things
    • and prepare for the unplanned
    • Perhaps 2 weeks
    • Leave lots of slack
  • Interruptions
    • Assign team members to each ops teams
    • Rotating “ops goal keeper”
    • Developers on pager
  • Review Often
  • Longer term goals for your team
  • Failure demand vs value demand.
    • Make sure [at least some of] what you are doing is adding value to the environment

 

From Commit to Cloud – Daniel Hall

  • Deployments should be:
    • fast – 10 minutes
    • small – only one feature change, and the person doing it should be aware of all of what is changing
    • easy – as little human work as possible, simple to understand
  • We believe this because
    • less to break
    • devs should focus on dev
    • each project should be really easy to learn, devs can switch between projects easy
    • Don’t want anyone from being afraid to deploy
  • Able to rollback
    • 30 microservices
    • 2 devs plus some work from others
  • How to do it
    • Microservices arch (optional but helps)
    • git , build agent, packaging format with dependencies
    • something to run you stuff
  • code -> git -> built -> auto test -> package -> staging -> test -> deploy to prod
  • Application build is triggered by git
    • build.sh script in each repo
  • Auto test after build, don’t do end-to-end testing, do that in staging
  • Package app – they use docker – push to internal docker repo
  • Deploy to staging – they use curl to push JSON to Mesos/Marathon, which pulls the container; testing runs there (a rough curl sketch follows after this list)
  • Single Click approval to deploy to staging
  • Deploy to prod – should be same as how you deploy to staging.
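
For illustration only, here is a rough sketch of that staging push, assuming a Marathon master reachable at marathon.staging.example:8080 and an app definition in app.json; these names are placeholders and this is not the speaker's actual script.

    # Push (or update) an app definition in Marathon; Marathon then pulls the
    # referenced Docker image and (re)starts the container.
    # marathon.staging.example, app.json and myapp are placeholder names.
    curl -sS -X PUT \
         -H 'Content-Type: application/json' \
         -d @app.json \
         http://marathon.staging.example:8080/v2/apps/myapp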

LNAV – Paul Wayper

  • Point it at a directory and it reads all the files and sorts all the lines together in timestamp order (see the example after this list)
  • Colour codes machines and different facilities (daemons). Highlights IP addresses
  • Error lines in red, warning lines in yellow
  • Regular expressions highlighted. Fully PCRE compatible
  • Able to move back and forward an hour or a day at a time with special keys
  • Histogram of error lines, number per minute etc
  • more complete (SQL like) queries
  • compiles as a static binary
  • Ability to add your own log file formats
  • Ability share format filters with others
  • Doesn’t deal with journald logs
  • Available for EPEL, Fedora and Debian, but under a lot of active development.
  • acts like tail -f to spot updates to logs.
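
By way of example (mine, not from the talk), pointing lnav at a directory or a set of files gives the merged, timestamp-ordered view described above:

    # Merge everything under /var/log into a single timestamp-ordered view.
    lnav /var/log

    # Or pick specific files; lnav still interleaves them by timestamp.
    lnav /var/log/syslog /var/log/apache2/*.log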


Planet Linux AustraliaCraige McWhirter: Haskell is Not For Production and Other Tales - Katie Miller - LCA2016

Katie Miller gave an excellent talk about Haskell covering:

  • Haskell is at the heart of Sigma at Facebook handling more than 1 million requests / second
  • Haxl is a Haskell framework in Sigma and is used for fighting malicious activity on Facebook
  • Haxl is open source.
  • Haskell's origins are academic.
  • Haskell is used widely in industry. Katie listed quite a few.

Why Haskell?

  • The ability to reason about code, from:
    • Purity
    • Strong static typing
    • Abstract away from concurrency
  • Haskell performs:
    • As much as 3x as fast as its predecessor
    • 30 times as fast for the user experience.

The Myth of Haskell Being Difficult

  • Not necessarily difficult, but it is different and that results in friction.
  • Expectations:
    • Concepts don't map neatly to familiar languages.
    • It's a journey teaching you a new way to think.
  • Abstractions:
    • Abstract concepts can take time to get a handle on.
  • Type Errors
    • Can be a little difficult to explain to new users
    • Worth mastering due to the increase in productivity.

Teaching Haskell

  • Didn't mention monads.
  • Stuck to notations and implicit concurrency.
  • Created conversation space to discuss issues.
  • Results exceeded expectations.

Hiring Difficulty

  • There are more people wanting to work in Haskell than there are Haskell jobs.
  • An embarrassment of talent riches.

Haskell Community Difficulty

  • The community may have forgotten how to communicate with new people.
  • Keep in mind what it was like to be new.
  • Documentation for libraries needs to improve.
  • Work to create diverse and welcoming community.
  • Be technically brutal but personally respectful.

Code

  • Haskell is not immune to bad code.

"Using functional programming is a conclusion from a goal, not the goal itself" - Brian McKenna

  • The legacy of Haskell is spreading good ideas and this was an original goal.
  • Haskell's difference brings both benefits and challenges.

"Open Source = opportunity" - Katie Miller

Conclusion: Haskell is for production.

Haskell

Planet DebianRussell Coker: Compatibility and a Linux Community Server

Compatibility/interoperability is a good thing. It’s generally good for systems on the Internet to be capable of communicating with as many systems as possible. Unfortunately it’s not always possible as new features sometimes break compatibility with older systems. Sometimes you have systems that are simply broken, for example all the systems with firewalls that block ICMP so that connections hang when the packet size gets too big. Sometimes to take advantage of new features you have to potentially trigger issues with broken systems.

I recently added support for IPv6 to the Linux Users of Victoria server. I think that adding IPv6 support is a good thing due to the lack of IPv4 addresses even though there are hardly any systems that are unable to access IPv4. One of the benefits of this for club members is that it’s a platform they can use for testing IPv6 connectivity with a friendly sysadmin to help them diagnose problems. I recently notified a member by email that the callback that their mail server used as an anti-spam measure didn’t work with IPv6 and was causing mail to be incorrectly rejected. It’s obviously a benefit for that user to have the problem with a small local server than with something like Gmail.

In spite of the fact that at least one user had problems and others potentially had problems I think it’s clear that adding IPv6 support was the correct thing to do.

SSL Issues

Ben wrote a good post about SSL security [1] which links to a test suite for SSL servers [2]. I tested the LUV web site and got A-.

This blog post describes how to setup PFS (Perfect Forward Secrecy) [3]; after following its advice I got a score of B!

From the comments on this blog post about RC4 etc [4] it seems that the only way to have PFS and not be vulnerable to other issues is to require TLS 1.2.

So the issue is what systems can’t use TLS 1.2.
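
For reference, here is a minimal sketch of what requiring TLS 1.2 can look like on an Apache server with mod_ssl, plus a quick client-side check; the host name is a placeholder and this is not necessarily the exact configuration used on the LUV server.

    # Hypothetical mod_ssl settings for the site's SSL vhost: allow only
    # TLS 1.2 and prefer ECDHE ciphers for forward secrecy.
    #   SSLProtocol         -all +TLSv1.2
    #   SSLHonorCipherOrder on
    #   SSLCipherSuite      ECDHE+AESGCM:ECDHE+AES:!aNULL:!MD5:!RC4
    #
    # Check from a client which protocol versions the server still accepts.
    openssl s_client -connect server.example.org:443 -tls1_2 < /dev/null
    openssl s_client -connect server.example.org:443 -tls1   < /dev/null   # should now be refused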

TLS 1.2 Support in Browsers

This Wikipedia page has information on SSL support in various web browsers [5]. If we require TLS 1.2 we break support of the following browsers:

The default Android browser before Android 5.0. Admittedly that browser always sucked badly and probably has lots of other security issues and there are alternate browsers. One problem is that many people who install better browsers on Android devices (such as Chrome) will still have their OS configured to use the default browser for URLs opened by other programs (EG email and IM).

Chrome versions before 30 didn’t support it. But version 30 was released in 2013 and Google does a good job of forcing upgrades. A Debian/Wheezy system I run is now displaying warnings from the google-chrome package saying that Wheezy is too old and won’t be supported for long!

Firefox before version 27 didn’t support it (the Wikipedia page is unclear about versions 27-31). 27 was released in 2014. Debian/Wheezy has version 38, Debian/Squeeze has Iceweasel 3.5.16 which doesn’t support it. I think it is reasonable to assume that anyone who’s still using Squeeze is using it for a server given its age and the fact that LTS is based on packages related to being a server.

IE version 11 supports it and runs on Windows 7+ (all supported versions of Windows). IE 10 doesn’t support it and runs on Windows 7 and Windows 8. Are the free upgrades from Windows 7 to Windows 10 going to solve this problem? Do we want to support Windows 7 systems that haven’t been upgraded to the latest IE? Do we want to support versions of Windows that MS doesn’t support?

Windows mobile doesn’t have enough users to care about.

Opera supports it from version 17. This is noteworthy because Opera used to be good for devices running older versions of Android that aren’t supported by Chrome.

Safari supported it from iOS version 5, I think that’s a solved problem given the way Apple makes it easy for users to upgrade and strongly encourages them to do so.

Log Analysis

For many servers the correct thing to do before even discussing the issue is to look at the logs and see how many people use the various browsers. One problem with that approach on a Linux community site is that the people who visit the site most often will be more likely to use recent Linux browsers but older Windows systems will be more common among people visiting the site for the first time. Another issue is that there isn’t an easy way of determining who is a serious user, unlike for example a shopping site where one could search for log entries about sales.

I did a quick search of the Apache logs and found many entries about browsers that purport to be IE6 and other versions of IE before 11. But most of those log entries were from other countries, while some people from other countries visit the club web site it’s not very common. Most access from outside Australia would be from bots, and the bots probably fake their user agent.
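
A quick version of that search can be done with standard tools; for example, assuming the default combined log format (the log path is a placeholder):

    # Count the most common user-agent strings in an Apache combined-format log.
    awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20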

Should We Do It?

Is breaking support for Debian/Squeeze, the built in Android browser on Android <5.0, and Windows 7 and 8 systems that haven’t upgraded IE as a web browsing platform a reasonable trade-off for implementing the best SSL security features?

For the LUV server as a stand-alone issue the answer would be no as the only really secret data there is accessed via ssh. For a general web infrastructure issue it seems that the answer might be yes.

I think that it benefits the community to allow members to test against server configurations that will become more popular in the future. After implementing changes in the server I can advise club members (and general community members) about how to configure their servers for similar results.

Does this outweigh the problems caused by some potential users of ancient systems?

I’m blogging about this because I think that the issues of configuration of community servers have a greater scope than my local LUG. I welcome comments about these issues, as well as about the SSL compatibility issues.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Sysadmin Miniconf – Session 2

Site Reliability Engineering at Dropbox – Tammy Butow

  • Having an SLA and measuring against it. Capping ops work, blameless post mortems, always be coding
  • 400M customers, a billion files every day
  • Very hard to find people to scale, so build tool to scale instead
  • Team looks at 6,000 DB machines, look after whole machines not just the app
  • Build a lot of tools in python and go
  • PygerDuty – python library for pagerduty API
    • Easy to find the top things paging, write tools to reduce these
    • Regular weekly meeting to review those problems and make them better
    • If work is happening on machines then turn off monitoring on them so people don’t get woken up for things they don’t need to.
    • Going for days without any pages
  • Self-Healing and auto-remediation scripts
  • Hermes
    • Allocate and track tasks to groups
  • Automation of DB tasks
  • Bot copies pagerduty alerts in slack
  • Aim Higher
    • Create a roadmap for next 12 months
    • Building a rocketship while it is flying through the sky
  • Follow the Sun so people are working days
  • Post Mortem for every page
  • Frequent DR testing
  • Take time out to celebrate

I missed out writing up the next couple of talks due to technical problems

 


Planet Linux AustraliaTim Serong: A Gentle Introduction to Ceph

I told a little story about Ceph today in the sysadmin miniconf at linux.conf.au 2016. Simon and Ewen have already put a copy of my slides up on the miniconf web site, but for bonus posterity you can get them from here also. For some more detail, check out Lars’ SUSE Enterprise Storage – Design and Performance for Ceph slides from SUSECon 2015 and Software-defined Storage: Sizing and Performance.

Update (2016-02-07): The video of the talk can be found here.

Planet Linux AustraliaCraige McWhirter: Internet Archive: Universal Access, Open APIs - VM Brasseur - LCA2016

VM Brasseur gave the most revealing and informative talk I've attended thus far.

A brief tour:

  • Anonymise IP addresses
  • Run the Wayback Machine
  • Archive for paying customers
  • Donated crawled content
  • Deep crawl on popular sites
  • Broad shallow crawl of known domains
  • Targeted crawls.
  • On demand crawling from the webpage - so you can archive before it's pulled from the net.
  • Scan 1000 books per day
  • 30 scanning centres around the world.
  • Runs the Open Library, covering books, video and TV.
  • TV is now a citable resource
  • Live music archive
    • Includes every live show of the Grateful Dead.
  • Video game archive of classic games playable in the browser.
  • Will save anything digital, for free, publicly.

Open Library Api

  • Returns JSON (XML also available); see the example after this list
  • Evergreen and Koha are FOSS library applications that use the Open Library API.
  • "Do We Want It?" API to see what books the library needs.
  • There is an advanced search API too.
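
As a small illustration of that JSON interface (my example, not from the talk), the Open Library Books API answers plain HTTP GET requests:

    # Look up a book by ISBN; the response is JSON.
    # jscmd=data asks for richer metadata about the edition.
    curl -sS 'https://openlibrary.org/api/books?bibkeys=ISBN:0451526538&format=json&jscmd=data'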

IAS3 API

  • Drop in replacement for the Amazon S3 API.
  • NASA use the API. All the audio, video and photos are available on the Internet Archive.
  • ia-wrapper is a Perl client to access the API.
  • There is a Python client: internetarchive.

This is the talk that has blown my mind so far. This ought to have been a keynote. So very glad I made it. I highly recommend watching the video when it comes out. This was an amazing talk.

The Open Library

The Wayback Machine

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Sysadmin Miniconf – Session 1

Is that a Cloud in your pocket – Steven Ellis

  • What if you could have a demo of a stack on a phone
  • or on a memory stick or a mini raspberry-pi type PC
  • Nested Virtualisation
  • Hardware
    • Using Linux as host env, not so good on Win and Mac
    • Thinkpad, fedora or Centos, 128GB SSD
  • Nested Virtualisation
    • Huge performance boost over qemu
    • Use SSD
    • enable options in modules kvm-intel or kvm-amd (see the sketch after this list)
    • Confirm SSD perf 1st – hdparm -t /dev/sdX
    • Create base env for VMs, enable vmx in features
    • Make sure it uses a different network so it doesn't badly interact with ones further out
  • Think LVM
    • Create a thin pool for all envs
    • On LVM, set "issue_discards = 1" (in lvm.conf)
  • Base image
    • Doesn’t have to be minimal
    • update the base regularly
    • How do you build your base image?
      • Thin may go weirdly wrong
      • Always use kickstart to re-create it.
    • Think of your use case, don't skimp on the disk (eg 40G disk for image)
    • ssh keys, Enable yum cache
    • Patch once kicked
    • keep a content cache, maybe with rsync or mrepo
  • Turn off the VM and then use fstrim and partx to make it nice and small.
  • virt-manager can't manage thin volumes, so DON'T manually add the path there
  • use virsh to manually add the path.
  • snapshots of snapshots: great performance on SSD
  • Thin pools no longer activate automatically on some distros
  • packstack is a simple way to install a simple OpenStack setup
  • LVM vs QCOW
    • qcow okay for smaller images
    • cloud-init with atomic
    • do not snapshot a qcow image when it is running
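
A minimal sketch of the nested-virtualisation and SSD checks mentioned in the list above, assuming an Intel host; these are generic commands, not the speaker's exact recipe.

    # Is nested virtualisation already enabled for kvm-intel?
    cat /sys/module/kvm_intel/parameters/nested      # Y/N (or 1/0 on newer kernels)

    # Enable it persistently, then reload the module.
    echo 'options kvm-intel nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf
    sudo modprobe -r kvm_intel && sudo modprobe kvm_intel

    # Confirm raw SSD read performance before building the environment
    # (/dev/sdX is a placeholder for the actual device).
    sudo hdparm -t /dev/sdX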

Revisiting Unix principles for modern system automation – Martin Krafft

  • SSH Botnet
  • OSI of System Automation
  • Transport unix style, both push and pull
  • uses socat for low level data moving
  • autossh <- restarts ssh connection automatically
  • creates control socket (a generic OpenSSH sketch follows below)
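
The control-socket idea can be sketched with plain OpenSSH options; this is a generic illustration, not Martin's actual tooling.

    # Open a master connection once; -M enables ControlMaster and -S sets the
    # control socket path (%r/%h/%p expand to user, host and port).
    ssh -M -S /tmp/ctl-%r@%h:%p -f -N user@host.example

    # Subsequent commands reuse the existing connection through the socket,
    # so they start quickly and need no re-authentication.
    ssh -S /tmp/ctl-%r@%h:%p user@host.example uptime

    # autossh can supervise the master connection and restart it if it drops.
    autossh -M 0 -f -N -S /tmp/ctl-%r@%h:%p user@host.example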

A Gentle Introduction to Ceph – Tim Serong

  • Ceph gives a storage cluster that is self healing and self managed
  • 3 interfaces, object, block, distributed fs
  • OSD with files on them, monitor nodes
  • OSD will forward writes to other replicas of the data
  • clients can read from any OSD
  • Software defined storage vs legacy appliances
  • Network
    • Fastest you can, separate public and cluster networks
    • cluster faster than public
  • Nodes
    • 1-2G ram per TB of storage
    • read recommendations
  • SSD journals to cache writes
  • Redundancy
    • Replication – capacity impact but usually good performance
    • Erasure coding – Like raid – better space efficiency but impact in most other areas
  • Adding more nodes
    • tends to work
    • temp impact during rebalancing
  • How to size
    • understand your workload
    • make a guess
    • Build a 10% pilot
    • refine until perf is achieved
    • scale up the pilot (a quick status-check sketch follows this list)
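
For context on inspecting such a pilot cluster (generic ceph CLI commands, not from the talk), the basic status checks look like this:

    # Overall cluster health, monitor quorum and recovery status.
    ceph -s

    # How OSDs are arranged across hosts, and per-pool usage.
    ceph osd tree
    ceph df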

Keeping Pinterest running – Joe Gordon

  • Software vs service
    • No stable versions
    • Only one version is live
    • Devs support their own service – aligns incentives, eg monitoring built in
    • Testing against production traffic
  • SRE at Pinterest
    • Like a pit crew in F1
    • firefighting at scale
    • changing tires while moving
  • Operation Maturity
  • Operation Excellence
    • Have the best practices, docs, process, improvements
    • Repeatable deploys
  • Visibility
    • data driven company
    • Lots of Time series data – TSDB
    • Using ELK
  • Deployments
    • no impact to end user
    • easy to do, every few minutes
  • Canary vs Staging
    • Send dark (copies) of traffic to canary box without sending anything back to user
    • Bounce back to starting if problems
  • Teletraan
    • Rollback, hotfix, rolling deploy, starting and testing, visibility and useability
    • client-server model
    • pre/post download, restart, etc scripts included with every deployment
    • pause/resume various testing
  • Post mortems and production readiness reviews
  • Cloud is not infinite; you will often hit AWS capacity limits or even find nothing available in the region
  • Need to be able to make sure you know what you are running and whether it is efficiently used
  • Open sourced tools
    • mysql_utils – lots of tools to manage many DBs
    • Thrift tools
    • Teletraan – open sourced in Feb 2016
    • github.com/pinterest


Planet Linux AustraliaCraige McWhirter: Functional Programming in Python - Juan Nunez-Iglesias - LCA2016

Juan Nunez-Iglesias gave an informative talk on using toolz to write Python using functional programming methodologies.

  • There was an excellent real world example using streaming.
  • There were a couple of other examples using k-mers and a genome Markov model.
  • Python has a function call overhead that works out to a hard limit of around half a megabyte per second for these applications.

Juan provided an excellent introduction to using the libraries that enable you to write Python in the manner of a functional programming language.

Juan Nunez-Iglesias

Planet Linux AustraliaCraige McWhirter: Functional Programming, Parametricity, Types - Tony Morris - LCA2016

Tony Morris gave an excellent and informative talk that was mostly well beyond my current skill set. Some key points were:

  • Believes that functional programming has already won "the war".
  • Parametricity is a tool of high reward.
  • Every programme terminates in a "total language".
  • Gave an excellent and detailed example of a fictional parametric language.
  • Covered the limits of parametricity using Haskell as an example.

However my understanding of parametricity is greater than when I entered the room :-)

Sitting next to Brendan O'Dea for his post talk rants was definitely a good move.

LCA by the Bay

,

Planet Linux AustraliaCraige McWhirter: LCA2016 Tuesday Keynote - George Fong

George Fong opened by covering an extraordinary time in history: the birth of the Internet and, in particular, its origins in Australia.

George made some interesting points about Linux and the community:

  • 96.3% of the top 1 million websites run on Linux
  • Open Source and Free Software refer to a different set of views based on different values.

There was some coverage of the common foundations of Internet communities and the rough consensus.

"Be conservative in what you send and liberal in what you accept" - Jon Postel

Discussed the impact and challenges of mobile devices, the Internet of things and the reliability of maintenance.

Covered the innovation of the hacker / maker community doing a better job in some cases than the manufacturers.

Made some interesting points about how the Internet became a scary place, the rift between the tech community and commercial users.

Discussed the erosion of civil digital liberties and the interference of the state in devices.

The frustrations about the metadata laws were touched on, and Michael Cordover was quoted.

George wrapped up covering the loss of integrity and trust. A heightened awareness of the human impact.

"The cavalry are not coming. We are the cavalry" - George Fong

George gave an interesting and thought provoking keynote. Well worth watching when the video becomes available.

George Fong

Planet DebianLunar: Reproducible builds: week 40 in Stretch cycle

What happened in the reproducible builds effort between January 24th and January 30th:

Media coverage

Holger Levsen was interviewed by the FOSDEM team to introduce his talk on Sunday 31st.

Toolchain fixes

Jonas Smedegaard uploaded d-shlibs/0.63 which makes the order of dependencies generated by d-devlibdeps stable across locales. Original patch by Reiner Herrmann.

Packages fixed

The following 53 packages have become reproducible due to changes in their build dependencies: appstream-glib, aptitude, arbtt, btrfs-tools, cinnamon-settings-daemon, cppcheck, debian-security-support, easytag, gitit, gnash, gnome-control-center, gnome-keyring, gnome-shell, gnome-software, graphite2, gtk+2.0, gupnp, gvfs, gyp, hgview, htmlcxx, i3status, imms, irker, jmapviewer, katarakt, kmod, lastpass-cli, libaccounts-glib, libam7xxx, libldm, libopenobex, libsecret, linthesia, mate-session-manager, mpris-remote, network-manager, paprefs, php-opencloud, pisa, pyacidobasic, python-pymzml, python-pyscss, qtquick1-opensource-src, rdkit, ruby-rails-html-sanitizer, shellex, slony1-2, spacezero, spamprobe, sugar-toolkit-gtk3, tachyon, tgt.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

  • gnubg/1.05.000-4 by Russ Allbery.
  • grcompiler/4.2-6 by Hideki Yamane.
  • sdlgfx/2.0.25-5 fix by Felix Geyer, uploaded by Gianfranco Costamagna.

Patches submitted which have not made their way to the archive yet:

  • #812876 on glib2.0 by Lunar: ensure that functions are sorted using the C locale when giotypefuncs.c is generated.

diffoscope development

diffoscope 48 was released on January 26th. It fixes several issues introduced by the retrieval of extra symbols from Debian debug packages. It also restores compatibility with older versions of binutils which do not support readelf --decompress.

strip-nondeterminism development

strip-nondeterminism 0.015-1 was uploaded on January 27th. It fixes handling of signed JAR files which are now going to be ignored to keep the signatures intact.

Package reviews

54 reviews have been removed, 36 added and 17 updated in the previous week.

30 new FTBFS bugs have been submitted by Chris Lamb, Michael Tautschnig, Mattia Rizzolo, Tobias Frost.

Misc.

Alexander Couzens and Bryan Newbold have been busy fixing more issues in OpenWrt.

Version 1.6.3 of FreeBSD's package manager pkg(8) now supports SOURCE_DATE_EPOCH.

Ross Karchner did a lightning talk about reproducible builds at his work place and shared the slides.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Tuesday – Keynote: George Fong

George Fong – Chair of Internet Australia

The Challenges of the Changing Social Significance of the Nerd

  • “This is the first conference I’ve been to where there’s an extremely high per capita number of ponytails”
  • Linux is not just running web servers and other servers, but also network devices
  • Linux and the Web aren’t the same thing, but they’ve grown symbiotically and neither would be the same without the other
  • “One of the lessons we’ve learned in Australia is that when you mix technology with politics, you get into trouble”
  • “We have proof in Australia that if you take guns away from people, people stop getting killed”

 


CryptogramNSA and GCHQ Hacked Israeli Drone Feeds

The NSA and GCHQ have successfully hacked Israel's drones, according to the Snowden documents. The story is being reported by the Intercept and Der Spiegel. The Times of Israel has more.

Planet DebianRaphaël Hertzog: My Free Software Activities in January 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I did not ask for any paid hours this month and won’t be requesting paid hours for the next 5 months as I have a big project to handle with a deadline in June. That said I still did a few LTS related tasks:

  • I uploaded a new version of debian-security-support (2016.01.07) to officialize that virtualbox-ose is no longer supported in Squeeze and that redmine was not really supportable ever since we dropped support for rails.
  • Made a summary of the discussion about what to support in wheezy and started a new round of discussions with some open questions. I invited contributors to try to pickup one topic, study it and bring the discussion to some conclusion.
  • I wrote a blog post to recruit new paid contributors. Brian May, Markus Koschany and Damyan Ivanov applied and will do their first paid hours over February.

Distro Tracker

Due to many nights spent on playing Splatoon (I’m at level 33, rank B+, anyone else playing it?), I did not do much work on Distro Tracker.

After having received the bug report #809211, I investigated the reasons why SQLite was no longer working satisfactorily in Django 1.9 and I opened the upstream ticket 26063 and I had a long discussion with two upstream developers to find out the best fix. The next point release (1.9.2) will fix that annoying regression.

I also merged a couple of contributions (two patches from Christophe Siraut, one adding descriptions to keywords, cf #754413, one making it more obvious that chevrons in action items are actionable to show more data, a patch from Balasankar C in #810226 fixing a bad URL in an action item).

I fixed a small bug in the “unsubscribe” command of the mail bot, it was not properly recognizing source packages.

I updated the task notifying of new upstream versions to use the data generated by UDD (instead of the data generated by Christoph Berg’s mole-based implementation which was suffering from a few bugs). 

Debian Packaging

Testing experimental sbuild. While following the work of Johannes Schauer on sbuild, I installed the version from experimental to support his work and give him some feedback. In the process I uncovered #810248.

Python sponsorship. I reviewed and uploaded many packages for Daniel Stender who keeps doing great work maintaining prospector and all its recursive dependencies: pylint-common, python-requirements-detector, sphinx-argparse, pylint-django, prospector. He also prepared an upload of python-bcrypt which I requested last month for Django.

Django packaging. I uploaded Django 1.8.8 to jessie-backports.
My stable updates for Django 1.7.11 was not handled before the release of Debian 8.3 even though it was filed more than 1.5 months before.

Misc stuff. My stable update for debian-handbook was accepted fairly shortly after my last monthly report (thank you Adam!) so I uploaded the package once acked by a release manager. I also sponsored a backports upload of zim prepared by Joerg Desch.

Kali related work

Kernel work. The switch to Linux 4.3 in Kali resulted in a few bug reports that I investigated with the help of #debian-kernel and where I reported my findings back so that the Debian kernel could also benefit from the fixes I uploaded to Kali: first we included a patch for a regression in the vmwgfx video driver used by VMWare virtual machines (which broke the gdm login screen), then we fixed the input-modules udeb to fix support of some Logitech keyboards in debian-installer (see #796096).

Misc work. I made a non-maintainer upload of python-maxminddb to fix #805689 which had been removed from stretch and that we needed in Kali. I also had to NMU libmaxminddb since it was no longer available on armel and we actually support armel in Kali. During that NMU, it occurred to me that dh-exec could offer an "optional install" feature, that is, installing a file if it exists but not failing when it doesn't. I filed this as #811064 and it stirred up quite some debate.

Thanks

See you next month for a new summary of my activities.


Geek FeminismStand by your linkspam (1 February 2016)

  • Down and out in statistical computing | Adventures in Data (February 1): “So: unintentionally offensive variable name leads to a patch and the indication that it is much more than one person finding it offensive, leads to the President of the R Foundation dismissing the concerns as “shit-disturbing” and punishing the people who surfaced said concern.”
  • The week I made Forbes’ 30 Under 30 Science List | Sarah Guthals on Medium (January 10): “When being nominated or recognized for my efforts, it’s not that I’m the best, or the one that should get the recognition, but I am one of the people that should be recognized, and that recognition could allow me to highlight all of the other people and efforts that have contributed to me being able to continue making my efforts towards helping others.”
  • How to stop the sexual harassment of women in science: reboot the system | The Conversation (January 28): “we don’t need to wait for journalists and politicians to shine a spotlight on more individual cases of harassment. It’s time individual researchers, science managers, departments and institutions made the commitment to reboot science and wipe out harassment.”
  • The “Women in Tech” movement is full of victim blaming bullshit | Life Tips (January 14): “It is time to focus the work on holding the men in charge accountable- not just trying to do things to “help women”.”
  • Names and Harvard | Adventures in Renaming (January 26): “If Harvard can be so on the ball with preferred names, why can’t anyone else? Why can’t PayPal let me decide what name I want to show on Paypal.Me rather than plastering my full name? Why can’t I have my debit card show the name I’d rather overly-friendly cashiers call me? And why is Facebook still being fussy over names? Just one quick note to the administrators (maybe not even that), and done. Easy.”
  • Plug In With The DIY Tech Superstar Of Adafruit Industries: BUST Interview | Bust Magazine (January 21): “While still a student, she built an mp3 player from scratch “for fun.” And after her classmates took notice, Fried began selling her own DIY kits. Today, Adafruit occupies a 15,000 square-foot manufacturing facility in N.Y.C., and Fried hosts weekly web hangouts where she answers tech questions and interacts with makers of all skill levels.”
  • Lady Science | Slate (January 25): “sexism isn’t a women’s problem, it’s a problem for everyone. Also it helps if men speak up, because men who might be a part of the problem will tend to listen to other men more than women. Ironic, but once this idea gets traction with them that problem itself might diminish.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianRitesh Raj Sarraf: State of Email Clients on Linux Based Platforms

I've been trying to catch up on my reading list (thanks to rss2email, I can now hold on to the list for longer rather than just marking everything as read). One item from the end of last year worth spending time on was Thunderbird.

Thunderbird has been the email client of choice for many users. The main reason for its popularity has been, in my opinion, that it is cross platform, because that allows users an easy migration path across platforms. It also brings persistence, in terms of features and workflows, to end users. Perhaps that has been an important reason for many distributions (Ubuntu) and service providers to promote it as the default email client. A Windows/Mac user migrating to Ubuntu will have a much better experience if they see familiar tools, with their data and workflows intact.

Mozilla must have executed its plan pretty well to have been able to keep it rolling so far, because other attempts elsewhere (KDE4 on Windows) weren't so easy. Part of the reason may be that any time a disruptive update is rolled out (KDE4, GNOME3), a great many frustrated users are created. It is not that people don't want change. It's just that no one likes to see things break. But unfortunately, in the Free Software / Open Source world, that is taken lightly.

That's one reason why it takes Mozilla so so so long to implement Maildir in TB, when others (Evolution) have had it for so long.

So, recently, Mozilla announced its plans to drop Thunderbird development. It is not something new. Anyone using TB knows how long it has been in Maintenance/ESR mode.

What was interesting on LWN was the comments. People talked a lot about DE-native email clients (KMail, Sylpheed), TUI clients and, these days, browser-based clients. Surprisingly, not much was said about Evolution.

My recent move to GNOME has made me look into letting go of old tools/workflows and embracing newer ones, GNOME itself being one of them. Changing workflows for email was difficult and frustrating, but knowing that TB doesn't have a bright future, it was important to look for alternatives. Just having waited so long for Maildir and a GTK3 port of TB was enough.

On GNOME, Evolution may give an initial impression of being in maintenance mode, especially given that most GNOME apps are now moving to the new UI, which is more touch friendly, and also because there were other efforts to have another email client on GNOME (from Yorba, I think).

But even in its current form, Evolution is a pretty impressive email client and Personal Information Management tool. It is already ported to GTK3, which implies it is capable of responding to touch events. It sure could have a revised touch UI, like what is happening with other GNOME apps, but I'm happy that this has been deferred for now. Revising Evolution won't be an easy task, and knowing that GNOME too is understaffed, breaking a perfectly working tool won't be a good idea.

My intent with this blog post is to give credit to my favorite GNOME application, i.e. Evolution. So next time you are looking for an email client alternative, give Evolution a try.

Today, it already does:

  • Touch UI
  • Maildir
  • Microsoft Exchange
  • GTK3
  • Addressbook, Notes, Tasks, Calendar - Most Standards Based and Google Services compatible
  • RSS Feed Manager
  • And many more that I may not have been using

The only missing piece is being cross-platform. But given the trend, and the available resources, I think that path is not worth trying.

Keep It Simple. Support one platform and support it well.


Planet DebianMike Gabriel: My FLOSS activities in January 2016

In January 2016 I was finally able to work on various FLOSS topics again (after two months of heavily focussed local customer work):

  • Upload of MATE 1.12 to Debian unstable
  • Debian LTS packaging and front desk work
  • Other Debian activies
  • Edu Workshop in Kiel
  • Yet another OPSI Packaging Project

Upload of MATE 1.12 to Debian testing/unstable

At the beginning of the new year, I finalized the bundle upload of MATE 1.12 to Debian unstable [1]. All uploaded packages are available in Debian testing (stretch) and Ubuntu xenial by now. MATE 1.12 will also be the version shipped in Ubuntu MATE 16.04 LTS.

Additionally, I finally uploaded caja-dropbox to Debian unstable (non-free), thanks to Vangelis Mouhtsis and Martin Wimpress for doing the first preparation steps. The package has already left Debian's NEW queue, but unfortunately has been removed from Debian testing (stretch) again due to build failures in one of its dependencies.

Debian LTS work

In January 2016 I did my first round of Debian LTS front desk work [2]. Before actually starting with my front desk duty, I worked my way through the documentation and found it difficult to understand the output of the lts-cve-triage.py script. So, I proposed various improvements to the output of that script (all committed by now).

During the second week of January then, I triaged the following packages regarding known/open CVE issues:

  • isc-dhcp (CVE-2015-8605)
  • gosa (CVE-2015-8771, CVE-2014-9760)
  • roundcube (CVE-2015-8770)


Racialicious#BlackOscarBait: On The Importance of Casting

For the second year in a row there were zero actors of colour nominated for lead or supporting acting awards, and no Black directors recognised for their efforts. The lack of nominations for Creed and Straight Outta Compton stood out to many, leading to the resurgence of @ReignofApril’s #OscarsSoWhite tag, calls for a boycott of the show, and demands that there be immediate changes in the voting and nomination process. To the Academy’s credit they did move rather swiftly. Their statement began with: “The Board’s goal is to commit to doubling the number of women and diverse members of the Academy by 2020.” and outlined some important changes:

Beginning later this year, each new member’s voting status will last 10 years, and will be renewed if that new member has been active in motion pictures during that decade.  In addition, members will receive lifetime voting rights after three ten-year terms; or if they have won or been nominated for an Academy Award.

At the same time, the Academy will supplement the traditional process in which current members sponsor new members by launching an ambitious, global campaign to identify and recruit qualified new members who represent greater diversity.  

Under President Cheryl Boone Isaacs' leadership the Academy has taken steps to mitigate their diversity issues, but this top-down solution isn't the only thing necessary to fix Hollywood's race problem. Viola Davis hit on it when she won her Emmy for How To Get Away With Murder, working in the television industry, which is often considered more diverse. "You can't win an Emmy for a role that isn't there," she said during her acceptance speech. The same principle applies to the big screen: people of colour cannot be nominated for the roles that do not exist.

Changing the rules and membership the Academy operates under is a giant step, but does the number of diverse people there to vote matter if the number of diverse people in mainstream Hollywood films doesn’t increase at the same speed?

White actors have the largest breadth of roles in Hollywood. It's as easy for Steve Carell to make Foxcatcher as it is for him to make The 40 Year Old Virgin, Little Miss Sunshine, or The Big Short, and he'll receive acclaim or some kind of major nomination for all of them. Note the variety of topics and genres Carell is able to work with, not to mention the different directors and writers. Each of these films was marketed to a general audience, not just a white one. This makes sense since in 2014 white people made up 63% of the population and only made up 56% of the movie going audience, while Latin@s were 25% of the movie going audience despite making up 17% of the American population.

With numbers like that you'd think that actors like Gina Rodriguez would be more in demand. You might assume that there would be more than one Oscar Isaac taking over Hollywood along with the likes of fellow white heartthrobs Zac Efron, Channing Tatum, and, as inexplicable as it is, Benedict Cumberbatch.

The problem isn’t just those in the Academy voting, it’s the variety and number of roles that people of colour are offered. If you’re a good actress, say a Meryl Streep or Cate Blanchett, steadily working on a number of movies in different genres marketed towards wide audiences each year then your chances of making something that is clearly Oscar Bait or even stumbling into a nomination unexpectedly is much higher. Easy A comes to mind when thinking of “stumbling into nominations.” It brought Emma Stone an unexpected Golden Globe nomination (she was also later nominated for an Oscar for Birdman).  ‘Classic Literature retold via teen movie’ is a fairly common movie trope more often than not told via a majority white gaze, and often critically adored. Easy A represents a widely popular and financially successful genre of film that Black, Latino, Asian, and Native actors have never gotten to lead. We have no Jane Austen teen comedies starring Amandla Stenberg, no modern day high school Shakespeare romps starring Avan Jogia.

Exposure is just as important in the nomination and voting process as the quality of the movies themselves. I speak from experience, as someone whose job it was to fill out Emmy and Oscar ballots for someone during the 2011 awards season. I hadn't seen all the movies and shows up for nomination, and in some categories I voted based on the past work of those nominated that I'd seen most of and enjoyed. Whether their films deserve awards or not, I imagine it's easy for voters to check off boxes for Leonardo DiCaprio and Jennifer Lawrence simply because these are people who've had ample opportunity to star in a variety of things that show their talent as actors. Even if you're not thinking of Joy while voting, perhaps you're remembering that you enjoyed American Hustle, Silver Linings Playbook, or The Hunger Games or X-Men franchises. If Leonardo DiCaprio wins an Oscar this year for The Revenant I'll go to my grave swearing it's actually for Catch Me If You Can or The Wolf of Wall Street.

Jennifer Lawrence has made three movies a year since 2011. Michael B. Jordan made one movie in 2014. Despite being active in the business for longer than Lawrence, 2015 is the first time he’s released more than one film in a single year. This means she’s had triple the chance for interviews, soundbites, fake celebrity friendships, and quirky red carpet stumbles to stick in the public’s mind. Jordan’s not been offered the chance to make the same impact.

The strangest part is that as hard as it seems to be for Hollywood to imagine more roles for actors of colour, they’re the most obvious things in the world to me. Brooklyn reminded me how much I’d loved Re Jane –a modern retelling of Jane Eyre about a Korean-American girl from Queens who attempts to find herself and love in both America and Korea– and that I’d happily watch it play out on screen. If there’s room for a major studio backed movie about a blonde lady who invented a mop, there’s room for a fictionalised account about the man who invented the Supersoaker, or the woman who changed the landscape of Black hair care. I’d love to see Gina Rodriguez nominated for an unexpected Oscar because she got to stand out in a witty comedy written by some Hollywood darling, a la Bridesmaids. We’re about to get our 4th Batman film of the past 11 years, and yet no one has called on Pedro Pascal to reboot the Zorro franchise. It’s not difficult to think of all the movies that could be Brown Oscar Bait, or just entertaining, to mass audiences if Hollywood would just think to make and cast them.

When winning her SAG Award this past Saturday night, Viola Davis had more to offer. "I can play any character in Chekhov and Shakespeare and Miller. All of the actors of color I know don't place any limitations on themselves either," she said, in part. Just as an actor like Sean Penn isn't relegated to making Milk, actors of colour shouldn't be relegated to being recognised for the so called Important Movies™ like Selma and 12 Years a Slave. We need more Creeds and more Straight Outta Comptons. More variety; struggle borne from character driven personal goals and modern day adversaries rather than the fallbacks of bondage and whiteness. It's time for Hollywood to figure out a bottom-up approach on casting, the stories that are written with people of colour in mind, and the marketing of said films to meet the Academy's top-down approach in membership and voting change.


Worse Than FailureDude, Where's My Hard Drive?

Hard disk head crash

What, again? Michael stared at the Explorer window in disbelief. The free disk space bar was glowing red, and the text underneath reported that his half-terabyte system partition had a measly few gigs left before filling up.

When it had first happened, he hadn't thought twice about it. In fact, he'd been rather glad; at least he'd had the motivation to finally discard all the games and software he would never use again. But when the disk space ran out again the next month, and again the month after, he started getting more and more worried. Was he really using that much space, or was something else going on?

Curious, he decided to finally investigate the issue. A cursory look at his hard drive with WinDirStat confirmed his suspicions. With over 80 percent of his hard drive space labelled as "unknown", something was definitely amiss. He kept searching, manually scouring through his folders and files, until finally he managed to pinpoint the culprit: an innocuously named "C:\Windows\System32\Config" folder filled with hundreds of thousands of files, taking up 420 gigabytes in size.

A quick trip to Google and a bit of playing with Process Monitor revealed the answer to the mystery. As it turned out, every modification to Windows Registry—the oft-derided database of all the Windows and Windows application settings—generated a transaction log file to ensure the data integrity, prevent corruption, and allow rollback of changes. Usually those small 512KB files weren't much of an issue. They got deleted after a clean reboot, and most software only modified the registry during installation or after a configuration change.

However, some applications and drivers—among them, Nvidia's 3D service—didn't play nice with the registry, shuffling the values around every few seconds or minutes. That, together with Michael's habit of not turning the computer off too often, resulted in cluttering the disk with more and more files until it filled up completely.

The solution, luckily, was rather simple. Michael purged the folder of all but the most recent log files, then uninstalled all the unnecessary bloatware from Nvidia, hoping it was the last thing he'd be deleting for a long while.


RacialiciousSundance Pick: KIKI

Ball gives life.

Explosive energy, fierce fashion, and a strict, family focused culture are all hallmarks of the ballroom social scene.

Featuring the lives of Chi Chi Mizrahi, Christopher Waldorf, Divo Pink Lady, Gia Marie Love, Izana “Zariya” Vidal , Kenneth “Symba McQueen” Soler-Rios and co-written by Twiggy Pucci Garçon, KIKI is a joyous and energetic look at the next generation of unwavering LGBTQ self advocacy in the face of a hostile world. The artist’s description of the film is full of affirmations and vision statements, revealing the core idea underlying the documentary:

In this film collaboration between Kiki gatekeeper, Twiggy Pucci Garçon, and Swedish filmmaker Sara Jordenö, viewers are granted exclusive access into this high-stakes world, where fierce Ballroom competitions serve as a gateway into conversations surrounding Black- and Trans- Lives Matter movements. This new generation of Ballroom youth use the motto, “Not About Us Without Us,” and KIKI in kind has been made with extensive support and trust from the community, including an exhilarating score by renowned Ballroom and Voguing Producer Collective Qween Beat. Twiggy and Sara’s insider-outsider approach to their stories breathes fresh life into the representation of a marginalized community who demand visibility and real political power.

25 years after the debut of Paris is Burning, the controversial documentary sensation is starting to age. The references, music and fashion feel dated, and the persistent questions around storytelling, racism, and exploitation are just as relevant today as they were when the film aired.

KIKI brings a fresh update and perspective on how ball has changed. KIKI exists in a world where transgender rights have been pushed into the limelight by brave pioneers like Laverne Cox and Janet Mock. While Paris is Burning valorized passing, most of the characters in KIKI believe they exist on a transgender continuum. Some openly identify as trans, but question if the label isn’t just another restriction. The idea of “realness” being divided into “female” and “male” categories is interrogated by one character, who asks why the LGBTQ ball scene has to follow the same binaries as the rest of the heterosexual world.

KIKI also features refreshingly frank conversations about policy over the course of the film. While all of the delicious aspects of ballroom culture are still in full effect – music! costumes! competition! – the narrative frequently swings toward the activism of the participants outside of the scene. The houses look at violence, police brutality, medical policy and housing policy and do their best to advocate for more rights and protections under the law. The director’s notes reveal some of urgency around their activism:

Kiki scene-members range in age from young teens to 20’s, and many have been thrown out of their homes by their families or otherwise find themselves on the streets. As LGBTQ people-of-color, they constitute a minority within a minority. An alarming 50% of these young people are HIV positive. The Kiki scene was created within the LGBTQ youth-of- color community as a peer-led group offering alternative family systems (“houses”), HIV awareness teaching and testing, and performances geared towards self-agency. The scene has evolved into an important (and ever-growing) organization with governing rules, leaders and teams, now numbering hundreds of members in New York and across the U.S and Canada. Run by LGBTQ youth for LGBTQ youth, it draws strategies from the Civil Rights, Gay Rights and Black Power movements.

Conversations about race, masculinity, femininity, performance all feature prominently throughout the doc. One illuminating conversation featured Gia and another activist debating the role of sex work in the LGBTQ communities. Symba soberly recounts the realities of living with HIV and the horrible day he found out he was positive. Twiggy Pucci Garçon heads to the White House to advocate for LGBTQ rights, but finds out he was evicted from his home while traveling. But not all the moments of discussion are heavy. Chi Chi Mizrahi breaks down the various types of experiences on the transgender experience, but explains that none of those labels fit him: “I’m just a boy that likes to play in women’s clothes!” He says, before analyzing a pair of shoes for both their masculine and feminine qualities.

KIKI does not have a clear narrative arc but the confusion doesn’t distract from the core purpose of the film. One of the most powerful devices used in the film is the long, silent camera shots that focus deeply on each subject. The camera lingers, forcing the viewer to confront the subject’s stoic gaze. Each shot feels a beat too long, but that is the point – it’s a quiet exhortation of the viewer to look, really look, at what is in front of them.

For all the pageantry and opulence of ball culture, all anyone really wants is to be seen for who they are.


CryptogramNSA's TAO Head on Internet Offense and Defense

Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) group -- basically the country's chief hacker -- spoke in public earlier this week. He talked both about how the NSA hacks into networks, and what network defenders can do to protect themselves. Here's a video of the talk, and here are two good summaries.

Intrusion Phases
  • Reconnaissance
  • Initial Exploitation
  • Establish Persistence
  • Install Tools
  • Move Laterally
  • Collect Exfil and Exploit

The event was the USENIX Enigma Conference.

The talk is full of good information about how APT attacks work and how networks can defend themselves. Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn't want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US -- and presumably allied -- networks to improve. My guess is that this is simply a NOBUS issue. The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won't slow it down significantly. And the Chinese/Russian/etc state-sponsored attackers will have a harder time. Or, at least, that's what the NSA wants us to believe.

Wheels within wheels....

More information about the NSA's TAO group is here and here. Here's an article about TAO's catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools over the past ten years.

BoingBoing post.

EDITED TO ADD (2/2): I was talking with Nicholas Weaver, and he said that he found these three points interesting:

  • A one-way monitoring system really gives them headaches, because it allows the defender to go back after the fact and see what happened, remove malware, etc.

  • The critical component of APT is the P: persistence. They will just keep trying, trying, and trying. If you have a temporary vulnerability -- the window between a vulnerability and a patch, temporarily turning off a defense -- they'll exploit it.

  • Trust them when they attribute an attack (e,g: Sony) on the record. Attribution is hard, but when they can attribute they know for sure -- and they don't attribute lightly.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Monday – Session 3

Cloud Anti-Patterns – Casey West

  • The 5 stages of Cloud Native
  • Deploying my apps to the cloud is painful – why?
  • Denial
    • “Containers are like tiny VMs”
    • Anti-Pattern 1 – do not assume what you have now is what you want to put into the cloud or a container
    • “We don’t need to automate continuous delivery”
    • "We shouldn't automate what we have until it is perfect" – rather, automate to make things consistent (not always perfect, at least at the start)
  • Anger
    • “works on my machine”
    • Devs just push straight from dev boxes to production
    • Not about making worse code go to production faster
    • Aim for repeatable, testable builds, just faster
  • Bargaining
    • “We crammed the monolith into a container and called it a microservice”
    • Anti-Pattern: critically think about what you need to re-factor (or "re-platform")
    • ” Bi-modal IT “
    • Some stuff on fast lane, some stuff on old-way slow lane
    • Anti-pattern: legacy products put into the slow lane; these are often the ones that really need to be fixed.
    • "Micro-services" talking to the same data source, not APIs
  • Depression
    • “200 microservices but forgot to setup Jenkins”
    • “We have an automated build pipeline but only release twice per year”
  • Acceptance
    • All software sucks, even the stuff we write
    • Respect CAP theorem
    • Respect Conway’s Law
    • Small batch sizes work for replatforming too
  • Microservices architecture, Devops culture, Continuous delivery – Pick all three

Cloud Crafting – Public / Private / Hybrid  – Steven Ellis

  • What does Hybrid mean to you?
  • What is private Cloud (IAAS)
  • Hybrid – communicate to public cloud and manage local stuff
  • ManageIQ – single pane of glass for hardware, VMs, clouds, containers
  • What does it do?
    • Brownfields as well as Greenfields, gathers current setup
    • Discovery, API presentations, control, and detecting when an env is non-compliant (eg not fully patched)
    • On-premise or public cloud
    • Supplied as a virtual appliance, HA, scale out
    • Platform – CentOS 7, Rails, PostgreSQL, GUI, some dashboards out of the box.
  • Get involved
    • Online, roadmap is public
    • Various contributors
  • DEMO
  • Just put in credentials to allow access and then it can gather the data straight away

Live Migration of Linux Containers by Tycho Andersen

  • LXC / LXD
  • LXD is a REST API that you use to control the container system
  • tool -> REST -> daemon -> lxc -> kernel
  • “lxc move host1:c1 host2: ” – live migration (a fuller client-side sketch follows after this list)
    • Needs a bit of work since lots is moving and there are lots of ways it could fail
    • 3 channels created, control, filesystem, container processes state
  • CRIU
    • 5 years of check-pointing
    • Lots based on OpenVZ’s initial work
    • All sorts of things need to support check-pointing and moving (eg selinux)
    • Iterative migration added
    • Lots of hooks needed for very privileged kernel features
  • Filesystems
    • btrfs, lvm, zfs, (swift, nfs), have special support for migration that it hooks into
    • rsync between incompatible hosts
  • Memory State
    • Stop the world and move it all
    • Iterative incremental transfer (via p.haul) being worked on.
  • LXC + LXD 2.0 should be in Ubuntu 16.04 LTS
  • Need to use latest versions and latest kernels for best results.
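A minimal command sketch of the migration workflow described above, assuming two LXD hosts with CRIU installed and already trusting each other; the remote addresses, the container name c1 and the image alias are illustrative assumptions:

    # Register both LXD daemons as remotes on the machine driving the migration
    # (addresses are hypothetical).
    lxc remote add host1 192.168.1.10
    lxc remote add host2 192.168.1.11

    # Launch a test container on the first host.
    lxc launch ubuntu:16.04 host1:c1

    # Live-migrate it to the second host -- the command shown in the talk.
    lxc move host1:c1 host2:

    # Confirm the container is now running on host2.
    lxc list host2:

In practice the move sets up the three channels mentioned above (control, filesystem, container process state) and falls back to rsync when the two hosts do not share a compatible storage backend.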


Planet Linux AustraliaCraige McWhirter: ESPlant - Open Hardware Miniconf - LCA2016

Today at Linux Conference Australia 2016 or "LCA by the Bay" in Geelong, I attended the Open Hardware miniconf for the first time after a number of years of wanting to attend but being unable to.

This year they were making an ESPlant (Environment Sensor Plant), which is kind of timely.

The solar powered plant we're building has the following sensors:

  • BME280 Temperature/Humidity/Barometric Pressure sensor
  • 2 soil moisture sensors
  • DS18B20 temperature sensor
  • PIR (infrared motion) sensor (in case your plants are running away from something)
  • ADXL345 accelerometer (to see how fast your plants are running away)
  • WS2812B LED strip
  • UV sensor SEN0162 from DF Robot.

Top of the ESPlant board

Underside of the ESPlant board

It only took about 30 minutes to assemble, and it looked like this on my desk:

All connected and running

This was an enjoyable and extremely productive hardware hacking session. I am going to have some of these all over our gardens and they will look something like this but with added weather shields:

Prototype in the garden

I intend to hook the moisture sensors into the watering system so that water is delivered when soil moisture levels reach a pre-determined low point and stopped once they reach a desired saturation point. I want to maximise the efficiency of water consumption and I think this should help.

Sample raw serial output from the ESPlant:

temp = 26.22
pressure = 1005.65
humidity = 43.34
acc/x = 0.04  acc/y = -0.35  acc/z = 10.04
adc/uv_sensor = 0
adc/soil_1 = 91
adc/soil_2 = 38
adc/input_voltage = 4398
adc/internal_temp = 1656
external/temp_sensor = 24.00
chip/free_heap = 47120
chip/vcc = 3431
pir = low
led = 6

This data could also be fed into some long-term graphing on my server via its WiFi chip.
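As a rough sketch of that idea (not from the original post), the serial output above could be logged to a CSV for graphing; the device path, baud rate and log location below are assumptions:

    #!/bin/bash
    # Log selected ESPlant readings ("key = value" lines, as in the sample output)
    # to a timestamped CSV that a graphing tool can pick up later.
    PORT=/dev/ttyUSB0
    LOG=/var/log/esplant.csv

    stty -F "$PORT" 115200 raw -echo

    while IFS= read -r line; do
        case "$line" in
            "temp ="*|"pressure ="*|"humidity ="*|"adc/soil_"*)
                key=${line%% =*}
                value=${line#*= }
                # Soil readings below a chosen threshold could also trigger the
                # watering system mentioned above.
                echo "$(date +%s),${key},${value}" >> "$LOG"
                ;;
        esac
    done < "$PORT"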

Planet Linux AustraliaOpenSTEM: Free Robotics Incursion in Brisbane Area

If your school or homeschool group is based in or around the Brisbane area, we can visit with our robotic caterpillar and other critters as part of our FREE Robotics Incursion.

The caterpillar has quickly become our main mascot, as students, teachers and parents take a liking to it! It is an autonomous robot, with a 3D printed frame and Arduino controlled electronics.

The OpenSTEM caterpillar design and code are fully open and also serve as a good example of how subjects such as robotics can be explored at relatively low cost – that is, without expensive branded kits. This can be a real enabler for both school and home.

For more details on what we cover and do on this incursion, see the Robotics Incursion page, or contact us to discuss!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2016 – Monday – Session 1

Open Cloud Miniconf – Continuous Delivery using blue-green deployments and immutable infrastructure by Ruben Rubio Rey

  • Lots of things can go wrong in a deployment
  • Often hard to do rollbacks once upgrade happens
  • Blue-green deployment means running several environments at the same time, each potentially with a different version (see the cutover sketch after these notes)
  • Immutable infrastructure: split between data (which changes) and everything else, which only ever gets replaced wholesale by deployments, never changed in place
  • When you use Docker, don’t store data in the container; that keeps it immutable. But containers are not required to do this.
  • Rule 1 – Never modify the infrastructure
  • Rule 2 – Instead of modifying – always create from ground up everything that is not data.
  • Advantages
    • Rollbacks easy
    • Avoid Configuration drift
    • Updated and accurate infrastructure documentation
  • Split things up
    • No State – LBs, Web servers, App Servers
    • Temp data , Volatile State – message queues, email servers
    • Persistent data – Databases, Filesystems, slow warming cache
  • In case of temp data you have to be able to drain
  • Use LBs and multiple servers to split up the infrastructure; more pieces give more room to stagger the upgrades.
  • If pending jobs require old/new version of app then route to servers that have/not been upgraded yet.
  • Put toy rocket launcher in devs office, shoots person who broke the build.
  • Need an activity script to bleed traffic off a section of the “temp data” layer of infrastructure, determine when it is empty, and then re-create it.

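As a rough illustration of the blue-green cutover described above (not from the talk), here is a minimal sketch assuming an nginx front end and two app environments listening on ports 8081 (“blue”) and 8082 (“green”); the file path and port numbers are assumptions:

    #!/bin/bash
    # Point the nginx upstream at the requested environment, then reload nginx
    # so live traffic cuts over while old requests drain on the old backend.
    set -e

    TARGET="$1"   # "blue" or "green"
    case "$TARGET" in
        blue)  PORT=8081 ;;
        green) PORT=8082 ;;
        *) echo "usage: $0 blue|green" >&2; exit 1 ;;
    esac

    # Hypothetical include file referenced from the main nginx config, e.g.:
    #   upstream app { include /etc/nginx/app_upstream.conf; }
    echo "server 127.0.0.1:${PORT};" > /etc/nginx/app_upstream.conf

    nginx -t            # sanity-check the configuration
    nginx -s reload     # graceful reload

Rolling back is then just re-running the script with the other colour, which matches the “rollbacks easy” advantage listed above.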

,

Planet Linux AustraliaLeon Brooks: An education in two minutes

Today, another will switch from the Borg to Kubuntu.  All that was necessary was to list the things one can do without viruses, without paying the proverbial arm-and-leg, without facing six conflicting EULAs to use one application.  Done.

In less sanguine news, it became apparent that the heartless hypocrite has further isolated one of their emotional slaves.

,

Krebs on SecuritySources: Security Firm Norse Corp. Imploding

Norse Corp., a Foster City, Calif. based cybersecurity firm that has attracted much attention from the news media and investors alike this past year, fired its chief executive officer this week amid a major shakeup that could spell the end of the company. The move comes just weeks after the company laid off almost 30 percent of its staff.

Sources close to the matter say Norse CEO Sam Glines was asked to step down by the company’s board of directors, with board member Howard Bain stepping in as interim CEO. Those sources say the company’s investors have told employees that they can show up for work on Monday but that there is no guarantee they will get paid if they do.

A snapshot of Norse’s semi-live attack map.

Glines agreed earlier this month to an interview with KrebsOnSecurity but later canceled that engagement without explanation. Bain could not be immediately reached for comment.

Two sources at Norse said the company’s assets will be merged with Irvine, Ca. based networking firm SolarFlare, which has some of the same investors and investment capital as Norse. Neither Norse nor SolarFlare would comment for this story. Update, Feb. 1, 12:34 p.m. ET: SolarFlare CEO Russell Stern just pinged me to say that “there has been no transaction between Norse and SolarFlare.”

Original story: The pink slips that Norse issued just after New Year’s Day may have come as a shock to many employees, but perhaps the layoffs shouldn’t have been much of a surprise: A careful review of previous ventures launched by the company’s founders reveals a pattern of failed businesses, reverse mergers, shell companies and product promises that missed the mark by miles.

EYE CANDY

In the tech-heavy, geek-speak world of cybersecurity, infographics and other eye candy are king because they promise to make complicated and boring subjects accessible and sexy. And Norse’s much-vaunted interactive attack map is indeed some serious eye candy: It purports to track the source and destination of countless Internet attacks in near real-time, and shows what appear to be multicolored fireballs continuously arcing across the globe.

Norse says the data that feeds its online attack map come from a network of more than eight million online “sensors” — honeypot systems that the company has strategically installed at Internet properties in 47 countries around the globe to attract and record malicious and suspicious Internet traffic.

According to the company’s marketing literature, Norse’s sensors are designed to mimic a broad range of computer systems. For example, they might pretend to be a Web server when an automated attack or bot scans the system looking for Web server vulnerabilities. In other cases, those sensors might watch for Internet attack traffic that would typically only be seen by very specific machines, such as devices that manage complex manufacturing systems, power plants or other industrial control systems.

Several departing and senior Norse employees said the company’s attack data was certainly voluminous enough to build a business upon — if not especially sophisticated or uncommon. But most of those interviewed said Norse’s top leadership didn’t appear to be interested in or capable of building a strong product behind the data. More worryingly, those same people said there are serious questions about the validity of the data that informs the company’s core product.

UP IN SMOKE(S)

Norse Corp. and its fundamental technology arose from the ashes of several companies that appear to have been launched and then acquired by shell companies owned by Norse’s top executives — principally the company’s founder and chief technology officer Tommy Stiansen. Stiansen did not respond to multiple requests for comment.

This acquisition process, known as a “reverse merger” or “reverse takeover,” involves the acquisition of a public company by a private company so that the private company can bypass the lengthy and complex process of going public.

Reverse mergers are completely legal, but they can be abused to hide the investors in a company and to conceal certain liabilities of the acquired company, such as pending lawsuits or debt. In 2011, the U.S. Securities and Exchange Commission (SEC) issued a bulletin cautioning investors about plunking down investments in reverse mergers, warning that they may be prone to fraud and other abuses.

The founders of Norse Corp. got their start in 1998 with a company called Cyco.net (pronounced “psycho”). According to a press release issued at the time, “Cyco.net was a New Mexico based firm established to develop a network of cyber companies.”

“This site is a lighthearted destination that will be like the ‘People Magazine’ of the Internet,” said Richard Urrea, Cyco’s CEO, in a bizarre explanation of the company’s intentions. “This format has proven itself by providing Time Warner with over a billion dollars of ad revenue annually. That, combined with the CYCO.NET’s e-commerce and various affiliations, such as Amazon.com, could amount to three times that figure. Not a portal like Yahoo, the CYCO.NET will serve as the launch pad to rocket the Internet surfer into the deepest reaches of cyberspace.”

In 2003, Cyco.net acquired Orion Security Services, a company founded by Stiansen, Norse’s current CTO and founder and the one Norse executive who is actually from Norway. Orion was billed as a firm that provides secure computer network management solutions, as well as video surveillance systems via satellite communications.

The Orion acquisition reportedly came with $20 million in financing from a private equity firm called Cornell Capital Partners LP, which listed itself as a Cayman Islands exempt limited partnership whose business address was in Jersey City, NJ.

Cornell later changed its name to Yorkville Advisors, an entity that became the subject of an investigation by the U.S. Securities and Exchange Commission (SEC) and a subsequent lawsuit in which the company was accused of reporting “false and inflated values.”

Despite claims that Cyco.net was poised to “rocket into the deepest riches of cyberspace,” it somehow fell short of that destination and ended up selling cigarettes online instead. Perhaps inevitably, the company soon found itself the target of a lawsuit by several states led by the Washington state attorney general that accused the company of selling tobacco products to minors, failing to report cigarette sales and taxes, and for falsely advertising cigarettes as tax-free.

COPYRIGHT COPS

In 2005, Cyco.net changed its name to Nexicon, but only after acquiring by stock swap another creation by Stiansen — Pluto Communications — a company formed in 2002 and whose stated mission was to provide “operational billing solutions for telecom networks.” Again, Urrea would issue a press release charting a course for the company that would have almost no bearing on what it actually ended up doing.

“We are very excited that the transition from our old name and identity is now complete, and we can start to formally reposition our Company under the new brand name of Nexicon,” Urrea said. “After the divestiture of our former B2C company in 2003, we have laid the foundation for our new business model, offering all-in-one or issue-specific B2B management solutions for the billing, network control, and security industries.”

In June 2008, Sam Glines — who would one day become CEO of Norse Corp. — joined Nexicon and was later promoted to chief operating officer. By that time, Nexicon had morphed itself into an online copyright cop, marketing a technology they claimed could help detect and stop illegal file-sharing. The company’s “GetAmnesty” technology sent users a pop-up notice explaining that it was expensive to sue the user and even more expensive for the user to get sued. Recipients of these notices were advised to just click the button displayed and pay for the song and all would be forgiven.

In November 2008, Nexicon was acquired by Priviam, another shell company operated by Stiansen and Nexicon’s principals. Nexicon went on to sign Youtube.com and several entertainment studios as customers. But soon enough, reports began rolling in of rampant false-positives — Internet users receiving threatening legal notices from Nexicon that they were illegally sharing files when they actually weren’t. Nexicon/Priviam’s business began drying up, and its stock price plummeted.

In September 2011, the Securities and Exchange Commission revoked the company’s ability to trade its penny stock (then NXCO on the pink sheets), noting that the company had failed to file any periodic reports with the SEC since its inception. In June 2012, the SEC also revoked Priviam’s ability to trade its stock, citing the same compliance failings that led to the de-listing of Nexicon.

By the time the SEC revoked Nexicon’s trading ability, the company’s founders were already working to reinvent themselves yet again. In August 2011, they raised $50,000 in seed money from Capital Innovators to jump-start Norse Corp. A year later, Norse received $3.5 million in debt refinancing, and in December 2013 got its first big infusion of cash — $10 million from Oak Investment Partners. In September 2015, KPMG invested $11.4 million in the company.

Several former employees say Stiansen’s penchant for creating shell corporations served him well in building out Norse’s global sensor network. Some of the sensors are in countries where U.S. assets are heavily monitored, such as China. Those same insiders said Norse’s network of shell corporations also helped the company gain visibility into attack traffic in countries where it is forbidden for U.S. firms to do business, such as Iran and Syria.

THE MAN BEHIND THE CURTAIN

By 2014, Norse was throwing lavish parties at top Internet security conferences and luring dozens of smart security experts away from other firms. Among them was Mary Landesman, formerly a senior security researcher at Cisco Systems. Landesman said Norse had recently hired many of her friends in the cybersecurity business and had developed such a buzz in the industry that she recruited her son to come work alongside her at the company.

As a senior data scientist at Norse, Landesman’s job was to discover useful and interesting patterns in the real-time attack data that drove the company’s “cyber threat intelligence” offerings (including its eye candy online attack map referenced at the beginning of this story). By this time, former employees say Norse’s systems were collecting a whopping 140 terabytes of Internet attack and traffic data per day. To put that in perspective a single terabyte can hold approximately 1,000 copies of the Encyclopedia Britannica. The entire printed collection of the U.S. Library of Congress would take up about ten terabytes.

Landesman said she wasn’t actually given access to all that data until the fall of 2015 — seven months after being hired as Norse’s chief data scientist — and that when she got the chance to dig into it, she was disappointed: The information appeared to be little more than what one might glean from a Web server log — albeit millions of them around the world.

“The data isn’t great, and it’s pretty much the same thing as if you looked at Web server logs that had automated crawlers and scanning tools hitting it constantly,” Landesman said in an interview with KrebsOnSecurity. “But if you know how to look at it and bring in a bunch of third-party data and tools, the data is not without its merits, if not just based on the sheer size of it.”

Landesman and other current and former Norse employees said very few people at the company were permitted to see how Norse collected its sensor data, and that Norse founder Stiansen jealously guarded access to the back-end systems that gathered the information.

“With this latest round of layoffs, if Tommy got hit by a bus tomorrow I don’t think there would be a single person in the company left who understands how the whole thing works,” said one former employee at Norse who spoke on condition of anonymity.

SHOW ME THE DATA

Stuart McClure, president and founder of the cybersecurity firm Cylance, said he found out just how reluctant Stiansen could be to share Norse data when he visited Stiansen and the company’s offices in Northern California in late 2014. McClure said he went there to discuss collaborating with Norse on two upcoming reports: One examining Iran’s cyber warfare capabilities, and another about exactly who was responsible for the massive Nov. 2014 cyber attack on Sony Pictures Entertainment.

The FBI had already attributed the attack to North Korean hackers. But McClure was intrigued after Stiansen confidentially shared that Norse had reached a vastly different conclusion than the FBI: Norse had data suggesting the attack on Sony was the work of disgruntled former employees.

McClure said he recalls listening to Stiansen ramble on for hours about Norse’s suspicions and simultaneously dodging direct questions about how it had reached the conclusion that the Sony attack was an inside job.

“I just kept going back to them and said, ‘Tommy, show me the data.’ We wanted to work with them, but when they couldn’t or wouldn’t produce any data or facts to substantiate their work, we couldn’t proceed.”

After that experience, McClure said he decided not to work with Norse on either the Sony report or the Iran investigation. Cylance ended up releasing its own report on Iran’s cyber capabilities; that analysis — dubbed “Operation Cleaver” (PDF) — was later tacitly acknowledged in a confidential report by the FBI.

Conversely, Norse’s take on Iran’s cyber prowess (PDF) was trounced by critics as a deeply biased, headline-grabbing report. It came near the height of international negotiations over lifting nuclear sanctions against Iran, and Norse had teamed up with the American Enterprise Institute, a conservative think tank that has traditionally taken a hard line against threats or potential threats to the United States.

In its report, Norse said it saw a half-million attacks on industrial control systems by Iran in the previous 24 months — a 115 percent increase in attacks. But in a scathing analysis of Norse’s findings, critical infrastructure security expert Robert M. Lee said Norse’s claim of industrial control systems being attacked and implying it was definitively the Iranian government was disingenuous at best. Lee said he obtained an advanced copy of an earlier version of the report that was shared with unclassified government and private industry channels, and that the data in the report simply did not support its conclusions.

“The systems in question are fake systems….and the data obtained cannot be accurately used for attribution,” Lee wrote of Norse’s sensor network. “In essence, Norse identified scans from Iranian Internet locations against fake systems and announced them as attacks on industrial control systems by a foreign government. The Norse report’s claims of attacks on industrial control systems is wrong. The data is misleading. The attention it gained is damaging. And even though a real threat is identified it is done in a way that only damages national cybersecurity.”

FROM SMOKES TO SMOKE & MIRRORS?

KrebsOnSecurity interviewed almost a dozen current and former employees at Norse, as well as several outside investors who said they considered buying the firm. None but Landesman would speak on the record. Most said Norse’s data — the core of its offering — was solid, if prematurely marketed as a way to help banks and others detect and deflect cyber attacks.

“I think they just went to market with this a couple of years too soon,” said one former Norse employee who left on his own a few months prior to the January 2016 layoffs, in part because of concerns about the validity of the data that the company was using to justify some of its public threat reports. “It wasn’t all there, and I worried that they were finding what they wanted to find in the data. If you think about the network they built, that’s a lot of power.”

On Jan. 4, 2016, Landesman learned she and roughly two dozen other colleagues at Norse were being let go. The data scientist said she vetted Norse’s founders prior to joining the firm, but that it wasn’t until she was fired at the beginning of 2016 that she started doing deeper research into the company’s founders.

“I realized that, oh crap, I think this is a scam,” Landesman said. “They’re trying to draw this out and tap into whatever the buzzwords du jour there are, and have a product that’s going to meet that and suck in new investors.”

Calls to Norse investor KPMG International went unreturned. An outside PR firm for KPMG listed on the press release about the original $11.4 million funding for Norse referred my inquiry to a woman running an outside PR firm for Norse, who declined to talk on the record because she said she wasn’t sure whether her firm was still representing the tech company.

“These shell companies formed by [the company’s founders] bilked investors,” Landesman said. “Had anyone gone and investigated any of these partnerships they were espousing as being the next big thing, they would have realized this was all smoke and mirrors.”

Planet Linux AustraliaLev Lafayette: NFS Cluster Woes

A far too venerable cluster (Scientific Linux release 6.2, 2.6.32 kernel, Opteron 6212 processors) with more than 800 user accounts makes use of NFSv4 to access storage directories. It is a typical architecture, with a management and login node and a number of compute nodes. The directory /usr/local is on the management node and mounted across to the login and compute nodes. User and project directories are distributed across two storage arrays, appropriately named storage1 and storage2.


[root@edward-m ~]# cat /etc/fstab

read more
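As a purely illustrative sketch of the layout described above (the hostname, export path and mount options are assumptions, not the cluster's actual configuration):

    # One-off NFSv4 mount of /usr/local from the management node on a login or
    # compute node (hostname is hypothetical):
    mount -t nfs4 mgmt-node:/usr/local /usr/local

    # The equivalent persistent /etc/fstab entry would look something like:
    #   mgmt-node:/usr/local   /usr/local   nfs4   defaults,hard   0 0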

Planet Linux AustraliaOpenSTEM: Curriculum Samples

Responding to popular request, we have now made our FREE sample resource PDFs available directly from a page: Curriculum Samples in our new Curriculum overview section.

This means that you don’t have to login or fill in any details to download these files.

We make these free watermarked samples available at no charge so that you are able to see and assess the quality of our materials.

Planet Linux AustraliaTridge on UAVs: APM:Plane 3.5.0 released

The ArduPilot development team is proud to announce the release of the 3.5.0 version of APM:Plane. There have only been a few small changes since the 3.5.0beta1 release 3 weeks ago.

The key changes since 3.5.0beta1 are:

  • addition of better camera trigger logging
  • fixes to override handling (for users of the OVERRIDE_CHAN parameter)
  • fixed a pulse glitch on startup on PX4

See the full notes below for details on the camera trigger changes.

For completeness, here are the full release notes. Note that this is mostly the same as the 3.5.0beta1 release notes, with a few small changes noted above.

The biggest changes in this release are:

  • switch to new EKF2 kalman filter for attitude and position estimation
  • added support for parachutes
  • added support for QuadPlanes
  • support for 4 new flight boards, the QualComm Flight, the BHAT, the PXFmini and the Pixracer
  • support for arming on moving platforms
  • support for better camera trigger logging

New Kalman Filter


The 3.4 release series was the first where APM:Plane used a Kalman Filter by default for attitude and position estimation. It works very well, but Paul Riseborough has been working hard recently on a new EKF variant which fixes many issues seen with the old estimator. The key improvements are:

  • support for separate filters on each IMU for multi-IMU boards (such as the Pixhawk), giving a high degree of redundancy
  • much better handling of gyro drift estimation, especially on startup
  • much faster recovery from attitude estimation errors

After extensive testing of the new EKF code we decided to make it the default for this release. You can still use the old EKF if you want to by setting AHRS_EKF_TYPE to 1, although it is recommended that the new EKF be used for all aircraft.

Parachute Support

This is the first release with support for parachute landings on plane. The configuration and use of a parachute is the same as the existing copter parachute support. See http://copter.ardupilot.com/wiki/parachute/

Note that parachute support is considered experimental in planes.

QuadPlane Support

This release includes support for hybrid plane/multi-rotors called QuadPlanes. More details are available in this blog post: http://diydrones.com/profiles/blogs/quadplane-support-in-apm-plane-...

Support for 4 new Flight Boards

The porting of ArduPilot to more flight boards continues, with support for 4 new flight boards in this release. They are:

  • the QualComm Flight
  • the BHAT
  • the PXFmini
  • the Pixracer

More information about the list of supported boards is available here: http://dev.ardupilot.com/wiki/supported-autopilot-controller-boards/

I think the Pixracer is a particularly interesting board as it is so small, and it will allow some very small planes to be fitted with an ArduPilot-based autopilot. It is really aimed at racing quads, but it works well on small planes too, as long as you don't need more than 6 servos. Many thanks to AUAV for providing development Pixracer boards for testing.

Startup on a moving platform

One of the benefits of the new EKF2 estimator is that it allows for rapid estimation of gyro offset without doing a gyro calibration on startup. This makes it possible to startup and arm on a moving platform by setting the INS_GYR_CAL parameter to zero (to disable gyro calibration on boot). This should be a big help when flying off boats.
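For instance, a hedged sketch of changing the AHRS_EKF_TYPE and INS_GYR_CAL parameters mentioned above from a MAVProxy session (the serial device and baud rate are assumptions; other ground stations expose the same parameters):

    # Connect MAVProxy to the autopilot (device path and baud rate assumed):
    mavproxy.py --master=/dev/ttyACM0 --baudrate 115200

    # Then, at the MAVProxy prompt:
    #   param set AHRS_EKF_TYPE 1    # fall back to the old EKF if desired
    #   param set INS_GYR_CAL 0      # skip gyro calibration at boot, for
    #                                # arming on a moving platform
    #   param show AHRS_EKF_TYPE     # confirm the change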

Improved Camera Trigger Logging

This release adds new CAM_FEEDBACK_PIN and CAM_FEEDBACK_POL parameters. These add support for separate CAM and TRIG log messages, where TRIG is logged when the camera is triggered and the CAM message is logged when an external pin indicates the camera has actually fired. This pin is typically based on the flash hotshoe of a camera and provides a way to log the exact time of camera triggering more accurately. Many thanks to Dario Andres and Jaime Machuca for their work on this feature.

Lots more!

That is just a taste of all of the improvements in this release. In total the release includes over 1500 patches. Some of the other more significant changes include:

  • RPM logging
  • new waf build system
  • new async accel calibrator
  • SITL support for quadplanes
  • improved land approach logic
  • better rangefinder power control
  • ADSB adapter support
  • dataflash over mavlink support
  • settable main loop rate
  • hideable parameters
  • improved crash detection logic
  • added optional smooth speed weighting for landing
  • improved logging for dual-GPS setups
  • improvements to multiple RTK GPS drivers
  • numerous HAL_Linux improvements
  • improved logging of CAM messages
  • added support for IMU heaters in HAL_Linux
  • support for RCInput over UDP in HAL_Linux
  • improved EKF startup checks for GPS accuracy
  • added raw IMU logging for all platforms
  • added BRD_CAN_ENABLE parameter
  • support FlightGear visualisation in SITL
  • configurable RGB LED brightness
  • improvements to the OVERRIDE_CHAN handling, fixing a race condition
  • added OVERRIDE_SAFETY parameter


Many thanks to everyone who contributed to this release! The development team is growing at a fast pace, with 57 people contributing changes over this release cycle.

I'd like to make special mention of Tom Pittenger and Michael du Breuil who have been doing extensive testing of the plane development code, and also contributing a great deal of their own improvements. Thanks!

,

CryptogramFriday Squid Blogging: Polynesian Squid Hook

From 1909, for squid fishing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Rondam RamblingsWell, this is stupid

Kenya is about to destroy 270 million dollars worth of ivory.  That is an incredibly stupid thing to do.  They should instead sell the ivory and use the proceeds to fund anti-poaching efforts.  By destroying the ivory, they are reducing the supply, which drives up the price, which encourages more poaching by making it more lucrative. I fully understand the visceral revulsion at state-sponsored

CryptogramIntegrity and Availability Threats

Cyberthreats are changing. We're worried about hackers crashing airplanes by hacking into computer networks. We're worried about hackers remotely disabling cars. We're worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It's been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We're heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can't start. Or worse, disable the entire system so that no one's car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and on the Internet. It's actually more complicated.

What I'm calling the "World Sized Web" is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems being always on all the time. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can't be subverted to do real damage.

It's one thing if your smart door lock can be eavesdropped to know who is home. It's another thing entirely if it can be hacked to prevent you from opening your door or allowing a burglar to open the door.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage -- once someone dies from a hacked car or medical device, or an entire city's 911 services go down for a day -- there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won't be well-thought-out, and they probably won't mitigate the actual risks. If we're lucky, they won't cause even more problems.

I worry that we're rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we've tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That's going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Worse Than FailureError'd: Who Needs an Interface when you have Tape?

"We spent a good deal of time developing our customer information display software, to make it easy for our users to update the daily menu screen outside our restaurant," Steve M. wrote, "Someone, however, noticed that the price of the Fish Steak Crunch was wrong, and decided to take a more hardware-based approach to doing the update."

 

"I was expecting an in depth article about the pitfalls of MongoDB, but it seems the author is still failing," writes Gary S.

 

"Speedtest.net's new non-flash site isn't just great, it's off the charts!" wrote Kevin C.

 

Wouter wrote, "If anybody needs me, I'll be entering my PIN for the next few days."

 

"If the formula wasn't broken before, it sure is now," writes Mike.

 

"Maybe this is just a clever anti-VLC ad?" writes Ben M.

 

Adam wrote, "Received this status message recently from switch monitoring software I did."

 


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Linux Conference Picnic, February 1, 2016

Feb 1 2016 19:30
Feb 1 2016 20:30
Location: 

Frank Moore Reserve, Western Beach Rd, Geelong

As the annual Linux Conference is being held in Geelong this year for the first time ever we will hold a second Penguin Picnic at the Frank Moore Reserve on Western Beach Rd, Geelong from 19:30 on Monday 1 February 2016 in place of our usual February main meeting.

LUV will be providing various items for the BBQ, e.g. plates; sausages to cater for carnivores, pouletvores, piscevores and herbivores; bread, onions and sauces; and fruit. Please bring salads and drinks.

Main meetings will resume on Tuesday 1 March.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and Infoxchange for the use of their premises.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

February 1, 2016 - 19:30

Planet Linux AustraliaBinh Nguyen: Conspiracy Theories, Understanding Propaganda, and More

- Every time you go through history, things are a lot more complex than they actually seem. One thing that is fundamentally clear, though, is that if you meet the average Australian, Japanese, Russian, American, Chinese, Korean, Saudi Arabian, Iranian, etc... person, they couldn't care less about a lot of the things that the government seems to want. However, one thing is obvious. It's the feeling of

,

Rondam RamblingsUpgrade or die: Apple's diabolical re-invention of the version ratchet

Now that I'm done with my travel tales I can go back to ranting about geeky things.  Apple just announced an update to an update to OS X Snow Leopard.  It's not that Snow Leopard is suddenly being supported again.  It's now fully five versions behind the current state of the art.  But there are some die-hards who are still running it (I'm actually one of them) and this update is necessary to

Krebs on SecurityFTC: Tax Fraud Behind 47% Spike in ID Theft

The U.S. Federal Trade Commission (FTC) today said it tracked a nearly 50 percent increase in identity theft complaints in 2015, and that by far the biggest contributor to that spike was tax refund fraud. The announcement coincided with the debut of a beefed up FTC Web site aimed at making it easier for consumers to report and recover from all forms of ID theft.

In kicking off “Tax Identity Theft Awareness Week,” FTC released new stats showing that the agency received more than 490,000 identity theft complaints last year, a 47 percent increase over 2014. In a conference call with the news media, FTC Chairwoman Edith Ramirez called tax refund fraud “the largest and fastest growing ID theft category” that the commission tracks.

Tax refund fraud contributed mightily to a big spike in ID theft complaints to the FTC in 2015. Image: FTC

Those numbers roughly coincide with data released by the Internal Revenue Service (IRS), which also shows a major increase in tax-related identity theft in 2015.

Incidence of tax-related ID theft as of Sept. 2015. Source: IRS.

Ramirez was speaking to reporters to get the word out about the agency’s new and improved online resource, identitytheft.gov, which aims to streamline the process of reporting various forms of identity theft to the FTC, the IRS, the credit bureaus and to state and local officials.

“The upgraded site, which is mobile and tablet accessible, offers an array of easy-to-use tools, that enables identity theft victims to create the documents they need to alert police, the main credit bureaus and the IRS among others,” Ramirez said. “Identity theft victims can now go online and get a free, personalized identity theft recovery plan.”

Ramirez added that the agency’s site does not collect sensitive data — such as drivers license or Social Security numbers. The areas where that information is required are left blank in the forms that get produced when consumers finish stepping through the process of filing an ID theft complaint (consumers are instructed to “fill these items in by hand, after you print it out”).

The FTC chief also said the agency is working with the credit bureaus to further streamline the process of reporting fraud. She declined to be specific about what that might entail, but the new and improved identitytheft.gov site is still far from automated. For example, the “recovery plan” produced when consumers file a report merely lists the phone numbers and includes Web site links for the major credit bureaus that consumers can use to place fraud alerts or file a security freeze.

The “My Recovery Plan” produced when I filed a test report claiming the worst possible scenario of ID theft that I could think up. The FTC kindly requests that consumers not file false reports (I had their PR person remove this entry after filing it).

Nevertheless, I was encouraged to see the FTC urging consumers to request a security freeze on their credit file, even if this was the last option listed on the recovery plan that I was issued and the agency’s site appears to do little to help consumers actually file security freezes.

I’m also glad to see the Commission’s site employ multi-factor authentication for consumers who wish to receive a recovery plan in addition to filing an ID theft report with the FTC. Those who request a plan are asked to provide an email address, pick a complex password, and input a one-time code that is sent via text message or automated phone call.

ALERTS VS. FREEZES

Many people do not understand the difference in protection between a fraud alert and a credit freeze. A fraud alert is free, lasts for 90 days, and is supposed to require potential creditors to contact you and obtain your permission before opening new lines of credit in your name. Applicants merely need to file a fraud alert (also called a “security alert”) with one of the credit bureaus (Equifax, Experian or Trans Union). Whichever one you file with is required by law to alert the other two bureaus as well.

There is actually a fourth credit bureau that you should alert: Innovis. This bureau follows the same rules as the big three, and you may file a fraud alert with them at this link.

Although fraud alerts only last 90 days, you may renew them as often as you like (a recurring calendar entry can help with this task). Consumers who can demonstrate that they are victims or are likely to be victims of identity theft can apply for a long-term, “extended fraud alert” that lasts up to 7 years (a police report and other documentation may be required).

The problem with fraud alerts is that creditors are encouraged but not required to check for them. The only step that will stop fraudsters from being granted new lines of credit in your name is the security freeze. For more information on what’s involved in obtaining a security freeze and why it beats a fraud alert and/or credit monitoring services, see How I Learned to Stop Worrying and Embrace the Security Freeze. Parents or guardians who are interested in freezing the credit files of their kids or dependents should check out last week’s story, The Lowdown on Freezing Your Kid’s Credit.

TAX REFUND FRAUD

Tax refund fraud occurs when criminals use your personal information and file a tax refund request with the IRS in your name. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. As I wrote in my recent column, Don’t Be a Victim of Tax Refund Fraud in 2016, even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

Your best defense against tax refund fraud? File your taxes as soon as possible. Unfortunately, many companies are only now starting the process of mailing out W2 forms that taxpayers need to complete their filings, while fraudsters are already at work. Also, if you use online tax preparation services, please pick a strong password and do not use a password that you also use at another site or service.

If you go to file your taxes this year and receive a rejection notice stating that your return has already been filed, have a look at my primer, What Tax Fraud Victims Can Do. Oh, and consider filing an identity theft report at IdentityTheft.gov.

It’s important to note that as much as I advocate everyone freeze their credit files, a freeze will not prevent thieves from committing tax refund fraud in your name. Also, freezes may do nothing to stop thieves from perpetrating a variety of other crimes in your name, including providing your identity information to the police in the event of their arrest, or using your information to obtain medical services.

That said, a credit freeze may actually help tax refund fraud victims avoid being victimized two years in a row. First, a little background: The IRS has responded to the problem of tax ID theft partly by mailing some 2.7 million tax ID theft victims Identity Protection PINs (IP PINs) that must be supplied on the following year’s tax application before the IRS will accept the return.

The only problem with this approach is that the IRS allows IP PIN recipients to retrieve their PIN via the agency’s Web site, after supplying the answers to so-called knowledge-based authentication (KBA) questions supplied by the credit bureaus.

These KBA questions — which involve four multiple choice, “out of wallet” questions such as previous address, loan amounts and dates — can be successfully enumerated with random guessing.  In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.

If any readers here doubt how easy it is to buy personal data (SSNs, dates of birth, etc.) on just about anyone, check out the story I wrote in December 2014, wherein I was able to find the name, address, Social Security number, previous address and phone number on all current members of the U.S. Senate Commerce Committee. This information is no longer secret (nor are the answers to KBA-based questions), and we are all made vulnerable to identity theft as long as institutions continue to rely on static information as authenticators.

So, how does a security freeze help tax fraud victims avoid becoming victims two years running? A freeze prevents the IRS from being able to ask those KBA questions that are key to retrieving one’s IP PIN via the agency’s Web site. Last year, the IRS briefly removed the ability for tax fraud victims to retrieve their IP PINs via the site, but it has since reinstated the feature, apparently ignoring its auditor’s advice to re-enable it only after implementing some type of two-factor authentication.

Helpfully, though, the IRS does now allow taxpayers to lock their online account, effectively requiring all future correspondence to be conducted via snail mail. You do have an account at irs.gov, don’t you? If not, it might be a good idea to create one, even if you don’t think you’ll ever use it. That goes ditto for the Social Security Administration, by the way.

Planet Linux AustraliaLinux Australia News: Linux Australia 2015 AGM Minutes

Mon, 2015-01-12 17:39 - 19:01

Minutes of Linux Australia
Annual General Meeting 2015

University of Auckland, Auckland, New Zealand
Monday, 12th January 2015, 1739hrs - 1901hrs, Room OGGB4

Meeting opened at 1739hrs NZST by Mr JOSHUA HESKETH
Minutes were taken by Ms KATHY REID

1. President’s welcome
MR JOSHUA HESKETH, President

JOSHUA HESKETH invited participants to review the Minutes and Office Bearers’ Reports
JOSHUA HESKETH invited participants to register on the registration sheets
JOSHUA HESKETH thanked participants for attending the AGM.

Note: Reports were located at presenting lectern of the room

2. MOTION by Mr JOSHUA HESKETH that the minutes of the Annual General Meeting 2014 of Linux Australia be accepted as complete and accurate.
Refer: http://linux.org.au/meeting/2014-01-08

SECONDED by CHRISTOPHER NEUGEBAUER
CARRIED with seven abstentions

3. To receive the REPORTS of activities of the preceding year from OFFICE BEARERS

MR JOSHUA HESKETH - President
MR FRANÇOIS MARIER - Treasurer
Includes presentation of the Auditor’s Report
MS KATHERINE (KATHY) REID - Secretary

Please refer to reports

President’s report
http://lists.linux.org.au/pipermail/linux-aus/2015-January/021983.html

Presentation of the President’s Report - Mr JOSHUA HESKETH

Mr HESKETH noted that there was significant growth in events that ran locally, with Linux Australia running 8 events in total. He went on to identify that Linux Australia had faced some financial perils in 2014, including making a small loss, largely due to LCA2014 not meeting sales expectations. Mr HESKETH referred community members to the broader discussion around this issue which was posted to the linux-aus list. Mr HESKETH noted that Treasurer, MR FRANÇOIS MARIER had reduced risks and overheads, thus minimising the overall financial loss.

http://lists.linux.org.au/pipermail/announce/2014-June/000181.html

Mr HESKETH noted that the Council and Community can be very proud of the submission made to the Department of Foreign Affairs and Trade (DFAT) on the Transpacific Partnership Agreement (TPP). Mr HESKETH noted his special thanks to Mr JOSH STEWART (Ordinary Council Member, Linux Australia) and to Mr LEV LAFAYETTE (President, Linux Users’ Victoria) for their exemplary efforts here.

Mr HESKETH noted that although it has been a somewhat turbulent year for Linux Australia, he was very happy with how Linux Australia has run. He indicated that the organisation made sure to operate within their values, and that budgetary choices reflected this.

Mr HESKETH referred the community to the SubCommittee page for further information on subcommittee activities, and noted that the non-conference SubCommittees were as per 2013. Mr HESKETH thanked SubCommittees for all their hard work during the year.

Mr HESKETH noted the new Subcommittee policy and structure, and thanked Christopher Neugebauer in particular for all his hard work. Mr HESKETH explained that the new policy delivers increased oversight. However, this has been difficult to implement practically, and aligning events to the new Subcommittee policy remains a challenge for Council 2015. Mr HESKETH invited attendees to read the Subcommittee Policy.

http://linux.org.au/sub-committees

Mr HESKETH noted that Linux Australia had supported 4 grants this year. He went on to note that the budget for Linux Australia ran from October to September, while Council runs from January to January.

Mr HESKETH explained that 2014 took linux.conf.au bids for the event two years out and that results of this process will be announced during closing.

Mr HESKETH noted the new mailing list for policies and invited participants to join the policies mailing list if desired.

SEE: http://lists.linux.org.au/listinfo/policies

Mr HESKETH noted that Linux Australia had kept members informed through usual channels, including the Announce and Linux-Aus mailing lists, and had increased the use of Twitter, and that the organisation’s key outreach piece this year was the TPP submission.

Mr HESKETH covered financials briefly, and noted that the Council had not hindered its operations as a result of the loss. He further noted that Council had planned for a long time to hold sufficient cash reserve to allow for the conduct of an entire linux.conf.au based on registration income alone. Mr HESKETH noted the foresight of previous Councils for Linux Australia currently being in this fortunate financial position.

Mr HESKETH read an excerpt from Linux Australia’s TPP submission to summarise the organisation’s position;

“Linux Australia and Linux Users of Victoria strongly urges that the TPP and future trade agreements be negotiated in an open and transparent manner. Leaked chapters of the TPP have raised serious concerns that threaten Australian growth and innovation, but that have been held behind closed doors with limited consultation”

Mr HESKETH noted that the current membership platform is frail, and that Linux Australia will be working to enhance it.

Mr HESKETH noted that Linux Australia had a number of challenges facing the organisation in the near future. The remit of the organisation has changed. LA is a community - and Mr HESKETH noted that it is the members that do the hard work of the organisation. He went on to note that Linux and open source are generally well accepted as technologies, but the underpinning freedoms around these technologies are less known. He questioned how Linux Australia’s values align with the popularity of web and mobile platforms, and how we as a community felt about companies collecting and retaining large amounts of data about us. Mr HESKETH underscored the importance of these questions, and encouraged members to debate these issues. He questioned how Linux Australia ensures these values are upheld in industry, and challenged members to be more involved in these issues. He reinforced that the organisation only achieves what its members do - and that it is through the hard work of organisers, volunteers and subcommittee members that outcomes are achieved. Mr HESKETH called on members to be involved in these things and strongly encouraged members to put in conference bids, or to volunteer in areas where they feel passionate.

Mr HESKETH concluded by stating that Linux Australia has some big questions ahead, and they will not be easy to answer. These issues need to be addressed collectively.

Mr HESKETH invited questions from the floor.

Ms DONNA BENJAMIN - You’ve raised the challenge of big and unanswered questions and invited members to contribute to those questions. How should we do that?

Mr HESKETH responded by inviting attendees to the Birds of a Feather (BoF) session at 12.20 lunchtime on Thursday for discussions around LA’s future.

Mr HESKETH invited Mr FRANÇOIS MARIER to the floor to present the Treasurer’s report

Presentation of the Treasurer’s Report - Mr FRANÇOIS MARIER

Mr MARIER outlined that the most significant issue that happened with Linux Australia accounts this year was the linux.conf.au 2014 loss. This was identified halfway through the year, and the budget was scaled back to accommodate the loss. Overall, Linux Australia ended up with a loss of $AUD 10k as a whole, which is not bad considering that linux.conf.au lost $AUD 39k overall. Mr MARIER outlined that budget expenses and costs were cut that would have a minimal impact on operations - such as reducing the grants budget.

Mr MARIER explained that one of the things that Linux Australia used to do with conferences was not charge at all for the provision of services such as administration, bank accounts, insurance etc. Therefore, if a conference broke even, it represented a loss to Linux Australia because the conference consumed resources from Linux Australia centrally. Therefore, a rate of $AUD 11 per person, or 2% of sales, is now expected to be budgeted in for conferences and the conference should break even at this amount.

Mr MARIER reported that existing financial relationships remain generally unchanged. A new structure is in place for bank accounts in New Zealand to better facilitate New Zealand based activities.

Mr MARIER talked through the Budget and Revised Budget.

He first noted that the budget covered the financial year October 1 2013, to September 30, 2014. He went on to note that conferences had returned a large profit, including DrupalSouth Wellington and WordCamp Sydney. He further noted that some conferences fell across two financial years however their income is only listed against the budget year they operated in. The attached Profit and Loss statements were, however, limited to just the financial year.

Mr MARIER talked through the Revised Budget and noted the smaller donations and grants budget, with minimal reductions in other line items. He summarised by articulating that Linux Australia plans to return to profit in 2015, that is, to return to the financial position prior to 2014.

Mr MARIER presented recommendations for future Treasurers such as cutting some unnecessary expenses and tidying up some accounts.

Mr MARIER thanked all conference Treasurers for their hard work in reconciling expenses and particularly noted Mr RUSSELL STUART - Treasurer of Pycon AU for his attention to detail and conscientiousness. Mr MARIER also noted the assistance of Mr JOSHUA HESKETH.

Mr MARIER invited questions from the floor.

Ms ANITA KUNO - Two conferences did spectacularly well in the report - Pycon AU and DrupalSouth Wellington. How many people came from outside Australia and New Zealand and from within Australia and New Zealand?

Mr CHRISTOPHER NEUGEBAUER, Chair of Pycon AU, responded that of the 380 attendees of Pycon AU approximately 5-10% were from outside Australia.

Ms DONNA BENJAMIN, Chair of Drupal South, responded that of the around 200 attendees and volunteers, many Australians went to New Zealand and a small percentage from the rest of the world attended. She further noted that these were primarily regional events, but do attract international speakers.

Mr CHRISTOPHER NEUGEBAUER further noted that Pycon AU attracts fewer attendees as there is also a Pycon NZ event.

Ms ANITA KUNO - I experienced a great deal of difficulty paying for the conference, as my employer issues American Express (AMEX), and the only way I had to pay for the conference was by AMEX. Is there any acknowledgement of payment selection or options to make things easier for international attendees to pay for linux.conf.au?

Mr MARIER acknowledged the difficulty Ms KUNO faced and noted that American Express is difficult to facilitate and attracts higher fees. He indicated that conferences should nonetheless accept AMEX. Mr HESKETH clarified that Linux Australia can accept AMEX in AUD, but not yet in NZD, as this was not arranged in time; it will be arranged in the future. Ms KUNO encouraged Linux Australia to make it easier for international delegates to pay for linux.conf.au. Mr TIM SERONG noted that the higher fees represent a problem, as they are passed on to the card holder. Mr MARIER noted that the higher fees were not a significant problem for Linux Australia.

Mr MICHAEL CORDOVER - are the conference returns for 2015 based on the $11 per person rate?

Mr MARIER responded that the numbers won’t change very much as the events have already occurred. For events in the future, they are estimates. With estimates we need to be careful, as we don’t want to put too much pressure on organisers to return a profit. For Pycon, it will be the same city run by the same people, so the estimate is likely to be accurate, but Drupal South for instance will be in a different city so returns are likely to vary more.

Mr HESKETH added that the reported returns are before the $11 per person conference contribution is applied, noting that he wanted conference organisers to consider conference purchases versus how the profits could be used to benefit Linux Australia as a whole.
Ms DONNA BENJAMIN - Proffered a suggestion for future conference reporting: it would be great to have a single sheet with all the expenses and income for the conference, as well as the number of attendees. This would help articulate and compare the events, as the number of attendees is not reported in the financials, and one could then get a sense of the margin the events are returning. She further noted the historical relationship between linux.conf.au and Linux Australia, where there had been an intention to spend the conference surplus and return nothing to Linux Australia. She noted the greater maturity now seen in this relationship, and that conferences are paying Linux Australia for a service. She suggested that the $11 per head cost be increased to $15 per head.

Mr HESKETH responded by underlining that a 5 day event is different to a 2 day event. Regarding the relationship between Linux Australia and linux.conf.au, he made the point that both are part of the same broader community that upholds our values of open community and open technology, and that drawing a distinction is not really useful.

Mr TOMAS MILJENOVIC: Have all the transactions occurred for WordCamp Sydney?

Mr HESKETH noted that the conference had not closed its books yet, and he expected the large profit reported to decline slightly.

Mr HESKETH invited Ms KATHY REID to the floor to deliver the Secretary’s report.

NOTE: Minutes at this stage kindly taken by Ms DONNA BENJAMIN

Presentation of the Secretary’s report - Ms KATHY REID

Ms REID thanked attendees and the Council and noted that they were a very dedicated and professional team.

She went on to note that 2014 was a successful year for Council, with positive engagement with the community. She noted that Linux Australia had run very smoothly, and that the Council had reached quorum at all meetings in the past two years - a sign of a healthy organisation.

Ms REID further noted this would be her last term as Secretary due to her role in linux.conf.au 2016.

She noted that the membership system, MemberDB, had reached end of life and needed to be replaced. She noted that as at 29 November 2014 membership stood at 3207, representing a 4% increase on 2013.

Ms REID noted that Twitter, while not an open source platform, had been a useful channel for engaging and communicating with members. She walked through social media engagement figures to demonstrate the increase. Ms REID further noted that the newsletter initiative failed to gain traction, and that this was something next year’s Council may wish to look at.

Ms REID noted that linux.org.au is Linux Australia’s key digital property, with 4500 page views per month, and that the Jobs page is very popular, and that this may be worth focussing on in the future.

Ms REID noted that organisational correspondence included support requests, and enquiries from other organisations, and that 1 formal complaint was handled.

Ms REID invited questions from the floor, but none were received.

NOTE: Minutes continued from this point by Ms KATHY REID

MOTION by Mr JOSHUA HESKETH that the Auditor’s report is clear and accurate
SECONDED by Mr HUGH BLEMINGS
CARRIED with 12 abstentions

Mr MARIER and Mr HESKETH signed the Auditor’s Report

MOTION by Mr FRANÇOIS MARIER that the President’s report is correct
SECONDED by Ms KATHY REID
CARRIED with 4 abstentions

MOTION by Mr FRANÇOIS MARIER that the Treasurer's report is accepted
SECONDED Mr JOSHUA HESKETH
CARRIED with 1 abstention

MOTION by Mr FRANÇOIS MARIER that the Secretary’s report is accepted
SECONDED by Ms BIANCA GIBSON
CARRIED with 1 abstention

MOTION by Mr DAVID BELL that the actions of Council 2014 are endorsed by the membership
SECONDED by Mr PETER CHUBB
CARRIED with 2 abstentions

4. No agenda items were tabled with the Secretary prior to the AGM

5. To HEAR and RESPOND to questions from the floor

Mr JOSHUA HESKETH invited questions from the floor

Mr TIM SERONG - Given the mention of the jobs page being quite popular, would it be sensible to charge for job listings?

Mr JOSHUA HESKETH responded that he wasn’t sure whether it was worth the overhead of charging for postings, and also noted that it may be a disservice to members to exclude jobs they may be interested in.
Mr PAUL WAYPER noted that small companies and volunteer organisations may not have funds available for a jobs listing, and that the added complexity of funding arrangements may be a barrier to interesting jobs being posted.
Ms ANITA KUNO - Noted that she had not seen a job board around the conference. The floor responded that there was a Jobs Birds of a Feather (BoF).

Mr JOSHUA HESKETH responded with more information around the Jobs BoF.
Ms DONNA BENJAMIN picked up on the point made by Mr TIM SERONG and advised that jobs from Supporting Members of the Drupal Association are posted on jobs.drupal.org, and that Supporting Members get a number of free postings per year. She identified this as a mechanism for diversifying revenue. Ms BENJAMIN encouraged future Councils to strongly consider monetising the Jobs page given its popularity, while keeping it free to view.

The floor responded by wondering whether this initiative was making more than pocket money.

Ms BENJAMIN responded by advising that although there are costs with running it, the revenue is covering costs, and the revenue stream is growing over time.

Mr JOSHUA HESKETH advised that diversifying revenue was an attractive idea, and that this should be the key point for Council to take away. He distinguished the matter of Corporate Membership, noting that it was separate and not currently covered in LA’s constitution. He further noted that while the Jobs page gets a lot of views, the number of people listing jobs there is not large.

Mr PETER CHUBB further noted that only a small number of applicants came through the Jobs Listing.

Ms ANITA KUNO further suggested that it would be worthwhile for Council to do some further research on the Jobs page and whether Linux Australia could grow or improve the experience, as this may lead to an increase in future attendance.

Mr JOSHUA HESKETH noted this as a useful suggestion.
Mr ANDREW McDONNELL - noted that the GovHack/Adelaide Unleashed event occurred in July, which drew participation from the startup/webdev scene, many of whom had not heard of linux.conf.au, and wondered whether Linux Australia could sponsor the event again.

Mr JOSHUA HESKETH advised the floor that Linux Australia sponsored the event in 2013, and particularly underlined the advantage of companies getting to know the organisation.

Ms LANA BRINDLEY (referring to 2014 AGM Minutes) - What’s happening with the old videos on the mirror?

Mr JOSHUA HESKETH responded that there had been a lot of effort to track them down, and that some are still remaining. He noted that this is a large job to do. Mr STEVEN WALSH (Admin and Mirror Team) noted that the hard drive and videos are in different geographic locations, and they will eventually be reunited.

6. DECLARATION of Election and WELCOME of incoming Council

Mr HESKETH invited Mr JONATHAN OXER to the stage as Returning Officer to declare the results of the Council election. Mr HESKETH explained that the role of the Returning Officer is to scrutinise the conduct of the election to ensure the integrity of the process.

Mr OXER advised that there were three items to note with respect to the election:

The dates of the election, and the voting period, were changed to allow for the AGM to take place on Monday 12th January. The period had originally been designed to end Wednesday 14th January.

It was noted that MemberDB has some duplicate records. Mr OXER advised that he had examined the records and confirmed that a member with duplicate records had cast only one vote, and that he was satisfied there were no duplicate votes.

It was noted that the server which hosts MemberDB, and thus the election, was unavailable for a period of 4 hours, 4am-8am on a Saturday morning, and that this did not unduly affect the voting.

Mr MICHAEL STILL questioned what percentage of the voting hours this represented.
Ms KATHY REID took this as a question on notice, and it is addressed herein:

The voting period was 0000hrs on 18 December 2014 until 2359hrs on 11 January 2015, AEDST. This represents 24 days 23 hours and 59 minutes, or approximately 600 hours. The outage thus represented 0.67% of the allocated voting period.
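
For completeness, the quoted figures can be reproduced with a short calculation. The sketch below is illustrative only; the dates and the 4-hour outage are taken from the minutes above, and timezone handling is ignored.

#!/bin/bash
# Illustrative only: a rough check of the outage percentage quoted above.
# Dates are taken from the minutes; timezone handling is ignored.
start=$(date -u -d "2014-12-18 00:00" +%s)
end=$(date -u -d "2015-01-11 23:59" +%s)
outage_hours=4

awk -v start="$start" -v end="$end" -v outage="$outage_hours" 'BEGIN {
    hours = (end - start) / 3600            # approximately 600 hours
    printf "Voting period: %.0f hours\n", hours
    printf "Outage share:  %.2f%%\n", outage * 100 / hours
}'

This yields roughly 600 hours of voting and an outage share of about 0.67%, matching the figures above.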

Mr JOSHUA HESKETH, unopposed, was elected President for a third term
Mr JOSH STEWART was elected Vice-President, after serving two terms as OCM
Ms SAE RA GERMAINE, unopposed, was elected Secretary, after serving one term as OCM
Mr TONY BREEDS, unopposed, was elected Treasurer
Mr CHRISTOPHER NEUGEBAUER was elected Ordinary Committee Member (OCM) for a second term
Mr CRAIGE McWHIRTER was elected Ordinary Committee Member
Mr JAMES ISEPPI was elected Ordinary Committee Member

Mr STILL exclaimed to Mr BREEDS that he was a ‘sucker’.

Mr OXER welcomed the new Council Members.
Mr HESKETH thanked the outgoing Council Members, and welcomed the incoming Council.

Mr HESKETH noted that this will likely be his last year on Council, after three years as President and six in total on Council. He encouraged members to consider whether this is a role for them, and whether they would like to be involved. He stated that there is a degree of complacency in Linux Australia as a community, and that this is reflected in voting patterns.

Mr HESKETH took further questions from the floor.

Mr TOMAS MILJENOVIC asked whether Linux Australia was going to hold another Member Survey this year.

Mr HESKETH responded that the last Survey was at the end of 2013 and is still very relevant, and that it is currently too soon to do another one. He noted that it is certainly used for decision making. He further noted that the survey data is published on the website in an open data format, and encouraged members to play with the data.
Mr TIM SERONG noted that around 70 votes were cast, and that this is a small percentage of the membership - around 3200 members. What are the reasons behind this?

Mr HESKETH responded that MemberDB did not currently have expiry and renewal functions, and that Linux Australia memberships did not currently expire. He also noted that records of attendees who ticked the box on the linux.conf.au registration form to become members of Linux Australia had not yet been entered manually into the database.

Mr STILL noted that the voting numbers were about the same as 2014.

Ms REID clarified that they were about 70% of 2014.

Mr STILL noted the deficiencies of the MemberDB system.
Mr MILJENOVIC expressed that Membership Management should be a priority for Council, and asked whether there were any issues with renewal.

Mr HESKETH advised that there were some issues with renewal under the constitution, and agreed that once the membership platform was updated, Linux Australia should invite people who had not previously been members to join, request existing members to reaffirm their membership, and roll over from there.

Mr STILL suggested this action be delegated to a Subcommittee.

Ms BENJAMIN thanked Mr STILL for volunteering this action.

Mr HESKETH encouraged members to become involved, and advised Council would enable them.

Mr MARCO OSTINI noted that people do wish to be associated with Linux Australia, and that there would be more obvious signs if people did not want to be associated with the organisation.

MOTION by Mr CHRISTOPHER NEUGEBAUER that the Membership thanks the outgoing Council Members, being

Mr Hugh Blemings
Mr François Marier
Ms Kathy Reid

SECONDED by Ms Donna Benjamin
The motion was carried unanimously.

Mr HESKETH closed the meeting at 1901hrs.

END MINUTES

Record of attendance
Council in attendance

Mr Joshua Hesketh
Mr Hugh Blemings
Mr François Marier
Ms Kathy Reid
Mr Christopher Neugebauer
Apologies received by the Secretary

Ms Sae Ra Germaine
Mr Josh Stewart

Linux Australia members in attendance

Mr Paul Foxworthy
Mr Michael Cordover
Mr Mark Ellem
Mr Craige McWhirter
Mr Jonathan Oxer
Mr Daniel Bryan
Ms Bianca Gibson
Mr Matthew Cengia
Mr Mike Abrahall
Mr Julian DeMarchi
Mr Paul Dwerryhouse
Mr James Iseppi
Ms Donna Benjamin
Mr Andrew Bartlett
Mr Michael ‘LT’ Ellery
Mr Nick Bannon
Mr Peter Chubb
Mr Simon Lyall
Mr Andrew Cowie
Mr Brendan O’Dea
Mr Marco Ostini
Mr Tony Breeds
Mr Matthew Franklin
Ms Lana Brindley
Mr Russell Stuart
Mr Richard Jones
Mr John Dalton
Mr Michael Davies
Mr Matthew Oliver
Mr David Tulloh
Mr Julian Goodwin
Mr Neill Cox
Mr Anthony ‘AJ’ Towns
Mr Paul Del Fante
Mr Samuel Bishop
Mr Aaron Theodore
Mr Anibal Monsalve Salazar
Mr David Bell
Mr Paul Wayper
Mr Eyal Lebedinsky
Mr Steve Walsh
Mr Brian May
Mr Clinton Roy
Mr Ryan Stuart
Mr Jan Schmidt
Mr Jeremy Visser
Ms Katie McLaughlin
Mr Michael Still
Mr Tomas Miljenovic
Mr Simon Pamment
Mr Tim Serong
Mr Peter Lawler
Mr Brett James
Mr Jonathan Woithe
Ms Lin Nah
Mr Steve Ellis

Observers (those participants of the meeting who were not Members at the time, but who may have subsequently applied for, and been granted, membership)

Mr Gordon Condon
Mr Thomas Sprinkmeier
Mr Jose Rios
Ms Stephanie Huot
Mr Jaco (Surname not supplied)
Ms Alexandra Settle
Mr Darren Chon
Mr Anton ‘Red’ Steel
Ms Anne Jessel
Mr Jared Ring
Mr Kelven Wauchope
Ms Audrey Pulo
Ms Anita Kuno
Ms Sarah Allard
Mr Kevin Tran
Mr Jim Whittaker

Planet Linux Australia - LUV Book Review: The Linux Programming Interface

Reviewed by Major Keary

The book's sub-title, A Linux and UNIX System Programming Handbook, indicates a broader coverage than just Linux. If you happen to see a copy of The Linux Programming Interface on a bookshelf, take the time to read the first few pages of the preface. One paragraph is worth quoting here:

Although I focus on Linux, I give careful attention to standards and portability issues, and clearly distinguish the discussion of Linux-specific details from the discussion of features that are common to most UNIX implementations and standardized by POSIX and the Single UNIX Specification. Thus this book also provides a comprehensive description of the UNIX/POSIX programming interface and can be used by programmers writing applications targeted at other UNIX systems or intended to be portable across multiple systems.

A feature of the book that greatly impressed me is the binding. A reference such as this will be subject to much handling, something the publishers have anticipated: the robust hard cover should stand a lot of wear, and the lay-flat binding is superb. Much of the damage that causes books--especially those with a large page count--to fall apart results from book vs. reader wrestling bouts. All kinds of stratagems, some quite violent, are employed to make a book stay open where the reader chooses; but books often have a mind of their own. Even if described as having a lay-flat binding, a thick book may be reluctant to behave when one wants it to stay open at, say, page 10. The Linux Programming Interface is a pleasure to handle; the binding is a feature that will add greatly to its usefulness.

This is not a reference covering every Linux command; its focus—as the title makes clear—is on the Linux programming interface.

Michael Kerrisk: The Linux Programming Interface
ISBN 978-1-59327-220-3
Published by No Starch Press, 1506 pp., RRP AU$ 130

Woodslane. This title can be purchased from the Australian distributor at
www.computer.bookcentre.com.au
A discount can be redeemed by entering the following code at the checkout:
PLANET

Planet Linux Australia - LUV Book Review: Snip, Burn, Solder, Shred

Reviewed by Major Keary

The book has a sub-title, Seriously geeky stuff to make with your kids, which is not to be taken to mean 'turning your kids into geeky stuff', but rather entertaining them with simple but interesting projects in which they can participate. Apart from parents (and other relatives), the book is a useful resource for teachers.

The materials required are inexpensive, or even free; the tools are simple; the instructions are in clear language supported by illustrations; and there are plenty of safety warnings (did you know that the friction from a saw can heat PVC sufficiently to cause it to release phosgene?). Along the way there are gems of knowledge that, because of the informal environment created by the author, are likely to be remembered (such as, the didgeridoo is in a class of instruments called labrosones, and luthiers are craftsmen who build and repair stringed instruments).

There are twenty-four projects that range from the simple (such as making a teepee--tent in ozspeak), through a ticklebox (a step-up transformer that delivers a "safe 100 volt jolt"), to musical instruments (drums, guitars, a synthesizer, and an electronic didgeridoo), and on to locomotivated projects (including boomerangs, a water rocket, and a marshmallow muzzleloader). For each project there is a list of necessary tools, a list of materials, and clear "Building It" instructions (Step 1, Step 2, …) supported by illustrations and diagrams.

One of the early projects, Switchbox: A Soldering Project for Greenhorns, is a concise guide to soldering, circuit diagrams, and switches, and includes a brief tutorial on voltage, current, and resistance. At the end of the book an appendix, Electronics Components, Tools, and Skills, is a valuable, well written and well illustrated resource.

No Starch Press has been a leader in publishing innovation, such as the Manga Guides, and it is to be congratulated on Snip, Burn, Solder, Shred. The writing is well suited to a young audience. Good value. Recommended as a library acquisition, not just for crafts-and-hobbies shelving, but for anyone who wants a hands-on, plain-language introduction to electronics.

David Nelson: Snip, Burn, Solder, Shred
ISBN 978-1-59327-259-3
Published by No Starch Press, 337 pp., RRP AU$ 29.95

Woodslane. This title can be purchased from the Australian distributor at
www.computer.bookcentre.com.au
A discount can be redeemed by entering the following code at the checkout:
PLANET