Planet Russell

Cory Doctorow: The Haunted Mansion Ghost Post wins a Themed Entertainment Award!

When I wrote about the Haunted Mansion loot crates (“Ghost Post”) last March, what I couldn’t say was that I was the writer on the project, penning the radio scripts, newspapers, letters, and associated gubbins and scraps that went along with the three boxes of custom-made props and merch, tying them together into a series of puzzles that the boxes’ 999 owners solved together over the internet.


That’s because the work — like all the contracting I do for Imagineering — was covered by a legendarily exhaustive non-disclosure agreement that is standard for all Disney contractors.

But! The Ghost Post won a Thea Award for Outstanding Achievement last night from the Themed Entertainment Association, and Disney graciously listed all the contractors who worked on the project in the credits for it — including me!


This was one of the coolest projects I’ve ever worked on, thanks in large part to Sara Thacher, who led the team, but also the other creatives on it, including Wondermark’s David Malki, and the magic propmasters Derek DelGaudio and Michael Weber.

The boxes sold out in hours, and they go for silly money on eBay now. I’ll definitely treasure my set forever.

Here’s Laughing Place’s exhaustive guide to box #1.

Planet Debian: Andreas Metzler: balance sheet snowboarding season 2016/17

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but it soon melted away. There was no snow below 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down.

I had my first day on board on January 6 on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. The measuring station at Diedamskopf (1800m above sea level) topped out at slightly above 200cm on April 19. The last boarding day was yesterday (April 22) in Warth, with hero conditions.

I had a pre-opening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between pre-opening and the start of the season, it did not pay off. By the time I rode regularly I had forgotten almost everything I had learned at Carving-School.

Nevertheless it was a strong season, thanks to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in super-dense fog).

Anyway, here is the balance-sheet:

                         2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14 2014/15 2015/16 2016/17
number of (partial) days      25      17      29      37      30      30      25      23      30      24      17      30
Damüls                        10      10       5      10      16      23      10       4      29       9       4       4
Diedamskopf                   15       4      24      23      13       4      14      19       1      13      12      23
Warth/Schröcken                0       3       0       4       1       3       1       0       0       2       1       3
total meters of altitude  124634   74096  219936  226774  202089  203918  228588  203562  274706  224909  138037  269819
highscore                 10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m  13278m  11015m  12245m
# of runs                    309     189     503     551     462     449     516     468     597     530     354     634

TED: Experience the TED2017 conference in movie theaters, with other curious minds

You probably watch TED Talks on an assortment of small screens. But next week, you’re invited to experience them on the big screen.

As the TED2017 conference takes place in Vancouver, Canada, special sessions will be broadcast live in movie theaters across the United States and around the world. On April 24, the Opening Event will bring surprising life advice from Tim Ferriss, insights on AI from former world chess champion Garry Kasparov, an inspiring idea from Rabbi Jonathan Sacks and a thrilling performance from music video masters OK Go. On April 25, the Prize Event will include a Q&A with Serena Williams, Atul Gawande’s first TED Talk, a surprise guest who’ll boggle your mind and the reveal of 2017 TED Prize winner Raj Panjabi’s wish for the world.

These sessions will screen live in the US and Canada, and on time-shifted schedules internationally. Then, shortly after the conference closes, you’re invited to enjoy a Highlights Exclusive, featuring the very best talks from the week-long conference. This is truly your ticket to TED2017, so check local listings for times.

“I’m thrilled that we get to share the TED experience with a much wider audience,” said TED Curator Chris Anderson in a preview post. “Watching a full session is nothing like watching individual TED Talks. It’s dramatically more powerful, more immersive.”

This will not be the kind of movie-going experience where people stare ahead and chomp popcorn. The talks in these events will ignite your curiosity and flip your thinking. As you think through these big ideas and make connections between them, we expect conversations to spark between friends and strangers. Just as they do at the conference.

More than 100 organizers of TEDx events are hosting discussions before or after screenings. The organizers of TEDxEvansville went on a public access show last week to let people in Evansville, Indiana, know that they’re taking over a local multiplex for the Opening Event. Meanwhile, TEDxPeachtree in Atlanta, Georgia, is inviting past speakers for a meet-and-greet after the TED Prize Event screening, to keep the conversation going.

Below, see if your local TEDx chapter is hosting a cinema event near you:

Australia
TEDxBirrarungMarr in Melbourne
TEDxBrisbane
TEDxCanberra in Canberra, Australian Capital Territory
TEDxHelensvaleLibrary in Helensvale, Queensland
TEDxHunterTAFE in Newcastle, New South Wales
TEDxSydney

Belgium
TEDxAntwerp
TEDxWomenFlanders in Antwerp

Canada
TEDxCalgary
TEDxChathamKent in Chatham, Ontario
TEDxEastVan in Vancouver
TEDxMontreal
TEDxVancouver
TEDxVaughan in Vaughan, Ontario
TEDxWinnipeg
TEDxYYC in Calgary

Denmark
TEDxOdense in Odense, Syddanmark

Finland
TEDxOtaniemi in Espoo, Uusimaa

Germany
TEDxYouth@BBIS in Kleinmachnow, Brandenburg

Ireland
TEDxDrogheda in Drogheda, Louth

Netherlands
TEDxAlmere in Almere, Flevoland
TEDxAmsterdam

New Zealand
TEDxAuckland
TEDxRuakura in Hamilton, Waikato
TEDxTauranga in Tauranga, Bay of Plenty

Poland
TEDxWroclaw in Wrocław, Dolnośląskie

Romania
TEDxBucharest

United Kingdom
TEDxBrum in Birmingham
TEDxEton in Windsor
TEDxExeter in Devon
TEDxFidelityInternational in London
TEDxGoodenoughCollege in London
TEDxLondon
TEDxOxfordBrookesUniversity in Oxford
TEDxReading
TEDxUAL in London
TEDxUniversityofBrighton in Brighton
TEDxYouth@Manchester in Macclesfield, Cheshire East
TEDxYouth@RMS in Rickmansworth, Hertfordshire

United States
TEDxAustin in Austin, Texas
TEDxBalconesHeightsLive in Austin, Texas
TEDxBend in Bend, Oregon
TEDxBergenCommunityCollege in Paramus, New Jersey
TEDxBolingbrookWomen in Bolingbrook, Illinois
TEDxBrookfieldSalon in Brookfield, Connecticut
TEDxBuffalo in Buffalo, New York
TEDxBYU in Provo, Utah
TEDxCharleston in South Carolina
TEDxCincinnati in Ohio
TEDxCooperRiverWomen in Westampton, New Jersey
TEDxDeerPark in New York
TEDxDetroit in Michigan
TEDxDurham in North Carolina
TEDxEastMecklenburgHighSchool in Charlotte, North Carolina
TEDxEdgemontSchool in Scarsdale, New York
TEDxEdina in Edina, Minnesota
TEDxEvansville in Indiana
TEDxFashionInstituteofTechnology in New York City
TEDxFSCJ in Jacksonville, Florida
TEDxGatewayArch in St. Louis, Missouri
TEDxGreenville in South Carolina
TEDxHiltonHead in South Carolina
TEDxHuntsville in Alabama
TEDxIntuit in Mountain View, California
TEDxKids@ElCajon in El Cajon, California
TEDxLasVegas
TEDxLehighRiver in Bethlehem, Pennsylvania
TEDxLeonardtown in Leonardtown, Maryland
TEDxLongwood in Boston, Massachusetts
TEDxLosAlSchools in Los Alamitos, California
TEDxMalibu in California
TEDxMidAtlantic in Washington, D.C.
TEDxMinneapolis in Minnesota
TEDxMobile in Alabama
TEDxMtHood in Portland, Oregon
TEDxNaperville in Naperville, Illinois
TEDxNavesink in Asbury Park, New Jersey
TEDxOaklandUniversity in Rochester, Michigan
TEDxOaksChristianSchool in Westlake Village, California
TEDxOmaha in Nebraska
TEDxOnBoard in San Francisco, California
TEDxPeachtree in Atlanta, Georgia
TEDxPershingSq in Los Angeles, California
TEDxPittsburgh in Pennsylvania
TEDxPortland in Oregon
TEDxProvidence in Rhode Island
TEDxSalem in Oregon
TEDxSanDiego in California
TEDxSanJoseCA
TEDxSeattle in Washington
TEDxShelburneFalls in Shelburne Falls, Massachusetts
TEDxSoMa in San Francisco, California
TEDxTemecula in Temecula, California
TEDxTysons in Tysons, Virginia
TEDxUniversityofNevada in Reno, Nevada
TEDxVail in Denver, Colorado
TEDxWestBrowardHigh in Pembroke Pines, Florida
TEDxWilmington in Delaware
TEDxYouth@CEHS in Cape Elizabeth, Maine
TEDxYouth@Erie in Erie, Pennsylvania
TEDxYouth@MBJH in Mountain Brook, Alabama
TEDxYouth@MSJA in Flourtown, Pennsylvania

Purchase tickets to the TED Cinema Experience »



Planet Debian: Enrico Zini: Splitting a git-annex repository

I have a git annex repo for all my media that has grown to 57866 files and git operations are getting slow, especially on external spinning hard drives, so I decided to split it into separate repositories.

This is how I did it, with some help from #git-annex. Suppose the old big repo is at ~/oldrepo:

# Create a new repo for photos only
mkdir ~/photos
cd ~/photos
git init
git annex init laptop

# Hardlink all the annexed data from the old repo
cp -rl ~/oldrepo/.git/annex/objects .git/annex/

# Regenerate the git annex metadata
git annex fsck --fast

# Also split the repo on the usb key
cd /media/usbkey
git clone ~/photos
cd photos
git annex init usbkey
cp -rl ../oldrepo/.git/annex/objects .git/annex/
git annex fsck --fast

# Connect the annexes as remotes of each other
git remote add laptop ~/photos
cd ~/photos
git remote add usbkey /media/usbkey/photos

At this point, I went through all repos doing standard cleanup:

# Remove unneeded hard links
git annex unused
git annex dropunused --force 1-12345

# Sync
git annex sync

To make sure nothing is missing, I used git annex find --not --in=here to see if, for example, the usbkey that should have everything could be missing something.
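
For example, a quick check on the usbkey clone (a sketch; adjust paths and remote names to your layout):

# On the repo that is supposed to contain everything,
# list annexed files whose content is not present locally
cd /media/usbkey/photos
git annex find --not --in=here

Empty output means every file's content is actually present in that repo.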

TED: Paul Nicklen’s new images carry a dire warning about climate change

This post first appeared at BillMoyers.com.

In the summer of 2014, one of the world’s top nature photographers was on an expedition in the far north to document the changing Arctic. Paul Nicklen was sailing around Svalbard, an archipelago halfway between Scandinavia and the North Pole. The largely uninhabited land sees 24 hours of sunlight in the summer and 24 hours of darkness in the winter, and is closer to the top of the planet than most people ever get.

While on this journey, members of Nicklen’s crew had been searching for polar bears, without success. Eventually, Nicklen did spy some in the distance, unmoving. When he approached them, he discovered they were dead. “They had cubs who had starved to death — 3-year-old cubs, big cubs,” he recalls.

They couldn’t linger. “All of a sudden a blizzard came up, a massive storm — 80 knots of wind — so we had to go and hide,” Nicklen recalls. For protection, the best choice was to sail behind Nordaustlandet, a large, ice-covered island in the Svalbard archipelago. “And the temperature, even though we’re 600 miles from the North Pole, was 62 degrees Fahrenheit. And you’ve got all the waterfalls pouring off the Nordaustlandet ice cap.”

Nicklen snapped a photo — and, on this balmy day in the Arctic, captured a potent picture of climate change: A wall of ice in a steel-colored sea, with water pouring from the top of it.

“You go from the dead bears to this, and then look at the science — you come to understand that if we wait for the streets of New York or Miami to be flooded from rising sea levels, then we’ll be 200 years too late,” he says.

The image was used by National Geographic and Nature Conservancy magazine; Al Gore frequently uses it in his talks on climate change. “While I sleep, that image is up there working on behalf of our planet, and that’s exciting to me,” Nicklen says.

This Saturday — Earth Day 2017 — the image will have another venue: Nicklen’s new art gallery in New York’s fancy SoHo neighborhood, where the walls of a sunny room on West Broadway will display some of his recent work. Large prints will show up-close encounters with ghostly Kermode (“spirit”) bears, penguins parading along glaciers, seals swimming beneath icy waters — and disintegrating ice caps, like the one in Svalbard.

That image was captured three years ago, and Earth’s poles, where Nicklen spends a good deal of his time, have only gotten warmer. The Arctic and Antarctica are warming at twice the rate of the rest of the planet — and the last couple of years have seen record-breaking high temperatures at both poles.

They’ll be the first places where species — and entire ecosystems — disappear, and Nicklen may be one of the last humans to witness them before they go. He is known for braving extremes to document inaccessible and inhospitable environments, and disseminating his photos far and wide. His two most common venues are the pages — digital and print — of National Geographic, and an Instagram account with 3.1 million followers. A TED talk he gave in 2011 showcasing his work has been viewed nearly 2 million times. His gallery will add one more venue.

“My goal, really, is to break down the walls of apathy with this show, to have people start to care,” Nicklen says, sitting in the gallery on a recent warm April afternoon — one of the first pleasant days of a cold spring that followed a warm winter. Nicklen and his partner, Cristina Mittermeier (who, by the way, is also a researcher turned prolific nature photographer), were working quickly to get the space ready for this Saturday’s public opening. Three photographs were already on the walls, with many more to come. Workers moved about the room, and a large Bernese mountain dog named Bernie paced cheerfully back and forth. Mittermeier says she hopes people attending New York’s March for Science will make their way downtown to the gallery’s opening.

“This is another form of storytelling,” Nicklen says. “As a photographer, there’s nothing more beautiful than seeing your work really big. To be able to transport an audience into the scene. You want people to feel what you feel.”

Nicklen grew up in an Inuit community on Canada’s Baffin Island. He loved the Arctic environment, and initially decided to become a wildlife biologist for the Canadian government. Repeatedly, however, Nicklen saw the data he collected have little impact on policy makers who controlled the future of the ecosystems he worked in. “To come out with data sets, and to be completely ineffective, was such a slap in the face,” he remembers. He turned instead to photography. “That took another seven years, of being a starving photographer, just out there trying,” he says. His hard work paid off: For more than a decade and a half now, Nicklen’s pictures of our changing Arctic ecosystems have reached a global audience.

In the years since the expedition to Svalbard, Nicklen has continued to travel frequently — and to see the effects of three very hot years on Earth, each breaking the previous year’s record. He just returned from Antarctica, where some parts of the continent experienced their hottest summer ever. (Temperatures reached the 60s at one research station.)

Photo courtesy of Paul Nicklen

Antarctica is normally a frozen desert, but it rained the whole time Nicklen was there. “When it’s that warm and you get that much humidity and that much precipitation, it actually wipes out penguin populations,” he explains. “Their chicks are in that fluffy downy phase, and when they get wet, they freeze. You start to see a lot of dead penguins.” He’s observed other unnatural phenomena during other recent travels: Inuit fishermen whipping out their iPhones to photograph fish they’ve never seen before — new migrants borne north by warming waters.

Last March, as Nicklen and Mittermeier were in the Arctic traveling by dogsled on assignment for National Geographic, the ice split open beneath their sled. “One of the dogs drowned,” Mittermeier recalls. “Imagine the sled, with all of our gear, all of our cameras, pulling down on these poor dogs — it was just so dramatic.”

Experiences like these can be overwhelming reminders of the potency of climate change. “We are going to lose a lot of species. The silver lining for me, I guess, is that — it’ll be sad, but ultimately I can see us disappearing as a species if we keep on this path. The Earth will recover. But how many species are we going to take with us?”

But it’s not yet time to despair. Nicklen is starting to see signs of a shift. “Ten years ago I’d say the word ‘climate change’ in a lecture and people would kind of roll their eyes,” Nicklen says. People are starting to listen — but not fast enough. “The problem with humans is we sort of deny, deny, deny, panic,” he continues. “And right now we’re in the denial phase.”

Millennials are the exception. They’re already panicking — and that gives Nicklen hope. “Now, if I get any opposition, I’ll have 200 millennials rise to the occasion and take on the opposer. They get it, they’re smart, they’re not in denial. They’re willing to be inconvenienced. To see true change, we have to be uncomfortable.”

To help get the message out, Nicklen and Mittermeier have brought together some of their colleagues — other world-renowned photographers, filmmakers and storytellers — under the banner of a group they founded called SeaLegacy. The group’s mission is to spread the word about disappearing animals, environments, and the vulnerable seaside human communities that rely on them. “We’re like Jacques Cousteau 2.0,” Nicklen says. A portion of the gallery’s proceeds will go to support SeaLegacy.

But Nicklen’s plans have a high price tag — an estimated $8 million a year. The gallery, he hopes, will help cover some of those costs. “It’s another way to feed the beast to keep us out there working,” he says.

Paul Nicklen’s gallery will open on April 22, 2017, at 347 W. Broadway in New York.



Sam Varghese: Theresa May needs an election now. Else, she may lose even her own seat

After British Prime Minister Theresa May called a snap election on April 18, many journalists have been at pains to suck up to her and paint what is, in fact, a move born of desperation as some kind of astute political gambit.

This, despite the fact that this kind of sucking up to politicians has been, in the main, the reason why newspapers and magazines have gradually lost readership over the last two decades to other more rough-edged publications that speak the unvarnished truth.

The next British election is due in 2020. By then, Britain would have completed negotiations to leave the European Union, a decision the people voted for in a referendum in 2016. Even if things are not completely sewn up, the general points of the deal would be clear by then.

And given that the UK is bound to get the rough edge of the stick — what Australians call a shit sandwich — it is highly unlikely that May will be able to win any election after that.

Indeed, she would be lucky to retain her own seat.

After the talks begin on Britain’s exit, slowly the extent of what it has lost by leaving the EU will become apparent. Both France and Germany, the two major powers in the EU, are extremely annoyed about Brexit and seem determined to give the UK the worst deal they can.

As the conditions laid down by the remaining EU countries become clearer with the progress of negotiations, it will become more and more difficult for May to continue to put on a brave face and say that Britain will get a good deal from the EU.

She has called an election now to guarantee her survival. That is the plain and unvarnished truth.

But journalists are still willing to talk rubbish and write it too.

On the day that May acted, the Australian Broadcasting Corporation’s Europe correspondent Lisa Millar claimed that winning an election would give May a “stronger hand” to negotiate the terms for the UK’s exit from the EU.

This is bunkum of a very high order; May holds a hand with no cards at all and winning the poll on June 8 will only ensure that she is in power for the next five years. It gives her no additional leverage with the rest of the EU.

She thought she could steal a march on the EU by traipsing across the Atlantic and cosying up to the new orange-haired occupant of the White House, but has found that Donald Trump is not overly sentimental about the so-called “special relationship” now that Britain is not part of a much bigger trading bloc.

The newspaper headline below says it much better:

Then we had the delusional Greg Sheridan, the foreign editor of The Australian, who wrote: “Despite May being ahead in the polls, this is the mother of all gambles.” (High-grade rubbish; if she cannot win a poll now, she might as well commit hara-kiri.)

He went on: “She has a stable if narrow parliamentary majority, and three years of the government’s term to run. She has gambled it all.” (Gambled? Sheridan appears to be hallucinating.)

And on: “But as pro-Tory newspaper The Sun put it in three declarative decks of heading the next day: ‘PM’s snap poll will kill off Labour; She’ll smash rebel Tories too; Bid for clear Brexit mandate’.” (When you have to quote from The Sun to back up an argument, that means you have no smoke in your stack.)

There are loads of Anglophiles who still think Britain — sorry, Great Britain — is a colonial power when all it is now is the US’s poodle. But then we all need our little delusions to live, don’t we?

When will people like Sheridan ever learn?

Don Marti: NPM without sudo

Setting up a couple of Linux systems to work with FilterBubbler, which is one of the things that I'm up to at work now. FilterBubbler is a WebExtension, and the setup instructions use web-ext, so I need NPM. In order to keep all the NPM stuff under my own home directory, but still put the web-ext tool on my $PATH, I need to make one-line edits to three files.

One line in ~/.npmrc

prefix = ~/.npm

One line in ~/.gitignore

.npm/

One line in ~/.bashrc

export PATH="$PATH:$HOME/.npm/bin"

(My ~/.bashrc has a bunch of export PATH= lines so that when I add or remove one it's more likely to get a clean merge. Because my home directory is in git.) I think that's it. Now I can do

npm install --global web-ext

with no sudo or mess. And when I clone my home directory on another system it will just work.
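
A quick sanity check that everything is wired up (a sketch; assumes a new shell, or running source ~/.bashrc first):

# prefix should point at ~/.npm
npm config get prefix

# web-ext should resolve to ~/.npm/bin/web-ext
which web-ext
web-ext --version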

Based on: HowTo: npm global install without root privileges by Johannes Klose

Planet Debian: Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64)

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture.

If you are not interested in the story but want to check out the repository, just jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k) ─ about which I posted (by coincidence) exactly 2 years ago.

The two of us working on the port stopped in August or September of that year, after knowing that the copyright of the code to add support for this architecture in GCC would not be assigned to the FSF, so it would never be added to GCC upstream ─ unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which didn't happen so far).

But a few other key things contributed to the decision to stop working on that port, which bear a direct relationship to this story.

One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on.

For example, these chips are sometimes used as part of bigger devices by Samsung to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory.

Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be.

Yeah, sure, it's fun to create such a port, but it's tons of work to maintain and keep up to date forever, and with close to zero users it's very unrewarding.

Another thing that contributed to the decision to stop is that, at least in my opinion, 32-bit was not future-proof enough for general purpose computing, especially for new devices and ports starting to take off at that time. There was some incipient work to create another OpenRISC design for 64-bits, but it was still in an early phase.

My secret hope and ultimate goal was to be able to run as free a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computer or as servers for demanding workloads or complex services. So for me, even if feasible for the very austere and dedicated, OpenRISC or1k failed that test.

And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as Instruction Sets Should Be Free: The Case For RISC-V (pdf) and articles like RISC-V: An Open Standard for SoCs - The case for an open ISA in EE Times.

RISC-V (as the previous RISC-n marks) had been designed (or rather, was being designed, because it was and is an as yet unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book “Computer Architecture: A Quantitative Approach”. Other very capable people are also leading the project, doing the design and legwork to make it happen ─ see the list of contributors.

But, apart from throwing names, the project has many other merits.

Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed in more recent times (thus avoiding some mistakes and optimising for problems discovered more recently, as technology evolves); with more resources; with support for instruction widths of 32, 64 and even 128 bits; with a clean set of standard but optional extensions (atomics, float, double, quad, vector, ...); and with reserved space to add new extensions in ordered and compatible ways.

In the view of some people in the OpenRISC community, this unexpected development in a way made the ongoing update of OpenRISC for 64-bits irrelevant, and from what I know, for better or worse, all efforts on that front stopped.

Also interesting (if nothing else, for my purposes of running as free a system as possible) was that the people behind RISC-V had strong intentions to make it useful for creating modern hardware, and were pushing heavily towards that from the beginning.

And together with this announcement, or soon after, came the promise of free-ish hardware in the form of the lowRISC project. Although still today it seems to be a long way from actually shipping hardware, at least there was some prospect of getting it some day.

On top of all that, about the freedom aspect, both the Berkeley and lowRISC teams engaged since very early on with the free hardware community, including attending and presenting at OpenRISC events; and lowRISC intended to have as much free hardware as possible in their planned SoC.

Starting to work on the Debian port, this time for RISC-V

So in late 2014 I slowly started to prepare the Debian port, switching on and off.

The Userland spec was frozen in 2014 just before the announcement; the Privilege one is still not frozen today, so there was no need to rush.

There were plans to upstream the support in the toolchain for RISC-V (GNU binutils, glibc and GCC; Linux and other useful software like Qemu) in 2015, but sadly these did not materialise until late 2016 and 2017. One of the main reasons for the delay was the slowness in sorting out the copyright assignment of the code to the FSF (again). Still today, only binutils and GCC are upstreamed, and Linux and glibc depend on the Privilege spec being finished, so it will take a while.

After the experience with OpenRISC and the support in GCC, I didn't want to invest too much time, lest it all become another dead end due to lack of upstreaming ─ so I was just cross-compiling here and there, testing Qemu (which still today is very limited for this architecture, e.g. no network support and very limited character and block devices), trying to find and report bugs in the implementations, and sending patches (although I did not contribute much in the end).

Incompatible changes in the toolchain

In terms of compiling packages and building up a repository, things were complicated, and less mature and stable than OpenRISC was even back in 2014.

In theory, with the Userland spec frozen, regular programs (below the Operating System level) compiled at any time should still run today; but in practice there were several rounds of profound ─ or, at least, disruptive ─ changes in the toolchain before and while being upstreamed, which made the binary packages that I had built not work at all (changes in the dynamic loader, in the registers where arguments are stored when calling functions, etc.).

These major breakages happened several times already, and kind of unexpectedly ─ at least for the people not heavily involved in the implementation.

When the different pieces are upstreamed it is expected that these breakages won't happen; but still there's at least the fundamental bit of glibc, which will probably change things once again in incompatible ways before or while being upstreamed.

Outside Debian but within the FOSS / Linux world, the main project that I know of is from Fedora: some people there also started a port in mid 2016 and made great advances, but ─ from what I know ─ they put the project in the freezer in late 2016 until all such problems are resolved ─ they don't want to spend time rebootstrapping again and again.

What happened recently on the Debian front

In early 2016 I created the page for RISC-V in the Debian wiki, expecting that things were at last fully stable and the important bits of the toolchain upstreamed during that year ─ I was too optimistic.

Some other people (including Debian folks) have been contributing for a while, in the wiki, mailing lists and IRC channels, and in the RISC-V project mailing lists ─ you will see their names everywhere.

However, the combination of the lack of hardware, software not yet upstreamed and the shortcomings of emulators (chiefly Qemu) makes contributions hard and very tedious, so not much visible to the outside world has happened recently in terms of software.

The private repository-in-the-making

In late 2015 and beginning of 2016, having some free time in my hands and expecting that all things would coalesce quickly, I started to build a repository of binary packages in a more systematic way, with most of the basic software that one can expect in a basic Debian system (including things common to all Linux systems, and also specific Debian software like dpkg or apt, and even aptitude!).

After that I also built many others outside the basic system (more than 1000 source packages and 2000 or 3000 arch-dependent binary packages in total), especially popular libraries (e.g. boost, gtk+ versions 2 and 3), interpreters (several versions of lua, perl and python, both versions 2 and 3) and in general packages that are needed to build many other packages (like doxygen). Unfortunately, some of these most interesting packages do not compile cleanly (more because of obscure or silly errors than proper porting issues), so they are not included at the moment.

I intentionally avoided trying to compile the thousands of packages in the archive which would be of no use to anybody at this point; but many more could be compiled without much effort.

About the how: initially I started cross-compiling and using rebootstrap, which was of great help in the beginning. Some of the packages that I cross-compiled had bugs that I did not know how to debug without a “live” and “native” (within emulators) system, so I tried to switch to “natively” built packages very early on. For that I needed many packages built natively (like doxygen or cmake) which would be unnecessary had I remained cross-compiling ─ the host tools would be used in that case.

But this also forced me to eat my own dog food, which even if much slower and more tedious, was on the whole a more instructive experience; and above all, it helped to test and make sure that the tools and the whole stack were working well enough to build hundreds of packages.

Why the repository-in-the-making was private

Until now I did not attempt to make the repository available on-line, for several reasons.

First because it would be kind of useless to publish files that were not working or would soon stop working, due to the incompatible changes in the toolchain rendering many or most of the built packages useless. And because, for many months now, I have been expecting things to stabilise and to have something stable “really soon now” ─ but this hasn't happened yet.

Second because of lack of resources and time since mid 2016, and because I got some packages to compile only thanks to (mostly small and unimportant, but undocumented and unsaved) hacks, often working around temporary bugs and thus not worth sending upstream; and I couldn't share the binaries without sharing the full source and fulfilling licenses like the GNU GPL. I did a new round of clean rebuilds in the last few weeks, just finished; the result is close to 1000 arch-dependent packages.

And third, because of lack of demand. This changed in the last few weeks, when other people started to ask me to share the results even if incomplete or not working properly (I had one request in the past, but couldn't oblige in time at the time).

Finally, the work so far: repository now on-line

So finally, with the great help from Kurt Keville from MIT, and Bytemark sponsoring a machine where most of the packages were built, here we have the repository:

The lines for /etc/apt/sources.list are:

 deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
 deb-src [ signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main

The repository is signed with my key as Debian Developer, contained in the file /usr/share/keyrings/debian-keyring.gpg, which is part of the package debian-keyring (available from Debian and derivatives).
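
For the impatient, a minimal sketch of enabling the repository (as root; assuming your apt is recent enough to understand the signed-by option used above):

# Install the keyring the repository is signed with, add the source, update
apt-get install debian-keyring
echo 'deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main' >> /etc/apt/sources.list
apt-get update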

WARNING!!

This repository, though, is very much WIP, incomplete (some package dependencies cannot be fulfilled, and it's only a small percentage of the Debian archive, not trying to be comprehensive at the moment) and probably does not work at all on your system at this point, for the following reasons:

  • I did not create Debian packages for fundamental pieces like glibc, gcc, linux, etc.; although I hope that this happens soon after the next stable release (Stretch) is out of the door and the remaining pieces are upstreamed (help welcome).
     
  • There's no base system image yet, containing the software above plus a boot loader and a boot sequence to set up the system correctly. At the moment you have to provide your own or use one that you can find around the net. But hopefully we will provide an image soon. (Again, help welcome.)
     
  • Due to ongoing changes in the toolchain, if the version of the toolchain in your image is very new or very old, chances are that the binaries won't work at all. The one that I used was the master branch on 20161117 of their fork of the toolchain: https://github.com/riscv/riscv-gnu-toolchain

The combination of all these shortcomings is especially unfortunate, because without glibc provided it will be difficult to get the binaries to run at all; but some packages that are arch-dependent yet not too tied to libc or the dynamic loader will not be affected.

At least you can try one of the few static packages present in Debian, like the one in the package bash-static. When one removes moving parts like the dynamic loader and libc, since the basic machine instructions have been stable for several years now, it should work ─ but I wouldn't discard some dark magic that prevents even static binaries from working.

Still, I hope that the repository in its current state is useful to some people, at least to those who requested it. If one has the environment set up, it's easy to unpack the contents of the .deb files and try out the software (which often is not trivial or is very slow to compile, or needs lots of dependencies to be built first).
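
For example, a sketch of grabbing and unpacking a single package without apt (the exact file name and path are hypothetical; pick any .deb you find in the repository's pool):

# Download a package and unpack its contents into ./extracted
wget http://riscv.mit.edu/debian/pool/main/b/bash/bash-static_4.4-4_riscv64.deb
dpkg-deb -x bash-static_4.4-4_riscv64.deb extracted/
ls extracted/bin/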

... and finally, even if not useful at all for most people at the moment, by doing this I also hope that efforts like this spark your interest to contribute to free software, free hardware, or both! :-)


Cryptogram: Friday Squid Blogging: Video of Squid Attacking Another Squid

Wow, is this cool.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: How Cybercrooks Put the Beatdown on My Beats

Last month Yours Truly got snookered by a too-good-to-be-true online scam in which some dirtball hijacked an Amazon merchant’s account and used it to pimp steeply discounted electronics that he never intended to sell. Amazon refunded my money, and the legitimate seller never did figure out how his account was hacked. But such attacks are becoming more prevalent of late as crooks increasingly turn to online crimeware services that make it a cakewalk to cash out stolen passwords.

The elusive Sonos Play:5

The item at Amazon that drew me to this should-have-known-better bargain was a Sonos wireless speaker that is very pricey and as a consequence has hung on my wish list for quite some time. Then I noticed an established seller with great feedback on Amazon was advertising a “new” model of the same speaker for 32 percent off. So on March 4, I purchased it straight away — paying for it with my credit card via Amazon’s one-click checkout.

A day later I received a nice notice from the seller stating that the item had shipped. Even Amazon’s site seemed to be fooled because for several days Amazon’s package tracking system updated its progress slider bar steadily from left to right.

Suddenly the package seemed to stall, as did any updates about where it was or when it might arrive. This went on for almost a week. On March 10, I received an email from the legitimate owner of the seller’s account stating that his account had been hacked.

Identifying myself as a reporter, I asked the seller to tell me what he knew about how it all went down. He agreed to talk if I left his name out of it.

“Our seller’s account email address was changed,” he wrote. “One night everything was fine and the next morning our seller account had a email address not associated with us. We could not access our account for a week. Fake electronic products were added to our storefront.”

He couldn’t quite explain the fake tracking number claim, but nevertheless the tactic does seem to be part of an overall effort to delay suspicion on the part of the buyer while the crook seeks to maximize the number of scam sales in a short period of time.

“The hacker then indicated they were shipped with fake tracking numbers on both the fake products they added and the products we actually sell,” the seller wrote. “They were only looking to get funds through Amazon. We are working with Amazon to refund all money that were spent buying these false products.”

As these things go, the entire ordeal wasn’t awful — aside maybe from the six days spent in great anticipation of audiophilic nirvana (alas, after my refund I thought better of the purchase and put the item back on my wish list.) But apparently I was in plenty of good (or bad?) company.

The Wall Street Journal notes that in recent weeks “attackers have changed the bank-deposit information on Amazon accounts of active sellers to steal tens of thousands of dollars from each, according to several sellers and advisers. Attackers also have hacked into the Amazon accounts of sellers who haven’t used them recently to post nonexistent merchandise for sale at steep discounts in an attempt to pocket the cash.”

Perhaps fraudsters are becoming more brazen of late with hacked Amazon accounts, but the same scams mentioned above happen every day on plenty of other large merchandising sites. The sad reality is that hacked Amazon seller accounts have been available for years at underground shops for about half the price of a coffee at Starbucks.

The majority of this commerce is made possible by one or two large account credential vendors in the cybercrime underground, and these vendors have been collecting, vetting and reselling hacked account credentials at major e-commerce sites for years.

I have no idea where the thieves got the credentials for the guy whose account was used to fake sell the Sonos speaker. But it’s likely to have been from a site like SLILPP, a crime shop which specializes in selling hacked Amazon accounts. Currently, the site advertises more than 340,000 Amazon account usernames and passwords for sale.

The price is about USD $2.50 per credential pair. Buyers can select accounts by balance, country, associated credit/debit card type, card expiration date and last order date. Account credentials that also include the password to the victim’s associated email inbox can double the price.

The Amazon portion of SLILPP, a long-running fraud shop that at any given time has hundreds of thousands of Amazon account credentials for sale.

If memory serves correctly, SLILPP started off years ago mainly as a PayPal and eBay accounts seller (hence the “PP”). “Slil” is transliterated Russian for “слил,” which in this context may mean “leaked,” “download” or “to steal,” as in password data that has leaked or been stolen in other breaches. SLILPP has vastly expanded his store in the years since: It currently advertises more than 7.1 million credentials for sale from hundreds of popular bank and e-commerce sites.

The site’s proprietor has been at this game so long he probably deserves a story of his own soon, but for now I’ll say only that he seems to do a brisk business buying up credentials being gathered by credential-testing crime crews — cyber thieves who spend a great deal of time harvesting and enriching credentials stolen and/or leaked from major data breaches at social networking and e-commerce providers in recent years.

SLILPP’s main inventory page.

Fraudsters can take a list of credentials stolen from, say, the Myspace.com breach (in which some 427 million credentials were posted online) and see how many of those email address and password pairs from the MySpace accounts also work at hundreds of other bank and e-commerce sites.

Password thieves often then turn to crimeware-as-a-service tools like Sentry MBA, which can vastly simplify the process of checking a list of account credentials at multiple sites. To make blocking their password-checking activities more challenging for retailers and banks, these thieves often try to route the Internet traffic from their password-guessing tools through legions of open Web proxies, hacked PCs or even stolen/carded cloud computing instances.

PASSWORD RE-USE: THE ENGINE OF ALL ONLINE FRAUD

In response, many major retailers are being forced to alert customers when they see known account credential testing activity that results in a successful login (thus suggesting the user’s account credentials were replicated and compromised elsewhere). However, from the customer’s perspective, this is tantamount to the e-commerce provider experiencing a breach even though the user’s penchant for recycling their password across multiple sites is invariably the culprit.

There are a multitude of useful security lessons here, some of which bear repeating because their lack of general observance is the cause of most password woes today (aside from the fact that so many places still rely on passwords and stupid things like “secret questions” in the first place). First and foremost: Do not re-use the same password across multiple sites. Secondly, but equally important: Never re-use your email password anywhere else.

Also, with a few exceptions, password length is generally more important than password complexity, and complex passwords are difficult to remember anyway. I prefer to think in terms of “pass phrases,” which are more like sentences or verses that are easy to remember.

If you have difficulty recalling even unique passphrases, a password manager can help you pick and remember strong, unique passwords for each site you interact with, requiring only one strong master password to unlock any of them. Oh, and if the online account in question allows 2-factor authentication, be sure to take advantage of that.

I hope it’s clear that Amazon is just one of the many platforms where fraudsters lurk. SLILPP currently is selling stolen credentials for nearly 500 other banks and e-commerce sites. The full list of merchants targeted by this particularly bustling fraud shop is here (.txt file).

As for the “buyer beware” aspect of this tale, in retrospect there were several warning signs that I either ignored or neglected to assign much weight. For starters, the deal that snookered me was for a luxury product on sale for 32 percent off without much explanation as to why the apparently otherwise pristine item was so steeply discounted.

Also, while the seller had a stellar history of selling products on Amazon for many years (with overwhelmingly positive feedback on virtually all of his transactions) he did not have a history of selling the type of product that thieves tried to sell through his account. The old adage “If something seems too good to be true, it probably is,” ages really well in cyberspace.

Planet Debian: Ritesh Raj Sarraf: Indian Economy

This has finally gotten me to ask the question.


All this time since my childhood, I grew up reading, hearing and watching that the core of India's economy is agriculture, and that it needs the highest bracket in the budgets of the country. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores of Indian Rupees. The most recent to mention is INR 27420 crores waived off for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign.

Wow. A quick search yields that I am not alone in noticing this. In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go.

I do see a justification for these loan waivers though. As a democracy, to remain in power, it is the people you need support from. And if a majority of your 1.3 billion population is in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers, and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian States the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.


And not to forget how many people pay taxes in India. To quote an unofficial statement from an Indian media house:

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years.

The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income. The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes.

Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports point to a high dependence on indirect taxes in India and the current government has been trying to move away from that by increasing its reliance on direct taxes. Official data show that the dependence has come down from 5.93 percent in 2008-09 to 5.47 percent in 2015-16.


I can't say if I am correct in my understanding of this chart, or in my understanding of the economy of India; but if there's someone well versed in this topic who has studied the Indian economy, I'd be really interested to know what their take is. Because otherwise, from my own interpretation of the subject, I don't see the day far away when this economy will plummet.


PS: Image source Wikipedia https://upload.wikimedia.org/wikipedia/commons/2/2e/1951_to_2013_Trend_C...


Planet Linux Australia: Dave Hall: Many People Want To Talk

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow-up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

Planet Debian: Joachim Breitner: veggies: Haskell code generation from scratch

How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC’s intermediate language Core. Now formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to CMM to Assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC’s intermediate language Core to LLVM IR, with simplicity as the main design driving factor.

You can find the result in the GitHub repository of veggies (the name derives from “verifiable GHC”). If you clone that and run ./boot.sh some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.
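
For example, a session like this should work (a sketch; assuming the ghc-like driver accepts the usual flags):

# Build and run a HelloWorld with veggies instead of ghc
echo 'main = putStrLn "Hello, veggies!"' > Hello.hs
some-directory/bin/veggies Hello.hs -o Hello
./Hello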

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays – I implement these as needed – and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, the performance of the resulting code is abysmal: everything is boxed, i.e. represented as a pointer to some heap-allocated data, including “unboxed” integer values and “unboxed” tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), it will fill up your memory quickly.

Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places.

There is no support for concurrency.

Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

  • You always wondered how a compiler for Haskell actually works, and reading through a little over a thousand lines of code is less daunting than reading through the 34k lines of code of GHC’s backend.
  • You have wacky ideas about Code generation for Haskell that you want to experiment with.
  • You have wacky ideas about Haskell that require special support in the backend, and want to prototype that.
  • You want to see how I use the GHC API to provide a ghc-like experience. (I copied GHC’s Main.hs and inserted a few hooks, an approach I copied from GHCJS).
  • You want to learn about running Haskell programs efficiently, and starting from veggies, you can implement all the tricks of the trade yourself and enjoy observing the speed-ups you get.
  • You want to compile Haskell code to some weird platform that is supported by LLVM, but where you for some reason cannot run GHC’s runtime. (Because there are no threads and no garbage collection, the code generated by veggies does not require a runtime system.)
  • You want to formally verify Haskell code generation. Note that the code generator targets the same AST for LLVM IR that the vellvm2 project uses, so eventually, veggies can become a verified arrow in the top right corner map of the DeepSpec project.

So feel free to play around with veggies, and report any issues you have on the GitHub repository.

Sociological Images: Women and exclusion from long distance running

Flashback Friday, in honor of Kathrine Switzer running the Boston marathon 50 years after she was physically removed from the race because it was Men Only.

The first Olympic marathon was held in 1896. It was open to men only and was won by a Greek named Spyridon Louis. A woman named Melpomene snuck onto the marathon route. She finished an hour and a half behind Louis, but beat plenty of men who ran slower or dropped out.

Women snuck onto marathon courses from that point forward. Resistance to their participation was strong and, I believe, reflects men’s often unconscious fear that women might in fact be their equals. Why else would they so vociferously object to women’s participation? If women are, indeed, so weak and inferior, what’s to fear from their running alongside men?

Illustrating what seems to be a degree of panic above and beyond an imperative to follow the rules, the two photos below show the response to Syracuse University student Kathrine Switzer’s running the men-only Boston marathon in 1967 (Switzer registered for the marathon using her initials). After two miles, race officials realized one of their runners was a girl. Their response? To physically remove her from the race. Luckily, some of her male Syracuse teammates body-blocked the grab:

Why not let her run? The race was men-only, so her stats, whatever they may be, were invalid. Why take her out of the race by force? For the same reason that women were excluded to begin with: their actual potential is not obviously inferior to men’s. If it were, there’d be no risk in letting her run. The only sex that is threatened by co-ed sports is the sex whose superiority is assumed.

Women were allowed to begin competing in marathons starting in 1972 — not so very long ago — and, just like Melpomene, while they’ve been slower on average, individual women have been beating individual men ever since. In fact, women have been getting faster and faster, shrinking the gender gap in completion times, because achievement and opportunity go hand in hand.

Thanks Kathrine Switzer, and congratulations.

Originally posted in 2012.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowWalkaway Q&A: great debut novels, collections, and favorites



With less than a week to go until the debut of Walkaway, my next novel for adults, Portland’s Powell’s Bookstore has run a long Q&A with me about the book, my writing habits, my favorite reads, and many other subjects.


I’ll be at Powell’s on May 14 with Andy “Waxy” Baio as interlocutor — it’s one of 20+ cities I’m visiting on the US/Canada tour (the UK tour schedule is coming shortly!).

Besides your personal library, do you have any beloved collections?

A shocking number of them. I collect all my conference badges in a huge, sagging garland that threatens to pull my bookcase out of the wall; I collect neolithic stone axe-heads; I collect human knucklebones from dismembered Victorian articulated anatomical skeletons; I collect 1970s/’80s merchandise from Disney’s Haunted Mansions; I collect (and wear) vintage striped pajamas; I collect tiny service lapel-pins from engineering institutions, fraternal orders, etc; I collect stickers and cover everything I value in multiple layers of them; I collect vintage mechanical watches, skull rings, and fidget toys; I collect swizzle sticks, bourbon and fine bar glassware; I collect interesting tiki vases; and many, many other things.

What’s the strangest or most interesting job you’ve ever had?


I was the night watchman at a pizza parlor/petting zoo in Baja Sur, Mexico; I slept on a mattress on the floor of the pizzeria while the zoo animals wandered in and out. The building had previously been a brothel and sometimes drunks would show up in the middle of the night and I’d have to explain the change of use to them.

What scares you the most as a writer?

Being forgotten by history.

If someone were to write your biography, what would be the title and subtitle?

Privilege: Another White Guy Who Wrote Some Books.

Powell’s Q&A: Cory Doctorow, Author of ‘Walkaway’
[Powell’s]

Worse Than FailureError'd: When Good Dev Tools Go Bad

"I'd say that this brings new meaning to what a 'core dump' really is," Paul N. writes.

 

"Looks like someone at Google got tired of typing exit names all day," Shawn A. writes, "And in case you wondered, voice navigation actually spelled 'ASD'."

 

"At first glance, Google News got the wrong pic, but the more you think about it, maybe it didn't," wrote Matt S.

 

Mike S. writes, "I was hoping for overflow, but all I got was NaN."

 

"I got this dialog/error message pixel for pixel (originally 2930x128) today and boy I am glad that I have two monitors or I wouldn't be able to press the OK button," writes John A.

 

"I'd love to see the datetime logic that resulted in this gem," writes Baldrick.

 

Pramod V. writes, "Now this is what I call the ULTIMATE portable!"

 


Planet DebianRhonda D'Vine: Home

A fair number of things have happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, though fewer people came than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things have happened since, too. Mostly locally here in Vienna: several queer empowering groups formed around me; some of them existed already, some were founded with my help. We now have several great regular meetings for non-binary people, for queer polyamorous people (about whom we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I have started changing my name (and gender marker) officially. I had my first appointment at the corresponding bureau, and I hope it won't take too long, because I have to get my papers in time for booking my flight to Montreal, and at some point along the way my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


Planet DebianNoah Meyerhans: Stretch images for Amazon EC2, round 2

Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round. The notable changes include:

  • Don't install a local MTA.
  • Don't install busybox.
  • Ensure that /etc/machine-id is recreated at launch.
  • Fix the security.debian.org sources.list entry.
  • Enable Enhanced Networking and ENA support.
  • Images are owned by the official debian.org AWS account, rather than my personal account.

AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the cloud.debian.org BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on IRC.
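
If you would rather locate the images programmatically than browse the wiki, a boto3 sketch along these lines should work; note that the owner ID and image-name filter below are placeholders, and the real values are on the wiki page.

import boto3

# Sketch: list matching AMIs owned by one account in one region.
# Placeholder owner ID and name pattern; take the real ones from the wiki.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_images(
    Owners=["123456789012"],
    Filters=[{"Name": "name", "Values": ["debian-stretch-*"]}],
)
for image in sorted(response["Images"], key=lambda i: i["CreationDate"]):
    print(image["ImageId"], image["Name"], image["CreationDate"])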

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.6

Time for a new release of Rblpapi -- version 0.3.6 is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the seventh release since the package first appeared on CRAN last year. This release brings a very nice new function lookupSecurity() contributed by Kevin Jin as well as a number of small fixes and enhancements. Details below:

Changes in Rblpapi version 0.3.6 (2017-04-20)

  • bdh can now store in double preventing overflow (Whit and John in #205 closing #163)

  • bdp documentation has another override example

  • A new function lookupSecurity can search for securities, optionally filtered by yellow key (Kevin Jin and Dirk in #216 and #217 closing #215)

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE (Dirk in #220)

  • getBars and getTicks can now return data.table objects (Dirk in #221)

  • bds has improved internal protect logic via Rcpp::Shield (Dirk in #222)

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Sociological ImagesBill O’Reilly was paid more to leave FOX than FOX paid the women he harassed. Is this progress? Yes.

Sometimes you have to take the long view.

This week Bill O’Reilly — arguably the most powerful political commentator in America — was let go from his position at Fox News. The dismissal came grudgingly. News broke that he and Fox had paid out $13 million to women claiming O’Reilly sexually harassed them; Fox didn’t budge. They renewed his contract. There was outcry and there were protests. The company yawned. But when advertisers started dropping The O’Reilly Factor, they caved. O’Reilly is gone.

Fox clearly didn’t care about women — not “women” in the abstract, nor the women who worked at their company — but they did care about their bottom line. And so did the companies buying advertising space, who decided that it was bad PR to prop up a known sexual harasser. Perhaps the decision-makers at those companies also thought it was the right thing to do. Who knows.

Is this progress?

Donald Trump is on record gleefully explaining that being a celebrity gives him the ability to get away with sexual battery. That’s a crime, defined as unwanted contact with an “intimate part of the body” that is done to sexually arouse, gratify, or abuse. He’s president anyway.

And O’Reilly? He walked away with $25 million in severance, twice what all of his victims together have received in hush money. Fox gave Roger Ailes much more to go away: $40 million. Also ousted after multiple allegations of sexual harassment, his going-away present was also twice what the women he had harassed received.

Man, sexism really does pay.

But they’re gone. Ailes and O’Reilly are gone. Trump is President but Billy Bush, the Today host who cackled when Trump said “grab ’em by the pussy,” was fired, too.  Bill Cosby finally had some comeuppance after decades of sexual abuse and rape. At the very least, his reputation is destroyed. Maybe these “victories” — for women, for feminists, for equality, for human decency — were driven purely by greed. And arguably, for all intents and purposes, the men are getting away with it. Trump, Ailes, O’Reilly, Bush, and Cosby are all doing fine. Nobody’s in jail; everybody’s rich beyond belief.

But we know what they did.

Until at least the 1960s, sexual harassment — along with domestic violence, stalking, sexual assault, and rape — went largely unregulated, unnoticed, and unnamed. There was no language to even talk about what women experienced in the workplace. Certainly no outrage, no ruined reputations, no dismissals, and no severance packages. The phrase “sexual harassment” didn’t exist.

In 1964, with the passage of the Civil Rights Act, it became illegal to discriminate against women at work, but only because the politicians who opposed the bill thought adding sex to race, ethnicity, national origin, and religion would certainly tank it. That’s how ridiculous the idea of women’s rights was at the time. But that was then. Today almost no one thinks women shouldn’t have equal rights at work.

What has happened at Fox News, in Bill Cosby’s hotel rooms, in the Access Hollywood bus, and on election day is proof that sexism is alive and well. But it’s not as healthy as it once was. Thanks to hard work by activists, politicians, and citizens, things are getting better. Progress is usually incremental. It requires endurance. Change is slow. Excruciatingly so. And this is what it looks like.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramThe DEA Is Buying Cyberweapons from Hacking Team

The US Drug Enforcement Agency has purchased zero-day exploits from the cyberweapons arms manufacturer Hacking Team.

BoingBoing post.

TEDChris Burkard’s quest for the perfect Arctic surf

Along the remote, frozen coast of Hornstrandir National Park, photographer Chris Burkard (TED Talk: The Joy of Surfing in Ice-Cold Water) and his team were in search of the Shangri-La of ice-cold surfing. Instead, they encountered Iceland’s biggest storm to hit land in 25 years. Burkard captured the challenges and hardships of those few days in his film Under an Arctic Sky, a multi-dimensional journey that documents how even the darkest of storm clouds can still have a silver lining — and may just lead to the perfect surf.


Iceland, an ice-cold surfer’s backyard

This particular trip to Iceland wasn’t Burkard’s first — in fact, he had been almost 30 times before. What drew him back was the familiar call of a new adventure and, most important, a willing soul to take him and his team out to an incredibly remote region in the middle of winter. “Ultimately, there was this draw to just tell one more final story and to have one more challenge,” Burkard says. “That challenge for me was always to explore this one specific area, this one specific peninsula that hadn’t really been explored for surf potential.”

Photographer Chris Burkard and five surfers on the hunt for premium ice-cold surfing conditions.

Taking the plunge on the eve of a tempest

When Burkard and his band of surfers first made the long trek out to the isolated Westfjords of Iceland, they had only been on the water for a day and a half — the original plan was to be out for at least four days — before their boat captain warned them of an approaching storm, potentially one of the biggest he’d ever seen. At the time, no one knew how massive the storm would become. Severe storm Diddú, the first-ever named storm in Iceland’s history, brought wind speeds of more than 160mph, rain and snow, according to local media. “Luckily, we had the foresight to basically get off the water and get back on the land,” says Burkard. “That’s when it really came in, that was when it really hit for the next 24 hours.”


Refuge as the storm rages

The moment they touched land, the storm struck. Burkard and his team were left with a big decision to make — take shelter in the little town where they landed, or keep moving forward in their quest for surf? They decided to press onwards through the storm, but were met with more and more hazardous obstacles. A drive that was planned to take about six hours took almost 18 hours because of road closures due to avalanche danger. “We eventually didn’t make it — we got stuck in a small cabin in the middle of nowhere, having to dig our car out of multiple roadslides of snow,” says Burkard. “It just became a total nightmare.”

A man standing on the snowy coast with a surfboard, looking out over the water, with the northern lights above.

A silver lining, tinged by the aurora borealis

The surfers’ battle against the elements left them in an entirely different part of Iceland, so far north that they encountered the Northern Lights when their ambitious travels delivered them safely behind the storm’s ire. “On the backside of that swell came some of the most legendary surf conditions you could ever envision,” says Burkard. “I’d always dreamt of shooting surfers under the Northern Lights.”


“In terms of the most gratifying experience, I think it would have easily been just the opportunity to sit there under the Northern Lights and see perfect waves peeling,” says Burkard. “To capture that footage and to know: That was maybe the first time anybody’s ever done that.”

"[We want] to inspire people, to seek experiences outside their comfort zone, and to think about those crazy ideas that you have in your head," says Burkard. "Not to shun them off so quickly, but actually think about how you could accomplish it."

“[We want] to inspire people to seek experiences outside their comfort zone, and to think about those crazy ideas that you have in your head,” says Burkard. “Not to shun them off so quickly, but actually think about how you could accomplish it.”


On the never-ending road to adventure

“I guess I’m just continually trying to reinvent the way I can see the world,” says Burkard. “I think in the end, as a storyteller — not as a photographer or as a filmmaker, but as a storyteller — we aim to live the stories that we experience.”

Under an Arctic Sky will premiere at Tribeca Film Festival in April 2017. See dates and get tickets for the film tour.


TEDThese TED2017 speakers’ talks will be broadcast live to cinemas April 24 and 25

The speaker lineup for the TED2017 conference features more than 70 thinkers and doers from around the world — including a dozen or so whose unfiltered TED Talks will be broadcast live to movie theater audiences across the U.S. and Canada.

Presented with our partner BY Experience, our TED Cinema Experience event series offers three opportunities for audiences to join together and experience the TED2017 conference, and its first two evenings feature live TED Talks. Find out below who’s part of the live cinema broadcast (as with any live event, the speaker lineup is subject to change, of course!).

The listing below reflects U.S. and Canadian times; international audiences in 18 countries will experience TED captured live and time-shifted. Check locations and show times, and purchase tickets here >>

Opening Night Event: Monday, April 24, 2017
US: 8pm ET/ 7pm CT/ 6pm MT/ time-shifted to 8pm PT
Experience the electric opening night of TED, with half a dozen TED Talks and performances from:
Designer Anab Jain
Cyberspace analyst Laura Galante
Artist Titus Kaphar
Grandmaster and analyst Garry Kasparov
Author Tim Ferriss
The band OK Go
Rabbi Lord Jonathan Sacks

TED Prize Event: Tuesday, April 25, 2017
US: 8pm ET/ 7pm CT/ 6pm MT/ time-shifted to 8pm PT
On the second night of TED2017, the TED Prize screening offers a lineup of awe-inspiring speakers with big ideas for our future, including:
Champion Serena Williams
Physician and writer Atul Gawande
Data genius Anna Rosling Rönnlund
Movement artists Jon Boogz + Lil Buck
TED Prize winner Raj Panjabi, who will reveal for the first time plans to use his $1 million TED Prize to fund a creative, bold wish to spark global change.


TEDWho’s speaking at TED2017? Announcing our speaker lineup

TED2017: The Future You

Meet the co-creator of Siri, the founder of the world’s largest hedge fund, a Nobel-winning researcher who helped discover how we age, the head of the World Bank, and one of the greatest athletes of all time. We’re thrilled to announce the speaker lineup for TED2017, with a mix of illustrious names and the up-and-coming minds who are creating the future.

TED2017 is a week to explore the most powerful ideas of our time. In these mainstage sessions (including one in Spanish) we’ll ask – and try to answer – the big questions of the moment. This year’s TED will take on the hard political topics that are unavoidable in this turbulent era — and also look within, to the qualities that can make us into better people, and make our world a better place to be. And throughout the upcoming year, we’ll be making these mainstage talks into online TED Talks, sharing them free with the world.

Wherever you are: You can watch Session 1 (Monday night) and Session 4 (Tuesday night) live in cinemas, thanks to the TED Cinema Experience. Can’t make it to the movies on a weekday evening? On the weekend after TED2017 wraps, you can watch a best-of compilation — 90 minutes of the best moments from the week at TED — as a special weekend matinee. Click here to find the cinema closest to you.


TEDTED en Español: TED2017’s first-ever Spanish-language speaker session

Haz clic para leer este artículo en español >

For the first time ever, the annual TED Conference in Vancouver will feature an entire session of Spanish-language TED Talks, a bit of programming we felt called for celebration: We’ll be making the live session available for free online at live.ted.com on Tuesday, April 25, from 2:15 pm to 4:00 pm PT.

Titled “Conexión y sentido,” or “Connection and meaning,” the session will feature six speakers from a range of backgrounds and disciplines. They include artist Tomas Saraceno (Argentina/Germany); poet, musician and singer Jorge Drexler (Uruguay/Spain); former presidential candidate and peace activist Ingrid Betancourt (Colombia/France/UK); primatologist Isabel Behncke Izquierdo (Chile/US); physicist Gabriela Gonzalez (Argentina/US); and journalist Jorge Ramos (Mexico/US).

The session marks the official launch of TED en Español, a sweeping initiative from TED designed to build content and community in the Spanish-speaking world. The TED en Español team has already been hard at work laying the groundwork for a major effort, actively curating content in Spanish and beginning to share via dedicated Spanish-language channels, including:

…and more than 2,000 TED Talks with Spanish subtitles

TED-Ed Clubs are also underway in Spanish, and mobile users can now download a Spanish-language version of the TED mobile app for iOS and Android.

TED also has plans to bring in new partners to support TED en Español as well as develop distribution deals for the Spanish-language content. In the second half of 2017, we’ll be curating and producing a TED en Español speaker salon event at our NYC theater.

“Native Spanish speakers make up a massive piece of the TED global audience,” said Gerry Garbulsky, TED en Español Director. “By expanding our focus to other languages, we’ll both unearth new troves of ideas, as well as better equip ourselves to share them with a broader audience.”


CryptogramSmart TV Hack via the Broadcast Signal

This is impressive:

The proof-of-concept exploit uses a low-cost transmitter to embed malicious commands into a rogue TV signal. That signal is then broadcast to nearby devices. It worked against two fully updated TV models made by Samsung. By exploiting two known security flaws in the Web browsers running in the background, the attack was able to gain highly privileged root access to the TVs. By revising the attack to target similar browser bugs found in other sets, the technique would likely work on a much wider range of TVs.

Worse Than FailureCodeSOD: ByteBool

Tony Hoare has called null references his “billion dollar mistake”. Dealing with nulls and their consequences has created a large number of bugs and eaten a lot of developer time. It’s certainly bad enough when you understand nulls and why they exist, but Benjamin Soddy inherited code from someone who absolutely didn’t.

First, there’s our new type, the ByteBool:

    public enum ByteBool
    {
        IsFalse,
        IsTrue,
        IsNull
    }

Not quite FileNotFound territory. Based on this code alone, you might think that this is some legacy .NET 1.1 code, back before .NET had nullable value types, and thus there was no way to set a boolean to null.

You’d be wrong, however. Because as a sibling to ByteBool, we have the class NullHelper:

public class NullHelper
    {
        /// <summary>
        /// Like the Helper.GetSafeInteger but returns a negative one instead of zero which may be a valid number
        /// </summary>
        /// <param name="value">The value.</param>
        /// <returns></returns>
        public static int ConvertNullableInt(int? value)
        {
            if (value.HasValue)
            {
                return value.Value;
            }
            else
            {
                return -1;
            }
        }

        /// <summary>
        /// Like the Helper.GetSafeInteger but returns a negative one instead of zero which may be a valid number
        /// </summary>
        /// <param name="value">The value.</param>
        /// <returns></returns>
        public static int? ConvertBackToNullableInt(int value)
        {
            if (value != -1)
            {
                return (int?)value;
            }
            else
            {
                return null;
            }
        }

        /// <summary>
        /// Converts the null bool to byte bool.
        /// </summary>
        /// <param name="boolean">The boolean.</param>
        /// <returns></returns>
        public static ByteBool ConvertNullBoolToByteBool(bool? boolean)
        {
            if (boolean.HasValue)
            {
                return (boolean.Value ? ByteBool.IsTrue : ByteBool.IsFalse);
            }
            else
            {
                return ByteBool.IsNull;
            }
        }

        /// <summary>
        /// Converts the byte bool to null bool to.
        /// </summary>
        /// <param name="byteBool">The byte bool.</param>
        /// <returns></returns>
        public static bool? ConvertByteBoolToNullBoolTo(ByteBool byteBool)
        {
            switch (byteBool)
                {
                        case ByteBool.IsFalse:
                    return false;
                break;
                case ByteBool.IsTrue:
                    return true;
                break;
                case ByteBool.IsNull:
                    return null;
                break;
                default:
                    return false;
                break;
                }
        }
    }

I’m no expert on the subject, but the comments alone read to me like poetry.

Like the Helper.GetSafeInteger,
but returns a negative one instead of zero which may be a valid number,
the value,
Like the Helper.GetSafeInteger,
but returns a negative one instead of zero which may be a valid number,
the value,
Converts the null bool to byte bool,
the boolean,
Converts the byte bool to null bool to,
the byte bool

Benjamin junked this code, but ByteBool is still a legend around his office. “Are you sure you aren’t ByteBooling this?” is the polite way of saying “your code is bad and you should feel bad” during code-reviews. “I found a ByteBool,” is called out when someone trawls through legacy code, in the same tone of voice as a lifeguard finding a “floater” in the pool.
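
For what it’s worth, the underlying trap is visible in miniature outside C# as well. Here is a rough Python sketch (mine, not from the submitted code) of why ConvertNullableInt’s -1 sentinel is lossy:

# Encoding "no value" as -1 breaks down as soon as -1 is itself
# a legitimate value.
def to_sentinel(value):            # mirrors ConvertNullableInt
    return -1 if value is None else value

def from_sentinel(value):          # mirrors ConvertBackToNullableInt
    return None if value == -1 else value

print(from_sentinel(to_sentinel(-1)))   # prints None, not -1: data lost

Nullable value types exist precisely so that the “no value” state stays out of band instead of squatting on a perfectly good number.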


Planet DebianDirk Eddelbuettel: RcppQuantuccia 0.0.1

New package! And, as it happens, effectively a subset or variant of one of my oldest packages, RQuantLib.

Fairly recently, Peter Caspers started to put together a header-only subset of QuantLib. He called this Quantuccia and, when I asked, said that it stands for "little sister" of QuantLib. Very nice.

One design goal is to keep Quantuccia header-only. This makes distribution and deployment much easier. In the fifteen years that we have worked with QuantLib by providing the R bindings via RQuantLib, it has always been a concern to provide current QuantLib libraries on all required operating systems. Many people have helped over the years, but it is still an issue; e.g. right now we have no Windows package, as there is no library to build it against.

Enter RcppQuantuccia. It only depends on R, Rcpp (for seamless R and C++ integration) and BH, which brings the Boost headers. This will make it much easier to provide Windows and macOS binaries.

So what can it do right now? We started with calendaring: you can compute dates according to different (ISDA and other) business-day conventions, and compute holiday schedules. Here is one example, computing inter alia under the NYSE holiday schedule common for US equity and futures markets:

R> library(RcppQuantuccia)
R> fromD <- as.Date("2017-01-01")
R> toD <- as.Date("2017-12-31")
R> getHolidays(fromD, toD)        # default calendar, i.e. TARGET
[1] "2017-04-14" "2017-04-17" "2017-05-01" "2017-12-25" "2017-12-26"
R> setCalendar("UnitedStates")
R> getHolidays(fromD, toD)        # US aka US::Settlement
[1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-05-29" "2017-07-04" "2017-09-04"
[7] "2017-10-09" "2017-11-10" "2017-11-23" "2017-12-25"
R> setCalendar("UnitedStates::NYSE")
R> getHolidays(fromD, toD)        # US New York Stock Exchange
[1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04"
[7] "2017-09-04" "2017-11-23" "2017-12-25"
R>

The GitHub repo already has a few more calendars, and more are expected. Help is of course welcome for both this, and for porting over actual quantitative finance calculations.

More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: TeX Live 2017 pretest started

Preparations for the release of TeX Live 2017 started a few days ago with the freeze of updates in TeX Live 2016 and the announcement of the official start of the pretest period. That means we invite people to test the new release and help fix bugs.

Notable changes are listed on the pretest page; I only want to report on the changes in the core infrastructure: changes in the user/sys mode of fmtutil and updmap, and the introduction of the tlmgr shell.

User/sys mode of fmtutil and updmap

We (both at TeX Live and Debian) regularly got error reports about fonts not being found or formats not being updated etc. The reason for all of this was unmistakably the same: the user had called updmap or fmtutil without the -sys option, thus creating a copy of a set of configuration files under their home directory, shadowing all later updates on the system side.

The reason for this behavior is the widespread misinformation (outdated information) on the internet suggesting that one simply call updmap.

To counteract this, we have changed the behavior so that both updmap and fmtutil now accept a new argument -user (in addition to the already present -sys), and refuse to run when called with neither of them given, printing a warning and linking to an explanation page. That page provides more detailed documentation and best-practice examples.

tlmgr shell

The TeX Live Manager got a new “shell” mode, invoked by tlmgr shell. Details still need to be fleshed out, but in principle it is possible to use get and set to query and set some of the options normally passed via the command line, and to use all the actions as defined in the documentation. The advantage of this is that it is not necessary to load the tlpdb for each invocation. Here is a short example:

[~] tlmgr shell
protocol 1
tlmgr> load local
OK
tlmgr> load remote
tlmgr: package repository /home/norbert/public_html/tlpretest (verified)
OK
tlmgr> update --list
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
update:   bchart             [147k]: local:    27496, source:    43928
...
update:   xindy              [535k]: local:    43873, source:    43934
OK
tlmgr> update --all
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
[ 1/22, ??:??/??:??] update: bchart [147k] (27496 -> 43928) ... done
[ 2/22, 00:00/00:00] update: biber [1041k] (43873 -> 43910) ... done
...
[22/22, 00:50/00:50] update: xindy [535k] (43873 -> 43934) ... done
running mktexlsr ...
done running mktexlsr.
...
OK
tlmgr> quit
tlmgr: package log updated: /home/norbert/tl/2017/texmf-var/web2c/tlmgr.log 
[~] 

Please test and report bugs to our mailing list.

Thanks

,

Planet DebianReproducible builds folks: Reproducible Builds: week 103 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 9 and Saturday April 15 2017:

Upcoming events

On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

Media coverage

Jake Edge wrote a summary of Vagrant Cascadian's talk on Reproducible Builds at LibrePlanet.

Toolchain development and fixes

Ximin Luo forwarded patches to GCC for BUILD_PATH_PREFIX_MAP support.

With this patch backported to GCC-6, as well as a patched dpkg to set the environment variable, he scheduled ~3,300 packages that are unreproducible in unstable-amd64 but reproducible in testing-amd64 - because we vary the build path in the former but not the latter case. Our infrastructure ran these in just under 3 days, and we reproduced ~1,700 extra packages.

This is about 6.5% of ~26,100 Debian source packages, and about 1/2 of the ones whose irreproducibility is due to build-path issues. Most of the rest are not related to GCC, such as things built by R, OCaml, Erlang, LLVM, PDF IDs, etc.

(The dip afterwards, in the graph linked above, is due to reverting back to an unpatched GCC-6, but we'll be rebasing the patch continually over the next few weeks so the graph should stay up.)
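
For readers unfamiliar with the idea, here is a toy Python sketch (an illustration of the concept only, not the actual GCC patch) of what a build-path prefix map does: paths under the build directory are rewritten to a canonical location, so builds run from different directories can emit identical output.

import os

# Toy prefix map: rewrite any path under the build directory to a
# canonical prefix, so embedded paths no longer differ between builds.
def apply_prefix_map(path, prefix_map):
    for src, dst in prefix_map:          # (source_prefix, replacement)
        if path.startswith(src):
            return dst + path[len(src):]
    return path

build_dir = os.getcwd()
embedded = build_dir + "/src/main.c"     # what __FILE__ might expand to
print(apply_prefix_map(embedded, [(build_dir, "/usr/src/pkg")]))
# -> /usr/src/pkg/src/main.c, independent of where the build ran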

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Chris West:

Reviews of unreproducible packages

38 package reviews have been added, 111 have been updated and 85 have been removed this week, adding to our knowledge about identified issues.

6 issue types have been updated:

Added:

Updated:

Removed:

diffoscope development

Development continued in git on the experimental branch:

Chris Lamb:

  • Don't crash on invalid archives (#833697)
  • Tidy up some other code

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)
  • Chris West (1)

Misc.

This week's edition was written by Ximin Luo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityTracing Spam: Diet Pills from Beltway Bandits

Reading junk spam messages isn’t exactly my idea of a good time, but sometimes fun can be had when you take a moment to check who really sent the email. Here’s the simple story of how a recent spam email advertising celebrity “diet pills” was traced back to a Washington, D.C.-area defense contractor that builds tactical communications systems for the U.S. military and intelligence communities.

Your average spam email can contain a great deal of information about the systems used to blast junk email. If you’re lucky, it may even offer insight into the organization that owns the networked resources (computers, mobile devices) which have been hacked for use in sending or relaying junk messages.

Earlier this month, anti-spam activist and expert Ron Guilmette found himself poring over the “headers” of a spam message that set off a curious alert. “Headers” are the addressing and routing details that accompany each message. They generally go unnoticed because email programs hide them unless you know how and where to look.

Let’s take the headers from this particular email — from April 12, 2017 — as an example. To the uninitiated, email headers may seem like an overwhelming dump of information. But there really are only a few things we’re interested in here (Guilmette’s actual email address has been modified to “ronsdomain.example.com” in the otherwise unaltered spam message headers below):

Return-Path: <dan@gtacs.com>
X-Original-To: rfg-myspace@ronsdomain.example.com
Delivered-To: rfg-myspace@ronsdomain.example.com
Received: from host.psttsxserver.com (host.tracesystems.com [72.52.186.80])
by subdomain.ronsdomain.example.com (Postfix) with ESMTP id 5FE083AE87
for <rfg-myspace@ronsdomain.example.com>; Wed, 12 Apr 2017 13:37:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=gtacs.com;
s=default; h=MIME-Version:Content-Type:Date:Message-ID:Subject:To:From:
Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:Content-Description:
Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:
In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
List-Post:List-Owner:List-Archive;
Received: from [186.226.237.238] (port=41986 helo=[127.0.0.1])
by host.psttsxserver.com with esmtpsa (TLSv1:DHE-RSA-AES256-SHA:256)
(Exim 4.87)
(envelope-from <dan@gtacs.com>)
id 1cyP1J-0004K8-OR
for rfg-myspace@ronsdomain.example.com; Wed, 12 Apr 2017 16:37:42 -0400
From: dan@gtacs.com
To: rfg-myspace@ronsdomain.example.com
Subject: Discover The Secret How Movies & Pop Stars Are Still In Shape
Message-ID: <F5E99999.A1F67C94585E5E2F@gtacs.com>
X-Priority: 3
Importance: Normal
Date: Wed, 12 Apr 2017 22:37:39 +0200
X-Original-Content-Type: multipart/alternative;
boundary=”–InfrawareEmailBoundaryDepth1_FD5E8CC5–”
MIME-Version: 1.0
X-Mailer: Infraware POLARIS Mobile Mailer v2.5
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname – host.psttsxserver.com
X-AntiAbuse: Original Domain – ronsdomain.example.com
X-AntiAbuse: Originator/Caller UID/GID – [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain – gtacs.com
X-Get-Message-Sender-Via: host.psttsxserver.com: authenticated_id: dan@gtacs.com
X-Authenticated-Sender: host.psttsxserver.com: dan@gtacs.com

Celebrities always have to look good and that’s as hard as you might
{… snipped…}

In this case, the return address is dan@gtacs.com. The other bit to notice is the Internet address and domain referenced in the fourth line, after “Received,” which reads: “from host.psttsxserver.com (host.tracesystems.com [72.52.186.80])”
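
If you would rather pull those fields out programmatically, a minimal sketch with Python’s standard email module looks like this (the filename is hypothetical):

from email import message_from_string
from email.policy import default

# Minimal sketch: extract the interesting fields from a saved message.
with open("spam.eml") as f:
    msg = message_from_string(f.read(), policy=default)

print("Return-Path:", msg["Return-Path"])
# Each relaying server prepends its own Received header, so the list
# reads newest-first; the bottom entry is closest to the true origin.
for hop in msg.get_all("Received", []):
    print("hop:", " ".join(hop.split()))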

Gtacs.com belongs to the Trace Systems GTACS Team Portal, a Web site explaining that GTACS is part of the Trace Systems Team, which contracts to provide “a full range of tactical communications systems, systems engineering, integration, installation and technical support services to the Department of Defense (DoD), Department of Homeland Security (DHS), and Intelligence Community customers.” The company lists some of its customers here.

The home page of Trace Systems.

Both Gtacs.com and tracesystems.com say the companies “provide Cybersecurity and Intelligence expertise in support of national security interests.” “GTACS is a contract vehicle that will be used by a variety of customers within the scope of C3T systems, equipment, services and data,” the company’s site says. The “C3T” part is military speak for “Command, Control, Communications, and Tactical.”

Passive domain name system (DNS) records maintained by Farsight Security for the Internet address listed in the spam headers — 72.52.186.80 — show that gtacs.com was at one time on that same Internet address along with many domains and subdomains associated with Trace Systems.

It is true that some of an email’s header information can be forged. For example, spammers and their tools can falsify the email address in the “from:” line of the message, as well as in the “reply-to:” portion of the missive. But neither appears to have been forged in this particular piece of pharmacy spam.

The Gtacs.com home page.

I forwarded this spam message back to Dan@gtacs.com, the apparent sender. Receiving no response from Dan after several days, I grew concerned that cybercriminals might be rooting around inside the networks of this defense contractor that does communications for the U.S. military. Clumsy and not terribly bright spammers, but intruders to be sure. So I forwarded the spam message to a Linkedin contact at Trace Systems who does incident response contracting work for the company.

My Linkedin source forwarded the inquiry to a “task lead” at Trace who said he’d been informed gtacs.com wasn’t a Trace Systems domain. Since plenty of facts pointed to the opposite conclusion, I escalated the inquiry to Matthew Sodano, a vice president and chief information officer at Trace Systems Inc.

“The domain and site in question is hosted and maintained for us by an outside provider,” Sodano said. “We have alerted them to this issue and they are investigating. The account has been disabled.”

Presumably, the company’s “outside provider” was Power Storm Technologies, the company that apparently owns the servers which sent the unauthorized spam from Dan@gtacs.com. Power Storm did not return messages seeking comment.

According to Guilmette, whoever Dan is or was at Gtacs.com, he got his account compromised by some fairly inept spammers who evidently did not know or didn’t care that they were inside of a U.S. defense contractor which specializes in custom military-grade communications. Instead, the intruders chose to use those systems in a way almost guaranteed to call attention to the compromised account and hacked servers used to forward the junk email.

“Some…contractor who works for a Vienna, Va. based government/military ‘cybersecurity’ contractor company has apparently lost his outbound email credentials (which are probably useful also for remote login) to a spammer who, I believe, based on the available evidence, is most likely located in Romania,” Guilmette wrote in an email to this author.

Guilmette told KrebsOnSecurity that he’s been tracking this particular pill spammer since Sept. 2015. Asked why he’s so certain the same guy is responsible for this and other specific spams, Guilmette shared that the spammer composes his spam messages with the same telltale “signature” in the HTML hyperlink that forms the bulk of each message: markup showing it was composed with an extremely old version of Microsoft Office.

This spammer apparently didn’t mind spamming Web-based discussion lists. For example, he even sent one of his celebrity diet pill scams to a list maintained by the American Registry for Internet Numbers (ARIN), the regional Internet registry for Canada and the United States. ARIN’s list scrubbed the HTML file that the spammer attached to the message. Clicking the included link to view the scrubbed attachment sent to the ARIN list turns up this page. And if you look near the top of that page, you’ll see something that says:

”  … xmlns:m=”http://schemas.microsoft.com/office/2004/12/omml” …”

“Of course, there are a fair number of regular people who are also still using this ancient MS Office to compose emails, but as far as I can tell, this is the only big-time spammer who is using this at the moment,” Guilmette said. “I’ve got dozens and dozens of spams, all from this same guy, stretching back about 18 months.  They all have the same style and all were composed with “/office/2004/12/”.
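
To make the fingerprinting idea concrete, here is a rough sketch (my own, not Guilmette’s actual tooling) of scanning a local mbox archive for that ancient Office namespace; the mailbox path is hypothetical:

import mailbox

# The namespace fragment that betrays a message composed with the
# ancient Office version described above.
FINGERPRINT = b"schemas.microsoft.com/office/2004/12/omml"

def flagged_messages(mbox_path):
    # Walk every HTML part of every message and yield the IDs of hits.
    for msg in mailbox.mbox(mbox_path):
        for part in msg.walk():
            if part.get_content_type() == "text/html":
                body = part.get_payload(decode=True) or b""
                if FINGERPRINT in body:
                    yield msg.get("Message-ID", "<no id>")
                    break

for mid in flagged_messages("spam.mbox"):
    print("fingerprint match:", mid)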

Guilmette claims that the same spammers who’ve been sending that ancient Office spam from defense contractors also have been spamming from compromised “Internet of Things” devices, like a hacked video conferencing system based in China. Guilmette says the spammer has been known to send out malicious links in email that use malicious JavaScript exploits to snarf credentials stored on the compromised machine, and he guesses that Dan@gtacs.com probably opened one of the booby-trapped JavaScript links.

“When and if he finds any, he uses those stolen credentials to send out yet more spam via the mail server of the ‘legit’ company,” Guilmette said. “And because the spams are now coming out of ‘legit’ mail servers belonging to ‘legit’ companies, they never get blocked, by Spamhaus or by any other blacklists.”

We can only hope that the spammer who pulled this off doesn’t ever realize the probable value of this specific set of login credentials that he has managed to make off with, among many others, Guilmette said.

“If he did realize what he has in his hands, I feel sure that the Russians and/or the Chinese would be more than happy to buy those credentials from him, probably reimbursing him more for those than any amount he could hope to make in years of spamming.”

This isn’t the first time a small email oops may have created a big problem for a Washington-area cybersecurity defense contractor. Last month, Defense Point Security — which provides cyber contracting services to the security operations center for DHS’s Immigration and Customs Enforcement (ICE) division — alerted some 200-300 current and former employees that their W-2 tax information was given away to scammers after an employee fell for a phishing scam that spoofed the boss.

Want to know more about how to find and read email headers? This site has a handy reference showing you how to reveal headers on more than two-dozen different email programs, including Outlook, Yahoo!, Hotmail and Gmail. This primer from HowToGeek.com explains what kinds of information you can find in email headers and what they all mean.

Planet DebianLior Kaplan: Open source @ Midburn, the Israeli burning man

This year I decided to participate in Midburn, the Israeli version of burning man. While thinking of doing something different from my usual habits, I found myself volunteering in the Midburn IT department and getting a task to make it an open source project. Back into my comfort zone, while trying to escape it.

I found a community of volunteers from the Israeli high-tech scene who work together to build the infrastructure for Midburn. In many ways, it’s already an open source community in the way it works. One critical and formal fact was lacking, though, and that’s the license for the code. After some discussion we decided on the Apache License 2.0, and I started the process of changing the license, taking it seriously and making sure it goes “by the rules”.

Our code is available on GitHub at https://github.com/Midburn/. And while it still needs tidying up, I prefer the release early and often approach. The main idea we want to bring to the Burn infrastructure is using Spark as a database, and we have already begun talking with parallel teams of other burn events. I’ll follow up on our technological agenda / vision. In the meanwhile, you are more than welcome to comment on the code or join one of the teams (e.g. the volunteers module to organize who does which shift during the event).

Planet Linux AustraliaDave Hall: Drupal, We Need To Talk

Update 21 April: I've published a followup post with details of the BoF to be held at DrupalCon Baltimore on Tuesday 25 April. I hope to see you there so we can continue the conversation.

Drupal has a problem. No, not that problem.

We live in a post peak Drupal world. Drupal peaked some time during the Drupal 8 development cycle. I’ve had conversations with quite a few people who feel that we’ve lost momentum. DrupalCon attendances peaked in 2014, Google search impressions haven’t returned to their 2009 level, core downloads have trended down since 2015. We need to accept this and talk about what it means for the future of Drupal.

Technically Drupal 8 is impressive. Unfortunately the uptake has been very slow. A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor.

In the five years Drupal 8 was being developed there was a fundamental shift in software architecture. During this time we witnessed the rise of microservices. Drupal is a monolithic application that tries to do everything. Don't worry, this isn't trying to rekindle the smallcore debate from last decade.

Today it is more common to see an application built using a handful of Laravel microservices, a couple of golang services and one built with nodejs. These applications often have multiple frontends: web (react, vuejs etc), mobile apps and an API. This is more effort to build out, but it is likely to be less effort to maintain long term.

I have heard so many excuses for why Drupal 8 adoption is so slow. After a year I think it is safe to say the community is in denial. Drupal 8 won't be as popular as D7.

Why isn't this being talked about publicly? Is it because there is a commercial interest in perpetuating the myth? Are the businesses built on offering Drupal services worried about scaring away customers? Adobe, Sitecore and others would point to such blog posts to attack Drupal. Sure, admitting we have a problem could cause some short term pain. But if we don't have the conversation we will go the way of Joomla; an irrelevant product that continues its slow decline.

Drupal needs to decide what is its future. The community is full of smart people, we should be talking about the future. This needs to be a public conversation, not something that is discussed in small groups in dark corners.

I don't think we will ever see Drupal become a collection of microservices, but I do think we need to become more modular. It is time for Drupal to pivot. I think we need to cut features and decouple the components. I think it is time for us to get back to our roots, but modernise at the same time.

Drupal has always been a content management system. It does not need to be a content delivery system. This goes beyond "Decoupled (Headless) Drupal". Drupal should become a "content hub" with pluggable workflows for creating and managing that content.

We should adopt the unix approach: do one thing and do it well. This approach would allow Drupal to be "just another service" that complements the application.

What do you think is needed to arrest the decline of Drupal? What should Drupal 9 look like? Let's have the conversation.

Worse Than FailureWhen Computers Fly

In the Before Times, the Ancients would gather in well-sheltered caverns, gather to themselves foods blessed by the gods, drink strange, unnaturally colored concoctions, and perform the Rite of the LAN Party.

In the era when the Internet was accessed by modem, to have any hope of playing a game with usable latency, you had to get all the players in the same place. This meant packing up your desktop in a car, driving to your friend’s house, and setting up your computer on whatever horizontal surface hadn’t already been claimed by another guest.

In the late 90s, Shawn W was the lead support tech for a small-town ISP. He had little supervision, and lots of networking equipment at his disposal. The owners were laid back, so Shawn got to throw a LAN party every Saturday. There was a solid core group that turned out pretty much every week, but there was also a rotating cast of newbies which made great fodder for practicing your railgun snipes on “Lost Hallways”.

One weekend, one of those newbies was Derek. Derek’s main multiplayer experience was getting stoned and playing split-screen Goldeneye, in Big-Head mode. Seeing multiple computers all networked together was pretty mind-blowing for him. He ended up not gaming very much, but instead wandered around, asking questions about the setup and tripping over network cables.

“Man,” he complained, after he unplugged Shawn for the fifth time, “those shouldn’t be so easy to pull out. Like, you need a lock on it. I could make you one, I’ve got some of those industrial strength zip-ties in the car, like they could lock that cable in real tight, like they hold luggage on your car and stuff. Industrial grade, man.”

No one wanted Derek to modify their computers, and given how he had made a hobby of tripping over any cable, even ones taped to the floor, having them get yanked out was better than having their computers yanked off the folding tables. Derek was a mild nuisance, but he knew how to make up for it: he was a good sport about getting fragged, he was happy to share his stash with anybody who wanted to step behind the building, and he paid for all the pizza.

Shawn had gotten stiffed a few times, so having someone else foot the bill for all the pizza meant he was willing to forgive a lot. When Derek called him at work on Monday, Shawn was pretty well disposed to him.

“Hey, I was talking to Murphy about the party this weekend- wait, you don’t know Murph. He’s cool, man, he’s my neighbor, and we like, game a bunch? We were wondering, we’d like to set up our own network.”

Shawn was happy to help; Derek was even a customer of the ISP, so it was technically work.

“We were thinking, like, where do you get a really long network cable?” Derek asked.

“Like, how long? Is Murphy going to be setting up in a different room of your house?”

“Nah, man, he’s gonna stay in his house. He’s my neighbor, right? We could just string a cable between our windows. Murph’s got X-Wing vs. Tie-Fighter, and it’s just like the movies. I’ve even got a cool joystick for it.”

Shawn said, “Come on by the office, and I can just give you one.” They had a few thousand-foot spools of cable. Shawn made him a 100-ish foot long crossover cable: Derek didn’t have a hub, but with only two computers in the network, a crossover cable connects the two machines directly.

Derek picked it up, and called back a while later, looking for some help on configuring the network. “Hey, man, I got the cable run, and like, super tied down, but um… how do we make it work? I see the green lights on the network cards.”

Shawn walked him through configuring the network, and proving it worked via a ping test. Derek was ecstatic, and started to launch into the virtues of the TIE Interceptor in death-match, when there was a sudden crash and the sound of shattering glass. Derek screamed a curse.

“Um… are you okay?” Shawn asked.

“MY COMPUTER JUST FLEW OUT THE WINDOW!” Derek cried.

“Wait… what?” Shawn tried to imagine what that might entail- he remembered Derek mentioning that it was on a folding table, maybe it had collapsed and somehow the computer had fallen out the window?

“Oh, man, look at it out there, it’s totally trashed.”

“How did your computer fly out the window?” Shawn asked.

“A car drove by and caught the network cable.”

“Wait… does Murphy live across the street?”

“Yeah, why?”

Derek’s “logic” had been that their street wasn’t very busy. He had run the cable across the street, and weighed it down with rocks, thinking that would be safe. Since that put some tension on the cable, he didn’t want it to pop out of the network card, so he had broken out those “industrial grade” zip ties, and secured the cable to his computer’s case.

“I figured it’d be fine,” Derek said glumly. “I guess I was wrong. Hey, do you know anything about building computers?”


Planet Linux AustraliaBen Martin: CNC Alloy Candelabra

While learning Fusion 360 I thought it would be fun to flex my new knowledge by cutting out curved shapes from alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will let you put tabs on curved surfaces, but you don't get to place them manually in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

I have also noticed some issues with tabs on curves when an exact multiple of the cut depth per pass aligns perfectly with the top of the tab height. Making sure that case doesn't happen avoids the resulting undesired cuts; a quick check is sketched below. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.
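
A rough Python sanity check for that tab/pass-depth interaction (the numbers are made up for illustration, not from an actual job):

def tab_top_hits_pass_boundary(stock_mm, pass_depth_mm, tab_height_mm,
                               tolerance_mm=0.01):
    """Return True if some cutting pass bottoms out exactly at the tab top."""
    tab_top_depth = stock_mm - tab_height_mm  # depth at which the tab starts
    depth = 0.0
    while depth < stock_mm:
        depth = min(depth + pass_depth_mm, stock_mm)
        if abs(depth - tab_top_depth) < tolerance_mm:
            return True
    return False

# 6 mm stock, 1 mm per pass, 2 mm tabs: the fourth pass lands exactly
# on the tab top at 4 mm -- the problem case.
print(tab_top_hits_pass_boundary(6.0, 1.0, 2.0))  # True
print(tab_top_hits_pass_boundary(6.0, 1.1, 2.0))  # False

If the function returns True, nudging the pass depth slightly avoids the issue.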

The four candles run off a small buck converter and are wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

Krebs on SecurityInterContinental Hotel Chain Breach Expands

In December 2016, KrebsOnSecurity broke the news that fraud experts at various banks were seeing a pattern suggesting a widespread credit card breach across some 5,000 hotels worldwide owned by InterContinental Hotels Group (IHG). In February, IHG acknowledged a breach but said it appeared to involve only a dozen properties. Now, IHG has released data showing that cash registers at more than 1,000 of its properties were compromised with malicious software designed to siphon customer debit and credit card data.

An InterContinental hotel in New York City.

Headquartered in Denham, U.K., IHG operates more than 5,000 hotels across nearly 100 countries. The company’s dozen brands include Holiday Inn, Holiday Inn Express, InterContinental, Kimpton Hotels, and Crowne Plaza.

According to a statement released by IHG, the investigation “identified signs of the operation of malware designed to access payment card data from cards used onsite at front desks at certain IHG-branded franchise hotel locations between September 29, 2016 and December 29, 2016.”

IHG didn’t say how many properties in total were affected, although it has published a state-by-state lookup tool available here. I counted 28 in my hometown state of Virginia alone; California had more than double that, and Alabama almost the same number as Virginia. So north of 1,000 locations nationwide seems very likely.

Update, April 19, 11:09 a.m. ET: Danish geek Christian Sonne writes that his research on the state lookup tool shows there are at least 1,175 properties on the list so far. The breakdown so far is: 1,175 properties across the USA and Puerto Rico in the following brands, Holiday Inn Express (781), Holiday Inn (176), Candlewood Suites (120), Staybridge Suites (54), Crowne Plaza (30), Hotel Indigo (11), Holiday Inn Resort (3).

Original story:

IHG has been offering its franchised properties a free examination by an outside computer forensic team hired to look for signs of the same malware infestation known to have hit front desk systems at other properties. But not all property owners have been anxious to take the company up on that offer. As a consequence, there may be more breached hotel locations yet to be added to the state lookup tool.

A letter from IHG to franchise customers, offering to pay for the cyber forensics examination.

IHG franchisees who accepted the security inspections were told they would receive a consolidated report sharing information specific to the property, and that “your acquiring bank and/or processor may contact you regarding this investigation.”

IHG also has been trying to steer franchised properties toward adopting its “secure payment solution” (SPS) that ensures cardholder data remains encrypted at all times and at every “hop” across the electronic transaction. According to IHG, properties that used its solution prior to the initial intrusion on Sept. 29, 2016 were not affected.

“Many more properties implemented SPS after September 29, 2016, and the implementation of SPS ended the ability of the malware to find payment card data,” IHG wrote.

Card-stealing cyber thieves have broken into some of the largest hotel chains over the past few years. Hotel brands that have acknowledged card breaches over the last year after prompting by KrebsOnSecurity include Kimpton Hotels, Trump Hotels (twice), Hilton, Mandarin Oriental, and White Lodging (twice). Card breaches also have hit hospitality chains Starwood Hotels and Hyatt.

In many of those incidents, thieves planted malicious software on the point-of-sale devices at restaurants and bars inside of the hotel chains. Point-of-sale based malware has driven most of the credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malicious code usually is installed via hacked remote administration tools. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell that data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to purchase high-priced electronics and gift cards from big-box stores like Target and Best Buy.

It’s a good bet that none of the above-mentioned companies were running point-to-point encryption (P2PE) solutions before they started hemorrhaging customer credit cards. P2PE is an added cost for sure, but it can protect customer card data even on point-of-sale systems that are already compromised because the malware can no longer read the data going across the wire.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the unauthorized transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

,

Planet DebianSteinar H. Gunderson: Chinese HDMI-to-SDI converters

I often need to convert signals from HDMI to SDI (and occasionally back). This requires a box of some sort, and eBay obliges; there's a bunch of different sellers of the same devices, selling them for around $20–25. They don't seem to have a brand name, but they are invariably sold as 3G-SDI converters (meaning they should go up to 1080p60) and look like this:

There are also corresponding SDI-to-HDMI converters that look pretty much the same, except they convert the other way. (They're easy to confuse, but that's not a problem unique to them.)

I've used them for a while now, and there are pros and cons. They seem reliable enough, and they're 1/4th the price of e.g. Blackmagic's Micro converters, which is a real bargain. However, there are also some issues:

  • For 3G-SDI, they output level A only, with no option for level B. (In fact, there are no options at all.) Level A is the most sane standard, and also what most equipment uses, but there exists certain older equipment that only works with level B.
  • They don't have reclocking chips, so their timing accuracy is not superb. I managed to borrow a Phabrix SDI analyzer and measured the jitter; with a very short cable, I got approximately 0.85–0.95 UI (varying a bit), whereas a Blackmagic converter gave me 0.23–0.24 UI (much more stable). This may be a problem at very long cable lengths, although I haven't tried 100m runs and such.
  • When converting to and from RGB, they seem to assume Rec. 601 Y'CbCr coefficients even for HD resolutions. This means the colors will be a bit off in some cases, although for most people, it will be hard to notice without looking at it side-by-side; the sketch after this list shows the size of the mismatch. (I believe the HDMI-to-SDI and SDI-to-HDMI converters make the same mistake, so that the errors cancel out if you just want to use a pair as HDMI extenders. Also, if your signal is Y'CbCr already, you don't need to care.)
  • They don't insert SMPTE 352M payload ID. (Supposedly, this is because the SDI chip they use, called GV7600, is slightly out-of-standard on purpose in order to avoid paying expensive licensing fees to SMPTE.) Normally, you wouldn't need to care, but 3G-SDI actually requires this, and worse, Blackmagic's equipment (at least the Duo 2, and I've seen reports about the ATEMs as well) enforces it. If you try to run e.g. 1080p50 through them and into a Duo 2, it will be misdetected as “1080p25, no signal”. There's no workaround that I know of.
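
To get a feel for the size of the Rec. 601/709 mixup in the third point, here is a rough Python illustration using the standard luma coefficients (the test color is arbitrary, and real converters work on integer-coded video, so this is only a sketch):

def luma(r, g, b, coeffs):
    kr, kg, kb = coeffs
    return kr * r + kg * g + kb * b

REC601 = (0.299, 0.587, 0.114)      # SD coefficients
REC709 = (0.2126, 0.7152, 0.0722)   # HD coefficients

r, g, b = 0.9, 0.2, 0.4  # arbitrary saturated color, range 0..1
y601 = luma(r, g, b, REC601)
y709 = luma(r, g, b, REC709)
print(f"Y' 601: {y601:.4f}  Y' 709: {y709:.4f}  diff: {y601 - y709:+.4f}")
# Y' 601: 0.4321  Y' 709: 0.3633  diff: +0.0688

For grays (r == g == b) the two agree exactly, which is part of why the error is easy to miss without a side-by-side comparison.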

The last issue is by far the worst, but it only affects 3G-SDI resolutions. 720p60, 1080p30 and 1080i60 all work fine. And to be fair, not even Blackmagic's own converters actually send 352M correctly most of the time…

I wish there were a way I could publish this somewhere people would actually read it before buying these things, but without a name, it's hard for people to find it. They're great value for money, and I wouldn't hesitate to recommend them for almost all use… but then, there's that almost. :-)

Planet DebianVincent Fourmond: make-theme-image: a script to make yourself an idea of a icon theme

Some time ago I created a utils repository on GitHub to publish miscellaneous scripts, but it's only recently that I have started to really populate it. One of my recent works is the make-theme-image script, which downloads an icon theme package, grabs relevant, user-specifiable icons, and arranges them in a neat montage. The images displayed are the result of running

~ make-theme-image gnome-icon-theme moblin-icon-theme

This is quite useful to get a good idea of the icons available in a package. You can select the icons you want to display using the -w option. The following command should provide you with a decent overview of the icon themes present in Debian:

apt search -- -icon-theme | grep / | cut -d/ -f1 | xargs make-theme-image

I hope you find it useful! In any case, it's on GitHub, so feel free to patch and share.

Planet DebianSteve Kemp: 3d-Printing is cool

I've heard about 3d-printing a lot in the past, although the hype seems to have mostly died down. My view has always been "That seems cool", coupled with "Everybody says making the models is very hard", and "the process itself is fiddly & time-consuming".

I've been sporadically working on a project for a few months now which displays tram-departure times; this is part of my drive to "hardware" things with Arduino/ESP8266 devices. Most visitors to our flat have commented on it at least once, and over time it has become gradually more and more user-friendly. Initially it was just a toy project for myself, so everything was hard-coded in the source, but over time that changed - which I mentioned here (specifically the access-point setup):

  • When it boots up, unconfigured, it starts as an access-point.
    • So you can connect and configure the WiFi network it should join.
  • Once it's up and running you can point a web-browser at it.
    • This lets you toggle the backlight, change the timezone, and the tram-stop.
    • These values are persisted to flash so reboots will remember everything.

I've now wired up an input-button to the device too, experimenting with the different ways that a single button can carry out multiple actions (the timing logic is sketched after this list):

  • Press & release - toggle the backlight.
  • Press & release twice - a double-click if you like - show a message.
  • Press, hold for 1 second, then release - re-sync the date/time & tram-data.
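
Roughly, the classification comes down to press duration and the gap between releases. A small Python sketch of the idea (the real firmware is C++ on the ESP8266, and these thresholds are invented for illustration):

LONG_PRESS_MS = 1000   # held at least this long: long-press
DOUBLE_GAP_MS = 400    # second release within this gap: double-click

def classify(press_duration_ms, gap_since_last_release_ms=None):
    """Map a button event to one of the three actions."""
    if press_duration_ms >= LONG_PRESS_MS:
        return "re-sync the date/time & tram-data"
    if (gap_since_last_release_ms is not None
            and gap_since_last_release_ms <= DOUBLE_GAP_MS):
        return "show a message (double-click)"
    return "toggle the backlight (single click)"

print(classify(120))        # toggle the backlight (single click)
print(classify(110, 250))   # show a message (double-click)
print(classify(1500))       # re-sync the date/time & tram-data

The subtlety the sketch glosses over is that a single click can only be reported once the double-click window has expired, so the firmware has to delay the single-click action by that gap.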

Anyway the software is neat, and I can't think of anything obvious to change. So let's move on to the real topic of this post: 3D printing.

I randomly remembered that I'd heard about an online site holding 3D-models, and on a whim I searched for "4x20 LCD". That led me to this design, which is exactly what I was looking for. Just like open-source software, we're now living in a world where you can get open-source hardware! How cool is that?

I had to trust the dimensions of the model, and obviously I was going to mount my new button into the box, rather than the knob shown. But having a model was great. I could download it, for free, and I could view it online at viewstl.com.

But with a model obtained, the next step was getting it printed. I found a bunch of commercial companies here in Europe who would print a model and ship it to me, but when I uploaded the model they priced it at €90+. Too much. I'd almost lost interest when I stumbled across a site which provides a gateway to a series of individuals/companies who will print things for you, on demand: 3dhubs.

Once again I uploaded my model, and this time I was able to select a guy in the same city as me. He printed my model for 1/3-1/4 of the price of the companies I'd found, and sent me fun pictures of the object while it was in the process of being printed.

To recap I started like this:

Then I boxed it in cardboard which looked better than nothing, but still not terribly great:

Now I've found an online case-design for free, got it printed cheaply by a volunteer (feels like the wrong word, after all I did pay him), and I have something which looks significantly more professional:

Inside it looks as neat as you would expect:

Of course the case still cost 5 times as much as the actual hardware involved (button: €0.05, processor-board €2.00 and LCD I2C display €3.00). But I've gone from being somebody who had zero experience with hardware-based projects 4 months ago, to somebody who has built a project which is functional and "pretty".

The internet really is a glorious thing. Using it for learning, and coding is good, using it for building actual physical parts too? That's something I never could have predicted a few years ago and I can see myself doing it more in the future.

Sure the case is a little rough around the edges, but I suspect it is now only a matter of time until I learn how to design my own models. An obvious extension is to add a status-LED above the switch, for example. How hard can it be to add a new hole to a model? (Hell I could just drill it!)

Sociological ImagesWhere did your 2016 tax dollars go?

More than 80% of the US federal government’s budget comes from payroll and income taxes. The National Priorities Project is dedicated to helping Americans understand how that money is spent. Here’s the data for 2016:

The highest individual income “top” tax rate in history was 94%; that was the rate at which any income above $200,000 was taxed in 1945, equivalent to almost $2.8 million today. Today it’s 39.6%. The Nobel laureate Peter Diamond and the economist Emmanuel Saez argue that the top tax rate should optimally be 73%.

Last year corporate taxes made up only about 11% of the federal government’s revenue; this is down from a historic high of almost 40% in 1943. This is partly because of a low tax rate of 35% and partly because of legal loopholes. According to the Project’s 7 Tax Facts for 2017, 100 of the 258 most profitable Fortune 500 companies paid zero in taxes for at least one of the last eight years. General Electric, Priceline, and PG&E haven’t paid a penny in taxes for almost a decade.

Visit the National Priorities Project here and find out how each state benefits from federal tax dollars or fiddle around with how you would organize American priorities.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Geek FeminismSome posts from the last year on inclusion

A sort of topic-specific collection of links from about the last year, broadly talking about inclusion in communities, online and off, especially in geek(y) spaces.

What kind of discourses and conversations do we want to encourage and have?

  • Nalo Hopkinson’s WisCon 2016 Guest of Honor speech: “There are many people who do good in this field, who perform small and large actions of kindness and welcome every day. I’d like to encourage more of that.” In this speech Hopkinson announced the Lemonade Award.
  • “Looking back on a decade in online fandom social justice: unexpurgated version”, by sqbr: “And just because I’m avoiding someone socially doesn’t mean I should ignore what they have to say, and won’t end up facing complex ethical choices involving them. My approach right now is to discuss it with people I trust. Figuring out who those people are, and learning to make myself vulnerable in front of them, has been part of the journey.”
  • “On conversations”, by Katherine Daniels: “I would love for these people who have had so many opportunities already given to them to think about what they are taking away from our collective conversations by continuing to dominate them, and to maybe take a step back and suggest someone else for that opportunity to speak instead.”
  • “Towards a More Welcoming War” by Mary Anne Mohanraj (originally published in WisCon Chronicles 9: Intersections and Alliances, Aqueduct Press, 2015): “This is where I start thinking about what makes an effective community intervention. This is where I wish I knew some people well enough to pick up a phone.”
  • “The chemistry of discourse”, by Abi Sutherland: “What we really need for free speech is a varied ecosystem of different moderators, different regimes, different conversations. How do those spaces relate to one another when Twitter, Reddit, and the chans flatten the subcultural walls between them?”
  • “Hot Allostatic Load”, by porpentine, in The New Inquiry: “This is about disposability from a trans feminine perspective, through the lens of an artistic career. It’s about being human trash….Call-out Culture as Ritual Disposability”
  • “The Ethics of Mob Justice”, by Sady Doyle, in In These Times: “But, again, there’s no eliminating the existence of Internet shaming, even if you wanted to—and if you did, you’d eliminate a lot of healthy dialogue and teachable moments right along with it. At best, progressive people who recognize the necessity of some healthy shame can only alter the forms shaming takes.”

How do we reduce online harassment?

  • “Paths: a YA comic about online harassment”, by Mikki Kendall: “‘It’s not that big of a deal. She’ll get over it.’ ‘Even if she does, that doesn’t make this okay. What’s wrong with you?'”
  • “On a technicality”, by Eevee: “There’s a human tendency to measure peace as though it were the inverse of volume: the louder people get, the less peaceful it is. We then try to optimize for the least arguing.”
  • “Moderating Harassment in Twitter with Blockbots”, by ethnographer R. Stuart Geiger, on the Berkeley Institute for Data Science site: “In the paper, I analyze blockbot projects as counterpublics…I found a substantial amount of collective sensemaking in these groups, which can be seen in the intense debates that sometimes take place over defining standards of blockworthyness…..I also think it is important distinguish between the right to speak and the right to be heard, particularly in privately owned social networking sites.”
  • “The Real Name Fallacy”, by J. Nathan Matias, on The Coral Project site: “People often say that online behavior would improve if every comment system forced people to use their real names….Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment….designers need to commit to testing the outcomes of efforts at preventing and responding to social problems.”

What does it take to make your community more inclusive?

  • “Want more inclusivity at your conference? Add childcare.” by Mel Chua and then “Beyond ‘Childcare Available’: 4 Tips for Making Events Parent-Friendly”, by Camille Acey: “I’ve pulled together a few ideas to help move ‘Childcare Available’ from just a word on a page to an actual living breathing service that empowers people with children to learn/grow alongside their peers, engage in projects they care about, and frankly just have a little break from the rigors of childcare.”
  • Project Hearing: “Project Hearing is a website that consolidates information about technology tools, websites, and applications that deaf and hard of hearing people can use to move around in the hearing world.”
  • “Conference access, and related topics”, by Emily Short: “This is an area where different forms of accessibility are often going at right angles.”
  • “SciPy 2016 Retrospective”, by Camille Scott: “SciPy, by my account, is a curious microcosm of the academic open source community as a whole.”
  • “Notes from Abstractions”, by Coral Sheldon-Hess: “Pittsburgh’s Code & Supply just held a huge (1500 people) conference over the last three days, and of course I’d signed up to attend months ago, because 1) local 2) affordable 3) tech conference 4) with a code of conduct they seemed serious about. Plus, “Abstractions” is a really cool name for a tech conference.”
  • “The letter I just sent to Odyssey Con”, by Sigrid Ellis: “None of us can know the future, of course. And I always hope for the best, from everyone. But I would hate for Odyssey Con to find itself in the midst of another controversy with these men at the center.” (This is Ellis’s post from April 7, 2016, a year before all three of Odyssey Con’s Guests of Honor chose not to attend Odyssey Con because of the very issue Ellis discussed.)
  • “The realities of organizing a community tech conference: an ill-advised rant”, by Rebecca Miller-Webster: “…there’s a lot of unpaid labor that happens at conferences, especially community conferences, that no one seems to talk about. The unpaid labor of conference organizers. Not only do people not talk about it, but in the narrative around conferences as work, these participants are almost always the bad guys.”
  • “Emotional Labor and Diversity in Community Management”, by Jeremy Preacher, originally a speech in the Community Management Summit at Game Developers Conference 2016: “The thing with emotional labor is that it’s generally invisible — both to the people benefiting from the work, and to the people doing it. People who are good at it tend to do it unconsciously — it’s one of the things we’re talking about when we say a community manager has ‘good instincts’.” … “What all of these strategies do, what thinking about the emotional labor cost of participation adds up to, is make space for your lurkers to join in.”
  • “White Corporate Feminism”, by Sarah Sharp: “Even though Grace Hopper was hosted in Atlanta that year, a city that is 56% African American, there weren’t that many women of color attending.”
  • “You say hello”, by wundergeek on “Go Make Me a Sandwich (how not to sell games to women)”: “Of course, this is made harder by the fact that I hate losing. And there will be people who will celebrate, people who call this a victory, which only intensifies my feelings of defeat. My feelings of weakness. I feel like I’m giving up, and it kills me because I’m competitive! I’m contrary! Telling me not to do a thing is enough to make me want to do the thing. I don’t give up on things and I hate losing. But in this situation, I have to accept that there is no winning play. No win condition. I’m one person at war with an entire culture, and there just aren’t enough people who give a damn, and I’m not willing to continue sacrificing my health and well-being on the altar of moral obligation. If this fight is so important, then let someone else fight it for a while.”
  • “No One Should Feel Alone”, by Natalie Luhrs: “In addition to listening and believing–which is 101 level work, honestly–there are other things we can do: we can hold space for people to speak their truth and we can hold everyone to account, regardless of their social or professional position in our community. We can look out for newcomers–writers and fans alike–and make them welcome and follow through on our promise that we will have their backs. We can try to help people form connections with each other, so they are not isolated and alone.”
  • “Equality Credentials”, by Sara Ahmed: “Feminist work in addressing institutional failure can be used as evidence of institutional success. The very labour of feminist critique can end up supporting what is being critiqued. The tools you introduce to address a problem can be used as indicators that a problem has been addressed.”
  • “Shock and Care: an essay about art, politics and responsibility”, by Harry Giles (Content note: includes discussion of sex, violence and self-injury in an artistic context): “So, in a political situation in which care is both exceptionally necessary and exceptionally underprovided, acts of care begin to look politically radical. To care is to act against the grain of social and economic orthodoxy: to advocate care is, in the present moment, to advocate a kind of political rupture. But by its nature, care must be a rupture which involves taking account of, centring, and, most importantly, taking responsibility for those for whom you are caring. Is providing care thus a valuable avenue of artistic exploration? Is the art of care a form of radical political art? Is care, in a society which devalues care, itself shocking?”

Planet DebianNorbert Preining: Gaming: Firewatch

A nice little game, Firewatch, puts you into a fire watch tower in Wyoming, with only a walkie-talkie connecting you to your supervisor Delilah. A so-called “first person mystery adventure” with very nice graphics and a great atmosphere.

Starting with your trip to the watch tower, the game sends the player on a series of “missions”, during which more and more clues about a mysterious disappearance are revealed. The game’s progression is rather straightforward: one has hardly any choices, and it is practically impossible to miss something or fail in some way.

The big plus of the game is the great atmosphere, the funny dialogues with Delilah, the story that pulls you into the game, and the development of the character(s). The tower, the cave, all the places one visits are delicately designed with lots of personality, making this a very human-like game.

What is weak is the ending. During the game I was always thinking about whether I should tell Delilah everything, or keep some things secret. But in the end nothing matters; everything ends with a simple escape in the helicopter, without tying up any of the loose ends. Something of a pity for such a beautiful game to leave the player somewhat unsatisfied at the end.

But although the ending wasn’t that good, I still enjoyed the game more than I expected. Due to the simple flow it won’t keep you busy for many hours, but spread over a few evenings (as I played it), it was a nice break from all the riddle games I love so much.

Google AdsenseAdSense best practice: Set yourself up for success

7 minutes to read

In part two of our series for new publishers on turning their #PassionIntoProfit, we’ll cover three steps you can take to set yourself up for success. Following these insights won’t guarantee success; however, it will give your ads more opportunities to perform, while making sure they complement your site’s user experience.

To get started, we’ll discuss three topics:

  • Optimal ad placement
  • Mobile optimization
  • How to avoid policy violations

If you’re looking for an introductory overview of AdSense, check out the first part of this series where we cover the basics of AdSense.


Make the most of your ad placement

It’s important to keep the user in mind when deciding where to place ads. You want your ads to perform well, but adding too many can clutter your page for users.

Before you decide where to place ads, ask these 4 questions:

  • How do your users engage with your site? 
  • Where is their attention going to be focused? 
  • How can you place ads in focus areas without getting in the way of your content? 
  • How can you keep all of your pages easy to navigate?

It’s a best practice to place ads next to the content your visitors are most likely to focus on, but take the time to make sure ads are clearly separate and identifiable from your content. Remember that ads that confuse or mislead visitors violate the AdSense program policies and are not allowed; other misleading placement practices can likewise lead to policy violations.

Here are a few ad placement best practices that may improve the viewability of your ads.

  • For mobile sites, place a 320x100 ad above the fold, where it will be visible to everyone who lands on your page. 
  • For desktop sites, use vertical ad units, which remain visible as users move around a page. The 300x600 layout is one of the fastest growing ad sizes, and is popular with advertisers who want to increase brand awareness.
  • Try to lay out your content in a way that holds your visitor’s attention and entices them to continue reading. 

Optimize for mobile

Today, there are more searches on mobile than on desktop. This increase in mobile traffic has raised user expectations of mobile site performance, and meeting those expectations continues to be a challenge for publishers. 53% of mobile users will abandon a page that takes longer than 3 seconds to load, with that number rising to 74% for pages that take longer than 5 seconds.


Despite this heavy shift towards mobile, a recent study by Think With Google found that the average time it takes to load a mobile landing page is 22 seconds.

Publishers who don’t adapt to mobile user needs are less likely to retain new visitors. This reduces their ability to convert site visits into ad revenue.

To get your mobile site up to speed, keep these insights in mind:

  1. Dedicate time to discover current issues on your mobile site: Tools like PageSpeed Insights, Mobile-Friendly Test, and Web Page Test will highlight the areas of your site that need attention.
  2. Optimize your mobile site for speed: Get rid of bulky content, reduce the number of server requests, and consolidate data and analytics tags. Prioritize loading the elements that are visible above the fold first - styling, JavaScript logic, and images accessed after a tap, scroll, or swipe can be loaded later.
  3. Monitor your progress: Run regular A/B tests to audit performance and remove anything that lowers speed or harms user experience. Whenever you update your analytics data, evaluate your requests and remove outdated collection pixels.

Once you’ve got your fast mobile site running smoothly on small-screens, take these additional steps to maximize your profits and ensure that you’re compliant with the AdSense program policies.

  • Help prevent accidental clicks by moving ad units 150 pixels away from your content.
  • Use responsive ad units, which automatically adapt your ad sizes to fit any screen.  
  • Use the 320x100 ad size instead of the 320x50 where possible. By using the 320x100 ad unit, you allow the 320x50 ad to compete as well, doubling the fill-rate competition.
  • Place a 320x100 ad unit just above the fold.
  • Be as consistent as possible across screens - make it easy for users to find what they’re looking for regardless of the device they’re using. 


Keep AdSense safe for everyone

After signing up for an AdSense account, you’ll need to make sure your website is policy-compliant before getting approved.

While AdSense works behind the scenes to maximize your performance, it’s your responsibility to make sure your content and placements adhere to AdSense policies.

Violation of AdSense policies can lead to your account being suspended or your website removed from the network. To keep your account in good standing, follow these best practices:

  • Don’t click your own ads, or ask others to click them. These clicks won’t count toward revenue and may lead to an account suspension. Even if you’re interested in an ad or looking for its destination URL, clicking on your own ads is considered an invalid click and is prohibited.
  • Maximize content, not ads per page. It’s important that original, regularly updated content remains the focal point of your website.
  • Don’t use deceptive layouts. If the line between your content and your ads becomes blurred to your readers, those ads will be flagged as a violation. 
  • Take responsibility for your traffic. Use Google Analytics to quickly identify and resolve unusual traffic patterns. If you’re unsure of what counts as invalid traffic, then watch this short video.
  • Always follow the Code Implementation Guide and don’t try to modify your AdSense code.

Our Webmaster Guidelines provide detailed notes on how to avoid spammy content, and our all-in-one compliance guide is a resource to help you reduce the possibility of receiving a policy violation.

If you have any other questions about AdSense policy, check out our Help Center to learn more, or join an #AskAdSense session Thursdays at 9:30am PT to speak directly with a support specialist.

Our beginner’s series will continue with ideas on how to plan and create content that sells.

If you think content monetization is right for your website, then try AdSense.

Posted by: Jay Castro from the AdSense team 

Google AdsenseHelp Google Publishing Solutions improve its services and products

We’d like to personally invite you to share your thoughts with us in this 10-15 minute survey so that we can keep improving your experience with us.

In the past, we have used your responses to improve how we help you, the ways you interact with our product, and the types of features we offer. This year the survey is shortened and mobile-friendly. Our questions should take about 10-15 minutes to answer.

You may have received a survey by email over the last few weeks; if so, please take the time to respond to it, as we value your input.

To make sure that you're eligible to receive the next survey email, please:

Whether you’ve completed this survey before or you’re providing feedback for the first time, we’d like to thank you for sharing your valuable thoughts. We’re looking forward to your feedback!

Posted by Susie Reinecke - AdSense Publisher Happiness Team

Planet Linux AustraliaChris Smart: Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing this manually is good because it helps you understand how it’s pieced together; however, it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method, and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges, and only if it detects missing dependencies; all other Buildroot tasks run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the –help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

CryptogramCovert Channel via Two VMs

Researchers build a covert channel between two virtual machines using a shared cache.

Worse Than FailureCodeSOD: The Wrong Sacrifice

Folks, you need to choose a different sacrificial animal for your multithreading issues. Thanks to this comment Edward found in a stubborn bit of Java code, we now know the programming gods won't take our goats.

public static boolean doSomeOSStuff() {

        // Forgive me father for I have sinned - with the bag of crap written in the lines below.
        // When running multiple threads (~500) concurrently, the damn OS commands fail at a 1% rate for no apparent reason.
        // There is no error message, or any indication of failure, they just don't happen and come bite you in the ass.
        // After much frustration, I decided to try 3 things:
        // 1. sacrifice a goat. - did not go as planned.
        // 2. semaphoring this shit. did not change anything...
        // 3. after each failed command, I will try and verify if it succeeded  or not. If not I'll try it 4 more times.
        // I am now going home to take a hot shower, and cry laying on the shower floor. and /or sacrifice a few more goats.
        //
        //                                          ...
        //                       s,                .                    .s
        //                        ss,              . ..               .ss
        //                        'SsSs,           ..  .           .sSsS'
        //                         sSs'sSs,        .   .        .sSs'sSs
        //                          sSs  'sSs,      ...      .sSs'  sSs
        //                           sS,    'sSs,         .sSs'    .Ss
        //                           'Ss       'sSs,   .sSs'       sS'
        //                  ...       sSs         ' .sSs'         sSs       ...
        //                 .           sSs       .sSs' ..,       sSs       .
        //                 . ..         sS,   .sSs'  .  'sSs,   .Ss        . ..
        //                 ..  .        'Ss .Ss'     .     'sSs. ''        ..  .
        //                 .   .         sSs '       .        'sSs,        .   .
        //                  ...      .sS.'sSs        .        .. 'sSs,      ...
        //                        .sSs'    sS,     .....     .Ss    'sSs,
        //                     .sSs'       'Ss       .       sS'       'sSs,
        //                  .sSs'           sSs      .      sSs           'sSs,
        //               .sSs'____________________________ sSs ______________'sSs,
        //            .sSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS'.Ss SSSSSSSSSSSSSSSSSSSSSs,
        //                                    ...         sS'
        //                                     sSs       sSs
        //                                      sSs     sSs
        //                                       sS,   .Ss
        //                                       'Ss   sS'
        //                                        sSs sSs
        //                                         sSsSs
        //                                          sSs
        //                                           s
        
        // snipped OS manipulation code with several cases of aforementioned ritual
}
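
To be fair, the third option (verify and retry) is a legitimate pattern when the OS fails silently. A rough sketch of the idea in Python, with hypothetical stand-ins for the actual command and its verification check:

import time

def run_with_verification(run_command, verify_succeeded,
                          attempts=5, delay_s=0.5):
    """Run a silently-flaky command, re-running until verification passes."""
    for _ in range(attempts):
        run_command()           # hypothetical stand-in for the OS command
        if verify_succeeded():  # hypothetical check that it actually worked
            return True
        time.sleep(delay_s)     # give the OS a moment before retrying
    return False                # all attempts failed; fetch another goat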

Planet DebianBits from Debian: Call for Proposals for DebConf17 Open Day

The DebConf team would like to call for proposals for the DebConf17 Open Day, a whole day dedicated to sessions about Debian and Free Software, aimed at the general public. Open Day will precede DebConf17 and will be held in Montreal, Canada, on August 5th, 2017.

DebConf Open Day will be a great opportunity for users, developers and people simply curious about our work to meet and learn about the Debian Project, Free Software in general and related topics.

Submit your proposal

We welcome submissions of workshops, presentations or any other activity which involves Debian and Free Software. Activities in both English and French are accepted.

Here are some ideas about content we'd love to offer during Open Day. This list is not exhaustive, feel free to propose other ideas!

  • An introduction to various aspects of the Debian Project
  • Talks about Debian and Free Software in art, education and/or research
  • A primer on contributing to Free Software projects
  • Free software & Privacy/Surveillance
  • An introduction to programming and/or hardware tinkering
  • A workshop about your favorite piece of Free Software
  • A presentation about your favorite Free Software-related project (user group, advocacy group, etc.)

To submit your proposal, please fill the form at https://debconf17.debconf.org/talks/new/

Volunteer

We need volunteers to help ensure Open Day is a success! We are specifically looking for people familiar with the Debian installer to attend the Debian installfest, as resources for people seeking help to install Debian on their devices. If you're interested, please add your name to our wiki: https://wiki.debconf.org/wiki/DebConf17/OpenDay#Installfest

Attend

Participation in Open Day is free and no registration is required.

The schedule for Open Day will be announced in June 2017.

DebConf17 logo

Rondam RamblingsIt can't happen here

I wonder how many Turks said that to themselves shortly before Turkish voters passed a referendum to convert Turkey from a secular democracy into a Muslim dictatorship. As I have written so many times before, I'm not sure which is scarier, the similarities to Germany in 1933, or the fact that no one in the U.S. seems to be paying attention. [UPDATE:] OMFG, Donald Trump called Erdogan to

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 2, week 1

Welcome to the new school term, and we hope you all had a wonderful Easter! Many of our students are writing NAPLAN this term, so the HASS program provides a refreshing focus on something different, whilst practising skills that will help students prepare for NAPLAN without even realising it! Both literacy and numeracy are foundation skills of much of the broader curriculum and are reinforced within our HASS program as well. Meantime our younger students are focusing on local landscapes this term, while our older students are studying explorers of different continents.

Foundation to Year 3

Our youngest students (Foundation/Prep Unit F.2) start the term by looking at different types of homes. A wide selection of places can be homes for people around the world, so students can compare where they live to other types of homes. Students in integrated Foundation/Prep and Years 1 to 3 (Units F.6; 1.2; 2.2 and 3.2) start their examination of the local landscape by examining how Aboriginal people arrived in Australia 60,000 years ago. They learn how modern humans expanded across the world during the last Ice Age, reaching Australia via South-East Asia. Starting with this broad focus allows them to narrow down in later weeks, finally focusing on their local community.

Year 3 to Year 6

Students in Years 3 to 6 (Units 3.6; 4.2; 5.2 and 6.2) are looking at explorers this term. Each year level focuses on explorers of a different part of the world. Year 3 students investigate different climate zones and explorers of extreme climate areas (such as the Poles, or the Central Deserts of Australia).  Year 4 students examine Africa and South America and investigate how European explorers during the ‘Age of Discovery‘ encountered different environments, animals and people on these continents. The students start with prehistory and this week they are looking at how Ancient Egyptians and Bantu-speaking groups explored Africa thousands of years ago. They also examine Great Zimbabwe. Year 5 students are studying North America, and this week are starting with the Viking voyages to Greenland and Newfoundland, in the 10th century. Year 6 students focus on Asia, and start with a study in Economics by examining the Dutch East India Company of the 17th and 18th centuries. (Remember HASS for years 5 and 6 includes History, Geography, Civics and Citizenship and Economics and Business – we cover it all, plus Science!)

You might be wondering how on earth we integrate such apparently disparate topics for multi-year classes! Well, our Teacher Handbooks are full of tricks to make teaching these integrated classes a breeze. The Teacher Handbooks, with lesson plans and hints on how to integrate across year levels, are included within our bundles for each unit, along with the Student Workbooks, Model Answers and Assessment Guides. Teachers using these units have been thrilled at how easy it is to use our material in multi-year level classes, whilst knowing that each student is covering curriculum-appropriate material for their own year level.

Valerie AuroraPreventing jet lag, one hour per day

My most miserable jet lag experience was the afternoon I struggled for over an hour to liberate my rental car from a tiny paid parking lot in Chamonix, a ski resort town in France. I distinctly remember the feelings of hopeless despair and confusion as I poked at the buttons on the parking machine and made a seemingly endless pilgrimage around the local shops, before I finally acquired the 5 euro bill I needed to effect my escape.

That trip was for “fun,” but nowadays I travel mostly for work: I teach a particularly complex and difficult workshop (the Ally Skills Workshop). Travel is also difficult and painful for me, so I like to spend as little time away from home as possible. This means that, no matter what the time difference is between San Francisco and my destination, I need to be fully awake and mentally sharp during business hours within a day of my arrival.

About a year ago, I started changing my time zone before I left on my trip. Each day before the trip, I get up one hour earlier or later, until on the day I leave, I am already getting up at the same time I’ll need to be awake at my destination. So for a trip from San Francisco to New York, I’ll get up one hour earlier for three days before my trip. And if I can, I’ll start transitioning back to my home time zone during my trip. One hour a day is still too quick for a full adjustment – I can still feel my home circadian rhythm kicking in for about 10 days after this – but it feels like just a little bit of restless or tiredness a couple of times a day, not the overwhelming sense of doom and despair I remember from that parking lot in Chamonix.
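
If it helps to see the arithmetic laid out, here is a toy sketch of that schedule in Python (purely illustrative; the function and times are made up):

from datetime import datetime, timedelta

def pre_trip_wakeups(usual_wakeup="07:00", hours_east=3):
    """Print the wake-up time for each day before an eastward trip."""
    wake = datetime.strptime(usual_wakeup, "%H:%M")
    for days_left in range(hours_east, 0, -1):
        shift = hours_east - days_left + 1  # hours earlier than usual
        early = wake - timedelta(hours=shift)
        print(f"{days_left} day(s) before departure: get up at {early:%H:%M}")

pre_trip_wakeups("07:00", 3)  # San Francisco to New York, 3 hours east
# 3 day(s) before departure: get up at 06:00
# 2 day(s) before departure: get up at 05:00
# 1 day(s) before departure: get up at 04:00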

“Get up one hour earlier or later each day” sounds simple, but as the jokes about Daylight Saving Time transition show, doing this for even one day can be difficult. If you live with other people, care for others, or have set work hours, changing your time zone while at home may be difficult or impossible. I set my own hours and I live with my boyfriend, who is incredibly tolerant of me banging around in the middle of the night, or going to bed in the middle of the afternoon. Even so, I just miss spending time with him and my local friends when I’m adjusting my time zone, and being awake alone in the dark is no fun. So, it is by no means a perfect solution even for me – just slightly better than staggering around confusedly at the nadir of my circadian cycle.

I have a collection of tricks that help me stick to my schedule; they might work for you or you might find something else that works better for you.

Going to sleep earlier

Most of my travel is east, and for physiological reasons, it’s harder for most people to get up earlier than to go to bed later. Here are the things that help me go to sleep earlier, in order from first thing I try to last thing I try. On a good night I’ll only go through half the list before falling asleep.

Taking melatonin: The ideal timing and dose of melatonin for going to sleep earlier is 0.3 mg (more is NOT better), one to three hours before you intend to fall asleep. Larger doses don’t help and can make you sleepy for an entire day.

Dimming and darkening: Closing the curtains and dimming the lights two hours before my goal sleep time helps. I aim for complete darkness one hour before goal sleep time. Even if I don’t feel tired when I do this, I’ll start feeling tired soon. I will also adjust my F.lux schedule to match my sleep schedule instead of local sunlight. You might also try blue-blocking glasses or sunglasses if you have to be out in the light.

Lying down: Again, if I don’t feel tired when I do this, I’ll start feeling tired soon.

Reading a familiar book: I’ve read and reread every Jane Austen novel multiple times, so I’m never tempted to keep reading after I start to feel sleepy. My Kindle has a built-in light which on the lower settings does not interfere with the effect of darkness. The key here is: low light, soothing distraction from your thoughts, no incentive to keep going after you feel sleepy.

Listening to a familiar book read by a computer: Even more soporific is listening to a computer read Jane Austen to me. I don’t like listening to new books this way because I get stressed about missing out on the words (I have a slightly hard time understanding spoken words) but if it’s something I know backwards and forwards, I find the emotionless robot speech very soothing, especially at a slow speed. Currently, I use iOS’s screenreader feature with the Kindle app; before that I used the Kindle with built-in text-to-speech (now removed from current versions). I suspect that most audiobooks are rather too well read to be as sleep-inducing as the computer-read version. I like switching up the voice occasionally, especially if they’ve got an accent from the country I’m traveling to.

Listening to sleep hypnosis: A friend gave me some sleep hypnosis recordings from Andrew Johnson and I love them. My absolute favorite time to use them is when I’m trying to sleep on a plane and I don’t want to be incapacitated in any way by taking supplements or drugs. I use them with my noise-cancelling earbuds, so that I can sleep on the plane with them in while listening to my sleep hypnosis recordings. But sleep hypnosis also works for going to sleep earlier when you’re stressed out or worrying.

Taking zolpidem: The side effects of zolpidem make me avoid taking it until I’ve been trying to sleep for at least an hour and failing. I’ll often bite the 10mg pill in half, take half now, and take the second half only if I’m still awake in an hour.

I used to take Benadryl or Unisom to help go to sleep, but the side effects are too negative for me, so I don’t do this any more. I haven’t tried marijuana edibles, but lots of my friends swear by them for going to sleep. Drinking alcohol makes me feel sleepy, but usually I wake up when it wears off, which doesn’t help. Sometimes watching a boring TV show will help me go to sleep when I’m sleeping at my usual time, but it doesn’t seem to work when I’m fighting my own body clock. The Bob Ross painting shows are a good choice for a lot of people.

Getting up earlier

Bright light: A sunrise lamp can really help with waking up if you sleep alone or if your partner doesn’t mind the light. If that doesn’t work for you, turning on the lights ASAP in the room you’re spending time in helps. I like to do a gradual increase of light that mimics the sunrise. This is a situation where you want blue light. I also adjust my F.lux screen temperature to mimic sunrise for my schedule.

Showering immediately: Taking a shower as soon as I get up is super helpful to distract me from the miserable sad feeling in my body. I like having minty-smelling soap and similar “refreshing” smells.

Listening to energetic music: I’m a techno/electronica girl; putting in the headphones and cranking Röyksopp makes my artificial “morning” a lot more bearable.

Taking a walk: As soon as it is light outside, I take a walk. There’s some kind of weird perverse pleasure to being up and about at dawn that helps with my energy, and the earlier I can get real sunlight in my eyes, the better. Physical exercise, bracing air, interacting with people, seeing new things – all of these things help in a way that isn’t as effective as going to a dark empty indoor gym.

Do annoying work: For me, I’m already grumpy and mad and there’s nothing fun I can do anyway because everyone I normally hang out with is asleep, so that’s the perfect time to do annoying tasks that make me grumpy or mad. This is often accounting or tax-related. An additional benefit is that I often get angry, which keeps me awake. Doing something fun and enjoyable will often result in me relaxing and feeling sleepy, so I save that for closer to bed time.

Communicating with friends in other time zones: If I have friends in other time zones who are awake, I send them pictures or chat or talk on the phone if they’re amenable. It helps to feel less alone.

Eating on schedule: Your digestive system is part of your circadian rhythm, and eating on schedule with your new sleep/wake schedule helps. It’s not fun to eat when I’m not hungry, but it helps with waking up as well as adjusting to the new schedule. It’s hard to sleep if my stomach has decided it’s time to eat, so I eat when I am awake to avoid waking up hungry later.

Eating dark chocolate: Eating 70% or higher cacao content chocolate gives me a little bit of sugar and the right kind of caffeine to feel a little more awake and happy. The taste is also interesting and complex and helps me feel awake and interested.

Drinking coffee or tea: Most of the time, coffee and tea make me nauseous and jittery while leaving my tiredness and depression intact. In extreme cases, I will drink a half-caff cappucino or mocha, but I usually avoid that unless I’m traveling 5 or more hours east.

Taking pseudoephedrine: I discovered quite by accident that, for me, pseudoephedrine completely stops the feelings of depression and sadness I have when I’m getting up too early. When I’m up at 2am the day I fly to Europe, a 12-hour Sudafed makes an incredible improvement in my quality of life. An additional benefit of the 12-hour Sudafed is that I start to feel tired when it wears off, which helps with going to sleep earlier. None of this is surprising when you remember that pseudoephedrine is related to methamphetamine.

I’ve taken armodafinil before and it seemed to work fine with no side effects, but I haven’t tried it for jet lag. I assume it and modafinil work great, since they were kind of invented to keep people awake with low side effects.

Sleeping later

I don’t travel west as often, and usually it is much easier for me to adapt my schedule. But when I do, a difficult challenge is waking up just a few hours before I’m supposed to get up, when I can’t take a sleeping pill because I’ll be groggy later on. Here are some tips for sleeping later, and for going back to sleep when you have to be up in a few hours.

Take melatonin a few hours before waking up: Melatonin can not only help you go to sleep earlier, it can also help you sleep later. I set my alarm for 1-2 hours before I suspect I will wake up (my usual wake time) and take 0.3 mg of melatonin, then read a book in the dark until I go back to sleep. The major downside of this approach is that it gives many people exceptionally vivid dreams. For me, this means I spend the last few hours of sleep having intense dreams in which I am determinedly trying to get some specific task done, like writing an essay or unpacking my suitcase, which I find exhausting and frustrating. It also means waking up at least once in the night. I haven’t tried time-release melatonin but it sounds like it would work better than this jerry-rigged situation.

Blocking light: Even a tiny shaft of sunlight between the curtains can ruin my attempt to sleep in. I cover not only my eyes but also my skin – sometimes it feels like the sun on my skin is waking me up, and apparently the skin has photoreceptors too?

Use any non-pharmaceutical going to sleep aid: Keep it dark, read a boring book, listen to sleep hypnosis, keep lying down, etc. When I’m having a bad night for anxiety, I’ll set up my iPhone with the screenreader and Pride and Prejudice and put in my earbuds and just leave it playing in my ears all night. (This is how I got through the two weeks following the 2016 U.S. presidential election.)

Believe in stage 1 sleep: The first stage of sleep often feels like I’m still awake – I can sense what is going on around me, remember things that happen, feel the passage of time, etc. – but I’m actually technically asleep. This kind of sleep isn’t fantastic, and no one can do well on light sleep alone, but it does serve some of the purposes of sleep and it makes me feel more rested and restored than not sleeping at all. I often get only stage 1 sleep when I’m trying to sleep on a plane. For me, knowing and trusting that stage 1 sleep is effective helps a lot with relaxing and continuing to get some sleep instead of none at all.

Staying up later

Does anyone really need advice on staying up later? I think most people get lots of practice at this. Short version: do interesting, exciting things, take stimulants, get bright light, listen to exciting music, talk to people, read thrillers, eat food. I will also do annoying frustrating work like accounting to keep me from getting too relaxed and feeling sleepy.

Your tips?

Do you have any tips for adjusting your time zone? Leave them in the comments!



Rondam Ramblings: David Dao did nothing wrong

I am dumbfounded that this is even in dispute any more.  Maybe an analogy will help. Consider the following situation: you have rented an apartment.  You have signed a lease.  You have paid your first month's rent.  You have moved in.  You are putting your artwork up on the wall when there is a knock at the door. It is Jim from the management company.  He explains to you that there has been a

Cory Doctorow: Read: Chapter 3 of Walkaway, in which a university rises from the ashes

There’s only 8 days until the publication of Walkaway (stil time to pre-order signed hardcovers: US, UK), and Tor.com has just published a sneak peek at chapter 3: “Takeoff.”

I’m getting ready to hit the road and tour with the book: 20+ cities in the US and Canada are announced, with more to come in the UK!

Chapter 3 starts with a visit to the bombed out remains of Walkaway U, a guerrilla scientific research facility that’s been taken out by Hellfire missiles.


The ashes of Walkaway U were around Iceweasel. It was an unsettled climate-ish day, when cloudbursts swung up out of nowhere, drenched everything, and disappeared, leaving blazing sun and the rising note of mosquitoes. The ashes were soaked and now baked into a brick-like slag of nanofiber insulation and heat sinks, structural cardboard doped with long-chain molecules that off-gassed something alarmingly, and undifferentiated black soot of things that had gotten so hot in the blaze that you could no longer tell what they’d been.

There were people in that slag. The sensor network at WU had survived long enough to get alarmed about passed-out humans dotted around, trapped by blazes or gases. There was charred bone in the stuff that crept around her mask and left a burnt toast taste on her tongue. She’d have gagged if it hadn’t been for the Meta she’d printed before she hit the road.

The Banana and Bongo was bigger than the Belt and Braces had ever been— seven stories, three workshops, and real stables for a variety of vehicles from A.T.V. trikes to mecha-walkers to zepp bumblers, which consumed Etcetera for more than two years, as he flitted through the sky, couch-surfing at walkaway camps and settlements across the continent. She’d thought about taking a mecha to the uni, because it was amazing to eat the countryside in one, the suit’s wayfinders and lidar finding just the right place to plant each of its mighty feet, gyros and ballast dancing with gravity to keep it upright over the kilometers.

But mechas had no cargo space, so she’d taken a trike with balloon tires as big as tractor wheels, tugging a train of all-terrain cargo pods of emergency gear. It took four hours to reach the university, by which time, the survivors had scattered. She lofted network-node bumblers on a coverage pattern, looking for survivors’ radio emissions. The bumblers self-inflated, but it was still sweaty work getting them out of their pod and into the air, and even though she worked quickly—precise Meta-quick, like a marine assembling a rifle blindfolded—everything was smeared with blowing soot by the time they were in the sky.

“Fuck this,” she said into her breather, and turned the A.T.V. and its cargotrain around in a rumbling donut. The survivors would be nearby, upwind of the ash plume, and out of range of the heat that must have risen as the campus burned. She’d seen a demo of a heat-sunk building going up before. It had been terrifying. In theory, graphene-doped walls wicked away the heat, bringing it to the surface in a shimmer, keeping the area around the fire below its flash point. The heat sink was itself less flammable than everything else they used for building materials, so if the fire went on too long, the heat sinks heated up to the flash point of the walls, and the entire building went up in a near-simultaneous whoom. In theory, you couldn’t get to those temperatures unless eight countermeasures all failed, strictly state-actor-level arson stuff.


Walkaway: “Takeoff” [Cory Doctorow/Tor.com]

Planet Debian: Sylvain Beucler: Practical basics of reproducible builds 3

On my quest to generate reproducible standalone binaries for GNU FreeDink, I met new friends but currently lie defeated by an unexpected enemy...

Episode 1:

  • compiler version needs to be identical and recorded
  • build options and their order need to be identical and recorded
  • build path needs to be identical and recorded (otherwise debug symbols - and BuildIDs - change)
  • diffoscope helps checking for differences in build output

Episode 2:

  • use -Wl,--no-insert-timestamp for .exe (with old binutils 2.25 caveat)
  • no need to set a build path for stripped .exe (no ELF BuildID)
  • reprotest helps checking build variations automatically
  • MXE stack is apparently deterministic enough for a reproducible static build
  • umask needs to be identical and recorded
  • file timestamps need to be set and recorded (more on this in a future episode)

First, the random build differences when using -Wl,--no-insert-timestamp were explained.
Analysing the PE headers with analysePE.py shows random build dates:

$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello && analysePE.py hello | tee /tmp/hello.log-$(date +%s); sleep 1' 'hello'
$ diff -au /tmp/hello.log-1*
--- /tmp/hello.log-1490950327   2017-03-31 10:52:07.788616930 +0200
+++ /tmp/hello.log-1523203509   2017-03-31 10:52:09.064633539 +0200
@@ -18,7 +18,7 @@
 found PE header (size: 20)
     machine: i386
     number of sections: 17
-    timedatestamp: -1198218512 (Tue Jan 12 05:31:28 1932)
+    timedatestamp: 632430928 (Tue Jan 16 09:15:28 1990)
     pointer to symbol table: 4593152 (0x461600)
     number of symbols: 11581 (0x2d3d)
     size of optional header: 224
@@ -47,7 +47,7 @@
     Win32VersionValue: 0
     size of image (memory): 4640768
     size of headers (offset to first section raw data): 1536
-    checksum (for drivers): 4927867
+    checksum (for drivers): 4922616
     subsystem: 3
         win32 console binary
     DllCharacteristics: 0

Stephen Kitt mentioned 2 simple patches (1 2) fixing uninitialized memory in binutils.

These patches fix the variation and were submitted to MXE (pull request).


Next was playing with compiler support for SOURCE_DATE_EPOCH (which e.g. sets __DATE__ macros).
The FreeDink DFArc frontend historically displays a build date in the About box:

    "Build Date: %s\n", ..., __TDATE__

Sadly, support is only landing upstream in GCC 7 :/
I had to remove that date.


Now comes the challenging parts.

All my tests with reprotest checked out. I started writing a reproducible build environment based on Docker (git browse).
At first I could not run reprotest in the container, so I reworked it with SSH support, and reprotest validated determinism.
(I also generate a reproducible .zip archive, more on that later.)

So far so good, but was the release identical when running reprotest successively in the different environments?
(reminder: this is a .exe build that is insensitive to varying path, hence consistent in a full reprotest)

$ sha256sum *.zip
189d0ca5240374896c6ecc6dfcca00905ae60797ab48abce2162fa36568e7cf1  freedink-109.0-bin-buildsh.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-docker.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-reprotest-docker.zip
37007f6ee043d9479d8c48ea0a861ae1d79fb234cd05920a25bb3db704828ece  freedink-109.0-bin-reprotest-null.zip

Ouch! Even though both the Docker image and my host are running Stretch, there are differences.


For the two host builds (direct and reprotest), there is a subtle but simple difference: HOME.
HOME is invariably non-existent under reprotest, while my normal compilation environment has an existing home directory (duh!).

This caused a subtle bug when cross-compiling with mingw and wine-binfmt:

  • existing home: ./configure attempts to run conftest.exe, wine can create ~/.wine, conftest.exe runs with binfmt emulation, configure assumes:
  checking whether we are cross compiling... no
  • non-existing home: ./configure attempts to run conftest.exe, wine can't create ~/.wine, conftest.exe fails, configure assumes:
  checking whether we are cross compiling... yes

The respective binaries were very different notably due to a different config.h.
This can be fixed by specifying --build in addition to --host when calling ./configure.
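
For example (the triplets here are illustrative; use whatever your toolchain actually targets):

$ ./configure --build=x86_64-pc-linux-gnu --host=i686-w64-mingw32.static

With both triplets given explicitly, configure no longer needs to run conftest.exe to guess whether it is cross-compiling, so the result stops depending on whether wine can create ~/.wine.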

I suggested that reprotest run one of its tests with a valid HOME (#860428).


Now comes the big one: after the fix I still got:

$ sha256sum *.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-buildsh.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-chroot.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-reprotest-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-null.zip

There is consistency on my host, and consistency within docker, but both are different.
Moreover, all the .o files were identical, so something must have gone wrong when compiling the libs, that is, in MXE.

After many checks it appears that libstdc++.a is different.
Just overwriting it gets me a consistent FreeDink release on all environments.
Still, when rebuilding it (make gcc), libstdc++.a always has the same environment-dependent checksum.

45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # host
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # docker

The two libraries are very different; there's barely any blue in hexcompare.

At that time, I realized that the Docker "official" Debian images are not really official, as Joey explains.
Could it be that Docker maliciously tampered with the compiler, as Ken Thompson warned in 1984?

Well, before jumping to conclusions, let's mix & match.

  • First I rsync a copy of my Docker filesystem and run it in a host chroot with a reset environment.
$ sudo env -i /usr/sbin/chroot chroot-docker/
$ exec bash -l
$ cd /opt/mxe
$ touch src/gcc.mk
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709464 KiB    7m2.039s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with host builds
  • Then I import my previous reprotest chroot (plain debootstrap) in Docker:
$ sudo tar -C chroot -c . | docker import - chroot-debootstrap
$ docker run -ti chroot-debootstrap /bin/bash
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ touch src/gcc.mk
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709412 KiB    7m6.608s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with docker builds

So, AFAICS when building with:

  • exactly the same kernel
  • exactly the same GCC sources
  • exactly the same host binaries

then depending on whether running in a container or not we get a consistent but different libstdc++.a.

This kind of issue is not detected with a simple reprotest build, as that only tests variations within a fixed build environment.
This is quite worrisome: I intend to use a container to control my build environment, but I can't guarantee that the container technology will be exactly the same 5 years from now.

All my setup is simple and available for inspection at https://git.savannah.gnu.org/cgit/freedink.git/tree/autobuild/freedink-w32-snapshot/.

I'd very much welcome enlightenment :)

Planet Debian: Norbert Preining: Systemd again (or how to obliterate your system)

Ok, I have been silent about systemd and its being forced onto us in Debian like force-feeding geese for foie gras. I have complained about systemd a few times (here and here), but what I read today really made me lose the last drops of trust I had in this monster-piece of software.

If you are up for a really surprising read about the main figure behind systemd, enjoy this github issue. It’s about a bug that simply does the equivalent of rm -rf / in some cases. The OP gave clear indications, the bug was fixed immediately, but then a comment from the God Poettering himself appeared that made the cup run over:

I am not sure I’d consider this much of a problem. Yeah, it’s a UNIX pitfall, but “rm -rf /foo/.*” will work the exact same way, no?Lennart Poettering, systemd issue 5644

Well, no. One minute of testing would have shown him that this is not the case. But we entrust this guy with the whole management of the init process, servers, logs (and soon our toilet and fridge management, X, DNS, whatever you ask for).

There are two issues here: One is that such a bug has probably been lurking in systemd for years. The reason is simple – we pay with these kinds of bugs for the incredible complexity increase of an init process that takes over too many services. Referring back to the Turing Award lecture given by Hoare, we see that systemd took the latter path:

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.
– Antony Hoare, Turing Award Lecture 1980

The other issue is how systemd developers deal with bug reports. I have reported several cases here; this is just another one: close the issue for comments, shut up, sweep it under the carpet.

(Image credit: The musings of an Indian Faust)

Planet Debian: Ross Gammon: My March 2017 Activities

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn’t a lot of spare time.

Debian

  •  Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
  • Uploaded the latest version of abcmidi (also to experimental).
  • Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
  • Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.

Ubuntu

  • Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my ppa, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
  • Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won’t last very long, because it is a 32 bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily, until I set up something remote). Set up LetsEncrypt with the wonderful Certbot program.
  • Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.

Other

  • Measured up the new model railway layout and documented it in xtrkcad.
  • Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
  • Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish Municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
  • Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read journals. I will eventually need to import all the others I have listed in a CSV spreadsheet that was originally exported from the commercial (Windows-only) Custodian software.

Plan status from last month & update for next month

Debian

For the Debian Stretch release:

  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress

Generally:

  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. – In Progress
  • Begin working again on all the new stuff I want packaged in Debian.

Ubuntu

  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – In progress
  • Test Len’s work on ubuntustudio-controls – Done
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release. – Done
  • Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.

Other

  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress

Sociological Images: Women who perform femininity are judged to be less suited to science

Sexism in American society has been on the decline. Obstacles to female-bodied people excelling in previously male-only occupations and hobbies have lessened. And women have thrived in these spaces, sometimes even overtaking men both quantitatively and qualitatively.

Another kind of bias, though, has gotten worse: the preference for masculinity over femininity. Today we like our men manly, just like we used to, but we like our women just a little bit manly, too. This is true especially when women expect to compete with men in masculine arenas.

A recent study by a team of psychologists, led by Sarah Banchefsky, collected photographs of 40 male and 40 female scientists employed in STEM departments of US universities. Fifty respondents were told they were participating in a study of “first impressions” and were asked to rate each person according to how masculine or feminine they appeared. They were not told the subjects’ occupations. They were then asked to estimate the likelihood that each person was a scientist, and then the likelihood that each was an early childhood educator.

Overall, women were rated as more feminine than men and less likely to be scientists. Within the group of women, however, perceived femininity was also negatively correlated with the estimated likelihood of being a scientist and positively correlated with the likelihood of being an educator. In other words, both having a female body and appearing feminine made a woman seem less inclined toward, or suited to, science. The same results were not found for men.


Banchefsky and her colleagues conclude that “subtle variations in gendered appearance alter perceptions that a given woman is a scientist” and this has important implications for their careers:

First, naturally feminine-appearing young women and those who choose to emphasize their femininity may not be encouraged or given opportunities to become scientists as a result of adults’ beliefs that feminine women are not well-suited to the occupation.

Second, feminine-appearing women who are already scientists may not be taken as seriously as more masculine-appearing ones. They may have to overperform relative to their male and masculine female peers to be recognized as equally competent. Femininity may, then, cost them job opportunities, promotions, awards, grants, and valuable collaboration.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Surveillance and our Insecure Infrastructure

Since Edward Snowden revealed to the world the extent of the NSA's global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other -- smaller and poorer -- more totalitarian countries to spy on political opponents, dissidents, human rights defenders; the press in Toronto has documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there's a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries -- mostly European -- that allow those arms manufacturers to sell those systems.

There's a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device and they're used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There's no security in the connection protocols between the phones and the towers.

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone -- whether it's a country's secret police, a rival national intelligence organization, or criminal group -- to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn't static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they're also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators -- Senator Ron Wyden and Rep Ted Lieu -- sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country's insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn't doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it's also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Worse Than Failure: What Equals Equals


Monday morning, 10:00AM. As per usual, today's protagonist, Merv, got some coffee and settled in for his usual Monday morning routine of checking Facebook and trying to drag his brain into some semblance of gear. As he waited, the least interesting conversation ever floated to his ears from the hallway:

"It's like, yak butter, I guess? I put it in my coffee, it's supposed to do wonders."

"No way, man, I can't do dairy."

"Yeah, but it's not dairy, it's yak."

"Like, yak meat?"

"No, like, butter from yak milk."

"Then I can't have it. I'm allergic to dairy."

Merv was so stunned by the inanity of the conversation that his groggy brain didn't clue in on the obvious sign. Of course, the non-dairy drinker would be Sergey. Normally, Merv would have avoided the man this early on a Monday. He had a way of coming up with the weirdest, most bizarre problems when he debugged code, and he always ran straight to Merv to untangle them.

"Merv!" Sergey appeared, leaned into his cube. "Just the guy I was looking for. Do you have a minute?"

Begrudgingly, Merv conceded that he did, in fact, have a minute. And so, a few moments later, he found himself standing behind Sergey's chair, trying to remember what exactly this product did while Sergey explained.

"See, this data here, watch carefully while I run it—there it is. See that? It changed. I don't know why or when, but something's overwriting the data I put there. So I figured it must be here, but look, there's no assignment in this method, or this one."

"Can't you just debug through it?" asked a sleepy Merv.

"That's the weird part: it doesn't do it if I build in debug mode."

Merv frowned. "Run that by me again?"

So Sergey did. Of course, it was silly to step through a release build. There were no debugging symbols, so it was hard to tell what was going on. Merv showed him how to enable debugging symbols in a release build, but the compiler's optimizations made it hard to step through something like this. The more they dug, the weirder the behavior seemed, and the saltier the language got around the monitor.

Could it be an optimization bug in the compiler? Maybe memory values weren't being set correctly in the release build? Was it a race condition where the debugger was changing the timing? Nothing seemed right. Finally, as Merv's brain shifted into high gear, he looked over the exact lines the change was being made in.

"Here, the copy is made. I can see the value. But as we step through the next few lines, it changes, without being touched."

The code was just a simple assignment using a basic equals sign. Feeling a headache coming on, Merv tapped the equals on screen. "What exactly does that copy assignment operator do, for this class?"

They went digging. What they found was beyond the pale: macro magic changed how the function worked depending on how it was compiled. In debug mode, it was a deep copy. In release mode, it was a shallow copy; the returned object was little more than a pointer to the old one. So in release mode, changing the original (off in another thread) also changed the copy, while in debug mode, they were separate objects with their own lifetimes.
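
A minimal sketch of that kind of build-mode-switched assignment operator (reconstructed for illustration; the real class, macro and member names in the story are unknown) might look like this in C++:

#include <cstddef>
#include <cstring>

class Buffer {
public:
    Buffer& operator=(const Buffer& other) {
#ifdef NDEBUG
        // Release build: shallow copy. Both objects now alias one buffer,
        // so a write through 'other' (say, from another thread) silently
        // shows up in this "copy" too -- and the old buffer leaks.
        data_ = other.data_;
        size_ = other.size_;
#else
        // Debug build: deep copy. Independent lifetimes, no aliasing --
        // which is why the bug vanished whenever Sergey built for debugging.
        delete[] data_;
        size_ = other.size_;
        data_ = new char[size_];
        if (size_)
            std::memcpy(data_, other.data_, size_);
#endif
        return *this;
    }
private:
    char*       data_ = nullptr;
    std::size_t size_ = 0;
};

Since a debugger almost always runs against the deep-copy variant, no amount of stepping through a debug build could ever reproduce the release-mode aliasing.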

"I need more coffee," Merv muttered.

"Me too, buddy," replied Sergey.

Neither of them budged.

"Who wrote this?" demanded Merv.

The commit logs revealed the name: Patrice, who had once committed a 3000 line monstrosity all in one commit that Merv had had to rewrite from scratch. That had taken him four mugs of coffee to get through. Faced with the prospect of digging through the whole codebase to every assignment and guess if they wanted a deep or shallow copy, Merv knew the right answer: he'd have to put in a catering order with the local coffeehouse. He'd break the machine before he was done.


Planet Debian: Russell Coker: More KVM Modules Configuration

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC, this could be bad for you, but for servers it’s OK, as that sort of upgrade is uncommon and only done with some planning (such as creating an initramfs just for the migration).
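
Note that the new MODULES setting only takes effect once the initramfs is regenerated, e.g.:

update-initramfs -u -k all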

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

Planet Debian: Norbert Preining: Calibre on Debian

Calibre is the prime open source e-book management program, but the Debian releases often lag behind the official releases. Furthermore, the Debian packages remove support for rar packed e-books, which means that several comic book formats cannot be handled.

Thus, I have published a local repository of calibre targeting Debian/sid, with binaries for amd64, where rar support is enabled and, as far as possible, the latest version is included.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13
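
A sketch of the client-side setup (assuming the key is available from a public keyserver; adjust to taste):

sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 0x6CACA448860CDC13
sudo apt-get update
sudo apt-get install calibre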

Enjoy

Rondam Ramblings: Civil disobedience and Godwin's law

Towards the end of a spirited discussion on my last post, occasional guest-blogger and long-time reader Don wrote: [T]he pilot of an airline telling you to get off his plane, is nothing like allowing the state to take you from your home and gas you to death simply because of the circumstances of your birth. You can resist one, and not the other, without much moral confusion. I'm ashamed at you


Harald Welte: Things you find when using SCTP on Linux

Observations on SCTP and Linux

When I was still doing Linux kernel work with netfilter/iptables in the early 2000's, I was somebody who actually regularly had a look at the new RFCs that came out. So I saw the SCTP RFCs, SIGTRAN RFCs, SIP and RTP, etc. all released during those years. I was quite happy to see that for new protocols like SCTP and later DCCP, Linux quickly received a mainline implementation.

Now most people won't have used SCTP so far, but it is a protocol used as transport layer in a lot of telecom protocols for more than a decade now. Virtually all protocols that have traditionally been spoken over time-division multiplex E1/T1 links have been migrated over to SCTP based protocol stackings.

Working on various Open Source telecom related projects, I of course come into contact with SCTP every so often. Particularly some years back when implementing the Erlang SIGTRAN code in erlang/osmo_ss7, and most recently now with the introduction of libosmo-sigtran with its OsmoSTP, both part of the libosmo-sccp repository.

I've also had to work with various proprietary telecom equipment over the years. Whether that's some eNodeB hardware from a large brand telecom supplier, or the MSC of some other vendor, they all had one thing in common: nobody seemed to use the Linux kernel SCTP code. They all used proprietary implementations in userspace, using RAW sockets on the kernel interface.

I always found this quite odd, knowing that this is the route you have to take on proprietary OSs without native SCTP support, such as Windows. But on Linux? Why? Rumor has it that people find the Linux SCTP implementation not mature enough, but hard evidence is hard to come by.

As much as it pains me to say this, the kind of Linux SCTP bugs I have seen within the scope of our work on Osmocom seem to hint that there is at least some truth to this (see e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1308360 or https://bugzilla.redhat.com/show_bug.cgi?id=1308362).

Sure, software always has bugs and will have bugs. But we at Osmocom are 10-15 years "late" with our implementations of higher-layer protocols compared to what the mainstream telecom industry does. So if we find something, and we find it already during R&D of some userspace code, not even under load or in production, then that seems a bit unsettling.

With all their market power and plenty of Linux-based devices in the telecom sphere, one would have expected those large telecom suppliers to invest in improving the mainline Linux SCTP code. I mean, they all use UDP and TCP of the kernel, so it works for most of the other network protocols in the kernel, but why not for SCTP? I guess it comes back to the fundamental lack of understanding of how open source development works: that it is something that the given industry/user base must invest in jointly.

The latest discovered bug

During the last months, I have been implementing SCCP, SUA, M3UA and OsmoSTP (a Signal Transfer Point). They were required for an effort to add 3GPP compliant A-over-IP to OsmoBSC and OsmoMSC.

For quite some time I was seeing erratic behavior where at some point the STP would not receive/process a given message sent by one of the clients (ASPs) connected. I tried to ignore the problem initially until the code matured more and more, but the problems remained.

It became even more obvious when using Michael Tuexen's m3ua-testtool, where sometimes even the most basic test cases, consisting of sending + receiving a single pair of messages like ASPUP -> ASPUP_ACK, were failing. And when the test case was re-tried, the problem often disappeared.

Also, whenever I tried to observe what was happening by means of strace, the problem would disappear completely and never re-appear until strace was detached.

Of course, given that I've written several thousand lines of new code, it was clear to me that the bug must be in my code. Yesterday I was finally prepared to accept that it might actually be a Linux SCTP bug. Not being able to reproduce the problem on a FreeBSD VM also pointed clearly in this direction.

Now I could simply have collected some information and filed a bug report (which some kernel hackers at RedHat have thankfully invited me to do!), but I thought my use case was too complex. You would have to compile a dozen different Osmocom libraries, configure the STP, run the scheme-language m3ua-testtool in guile, etc. - I guess nobody would have bothered to go that far.

So today I tried to implement a test case that reproduces the problem in plain C, without any external dependencies. And for many hours, I couldn't make the bug show up. I tried to be as close as possible to what was happening in OsmoSTP: I used non-blocking mode on client and server, used the SCTP_NODELAY socket option, and used the sctp_recvmsg() library wrapper to receive events, but the bug was not reproducible.

Some hours later, it became clear that there was one setsockopt() in OsmoSTP (actually, libosmo-netif) which enabled all existing SCTP events. I did this at the time to make sure OsmoSTP has the maximum insight possible into what's happening on the SCTP transport layer, such as address fail-overs and the like.

As it turned out, adding that setsockopt() (the SCTP_EVENTS socket option) to my test code made the problem reproducible. After playing around with the individual flags, it seems that enabling the SENDER_DRY_EVENT flag makes the bug appear.
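
For reference, such an event subscription with the lksctp API looks roughly like this (an illustrative sketch, not the actual test program; the struct and option names are from the Linux <netinet/sctp.h>):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

/* Illustrative sketch: enable SCTP event notifications on socket 'fd'.
 * Subscribing to sender_dry events is what appeared to trigger the bug. */
static int enable_sctp_events(int fd)
{
	struct sctp_event_subscribe ev;

	memset(&ev, 0, sizeof(ev));
	ev.sctp_data_io_event = 1;      /* per-message SNDRCV info */
	ev.sctp_association_event = 1;  /* association up/down changes */
	ev.sctp_sender_dry_event = 1;   /* send queue fully drained */

	return setsockopt(fd, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
}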

You can find my detailed report about this issue in https://bugzilla.redhat.com/show_bug.cgi?id=1442784 and a program to reproduce the issue at http://people.osmocom.org/laforge/sctp-nonblock/sctp-dry-event.c

Inside the Osmocom world, luckily we can live without the SENDER_DRY_EVENT and a corresponding work-around has been submitted and merged as https://gerrit.osmocom.org/#/c/2386/

With that work-around in place, suddenly all the m3ua-testtool and sua-testtool test cases are reliably green (PASSED) and OsmoSTP works more smoothly, too.

What do we learn from this?

Free Software in the Telecom sphere is getting too little attention. This is true even for those small portions of telecom-relevant protocols that ended up in the kernel, like SCTP or, more recently, the GTP module I co-authored. They are getting too little attention in development, even less in maintenance, and people seem to focus more on not using them rather than on fixing and maintaining what is there.

It makes me really sad to see this. Telecoms is such a massive industry, with billions upon billions of revenue for the classic telecom equipment vendors. Surely they would be able to co-invest in some basic infrastructure, like proper and reliable testing / continuous integration for SCTP. More recently, we see millions upon millions of VC cash burned by buzzword-flinging companies doing "NFV" and "SDN". But they would rather reimplement network stacks in userspace than fix, complete and test those little telecom infrastructure components which we have so far, like the SCTP protocol :(

Where are the contributions to open source telecom parts from Ericsson, Nokia (former NSN), Huawei and the like? I'm not even dreaming about the actual applications / network elements, but merely the maintenance of something as basic as SCTP. To be fair, Motorola was involved early on in the Linux SCTP code, and Huawei contributed a long series of fixes in 2013/2014. But that's not the kind of long-term maintenance contribution that one would normally expect from the primary interest group in SCTP.

Finally, let me thank the Linux SCTP maintainers. I'm not complaining about them! They're doing a great job, given the arcane code base and the fact that they are not working for a company that has SCTP-based products as its core business. I'm sure they would love more support and contributions from the telecom world, too.

Planet Debian: Antoine Beaupré: Montreal Bug Squashing Party report

A summary of this article has also been translated into French, thanks!

Last Friday, a group of Debian users, developers and enthusiasts met at the Koumbit.org offices for a bug squashing party. We were about a dozen people of various levels: developers, hackers and users.

I gave a quick overview of Debian packaging using my quick development guide, which proved to be pretty useful. I made a deb.li link (https://deb.li/quickdev) for people to be able to easily find the guide on their computers.

Then I started going through a list of different programs used to do Debian packaging, to try and see the level of the people attending:

  • apt-get install - everyone knew about it
  • apt-get source - everyone paying attention
  • dget - only 1 knew about it
  • dch - 1
  • quilt - about 2
  • apt-get build-dep - 1
  • dpkg-buildpackage - only 3 people
  • git-buildpackage / gitpkg - 1
  • sbuild / pbuilder
  • dput - 1
  • rmadison - 0 (the other DD wasn't paying attention anymore)

So: mostly skilled Debian users (they know apt-get source), but not used to packaging (they don't know about dpkg-buildpackage). So I went through the list again and explained how the tools all fit together and how they can be used to work on Debian packages in the context of a Debian release bug squashing party. This was the fastest crash course in Debian packaging I have ever given (and probably the first, too) - going through those tools in about 30 minutes. I was happy to have the guide available in the back of the room for people to refer to later.
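
For the record, the core flow we walked through looks roughly like this (a sketch; the package name, bug number and dput target are placeholders):

$ apt-get source somepackage           # fetch source + packaging from the archive
$ cd somepackage-1.0
$ sudo apt-get build-dep somepackage   # install the build dependencies
$ quilt push -a                        # apply the existing patch stack
$ dch -i "Fix foo. (Closes: #NNNNNN)"  # document the change in debian/changelog
$ dpkg-buildpackage -us -uc            # build unsigned packages
$ dput somehost ../somepackage_1.0-2_amd64.changes   # upload, for those with access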

The first question after the presentation was "how do we find bugs?", which led me to add links to the UDD bugs page and the release-critical bugs page. I also explained the key links at the top of the UDD page for finding specific sets of bugs, and the useful "patch" filter that allows selecting bugs with or without a patch.

I guess that maybe half of the people were able to learn new skills, or improve existing ones, enough to make significant contributions or test actual patches. Others learned how to hunt and triage bugs in the BTS.

Update: sorry for the wording: all contributions were really useful, thanks and apologies to bug hunters!!

I myself learned how to use sbuild thanks to the excellent sbuild wiki page, which I improved upon. A friend was able to pick up sbuild very quickly and use it to build a package for stretch, which I find encouraging: my first experience with pbuilder was definitely not as good. I have therefore started the process of switching my build chroots to sbuild. That didn't go so well on Jessie, because I use a backported kernel and had to use the backported sbuild as well. That required a lot of poking around, so I ended up just using pbuilder for now, but I will definitely switch on my home machine, and I updated the sbuild wiki page to give more explanations on how to set up pbuilder.
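
For the record, the basic setup from that wiki page boils down to something like this (a sketch; suite, chroot path and mirror are the usual knobs):

$ sudo apt-get install sbuild
$ sudo sbuild-adduser $USER    # then log out and back in for the group change
$ sudo sbuild-createchroot stretch /srv/chroot/stretch-amd64-sbuild http://deb.debian.org/debian
$ sbuild -d stretch somepackage_1.0-1.dsc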

We worked on a bunch of bugs, and learned how to tag them as part of the BSP, which was documented in the BSP wiki page. It seems we have worked on about 11 different bugs which is a better average than the last BSP that I organized, so I'm pretty happy with that.

More importantly, we got Debian people together to meet and talk over delicious pizza, thanks to a sponsorship granted by the DPL. Some people got involved in the next DebConf, which is also great.

On top of fixing bugs and getting people involved in Debian, my third goal was to have fun, and fun we certainly had. I didn't work on as many bugs as I expected myself, achieving only one upload in the end, but since I was answering so many questions left and right, I felt useful and that is certainly gratifying. Organization was simple enough: just get a place, send invites and get food, and the rest is just sharing knowledge and answering questions.

Thanks everyone for coming, and let's do this again soon!

Planet Debian: Bits from Debian: DPL elections 2017, congratulations Chris Lamb!

The Debian Project Leader elections finished yesterday and the winner is Chris Lamb!

Of a total of 1062 developers, 322 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2017 page.

The current Debian Project Leader, Mehdi Dogguy, congratulated Chris Lamb in his Final bits from the (outgoing) DPL message. Thanks, Mehdi, for your service as DPL during these last twelve months!

The new term for the project leader starts on April 17th and expires on April 16th 2018.

Planet Debian: Chris Lamb: Elected Debian Project Leader

I'd like to thank the entire Debian community for choosing me to represent them as the next Debian Project Leader.

I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.

You can read my platform here.


Planet Linux Australia: Chris Smart: Creating an OpenStack Ironic deploy image with Buildroot

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however. For example, they require root privileges to be built and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support, and although it's not used in production, it is used in the OpenStack gating tests (where it's booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in the provisioning of nodes, so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA-specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for a regular Buildroot build.

IPA in list of Buildroot default configs
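
Building with the external tree looks roughly like this (the path and defconfig name are illustrative; see the repository's documentation for the exact invocation):

$ cd buildroot
$ make BR2_EXTERNAL=/path/to/ipa-buildroot ipa_defconfig
$ make

BR2_EXTERNAL is Buildroot's standard mechanism for out-of-tree customisations, which is what makes the IPA configuration show up next to the built-in defconfigs.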

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps the image highly flexible with respect to the version of Ironic Python Agent you want to use (you can specify the location and branch of the ironic-python-agent and requirements repositories).

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and tried to balance size and functionality. It should boot on most Intel-based machines (BIOS and UEFI); however, hardware support such as hard disk and Ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.


Planet Debian: Dirk Eddelbuettel: Rcpp now used by 1000 CRAN packages

Growth of Rcpp usage over time

Moments ago Rcpp passed a big milestone: there are now 1000 packages on CRAN depending on it (as measured by Depends, Imports and LinkingTo, but excluding Suggests). The graph on the left depicts the growth of Rcpp usage over time.

One easy way to compute such reverse dependency counts is the tools::dependsOnPkgs() function that was just mentioned in yesterday's R^4 blog post. Another way is to use the reverse_dependencies_with_maintainers() function from this helper scripts file on CRAN. Lastly, devtools has a function revdep() but it has the wrong default parameters as it includes Suggests: which you'd have to override to get the count I use here (it currently gets 1012 in this wider measure).

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July, 800 packages last October and 900 packages in early January. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, and nine percent in mid-December 2016. Ten percent is next; we may get there during the summer.

1000 user packages is a really large number. This puts a whole lot of responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Don Marti: Traffic sourcing web obfuscator?

(This is an answer to a question on Twitter. Twitter is the new blog comments (for now) and I'm more likely to see comments there than to have time to set up and moderate comments here.)

Adfraud is an easy way to make mad cash, adtech is happily supporting it, and it all works because the system has enough layers between CMO and fraud hacker that everybody can stay as clean as they need to. Users bear the privacy risks of adfraud, legit publishers pay for it, and adtech makes more money from adfraud than fraud hackers do. Adtech doesn't have to communicate or coordinate with adfraud, just set up a fraud-friendly system and let the actual fraud hackers go to work. Bad for users, people who make legit sites, and civilization in general.

But one piece of good news is that adfraud can change quickly. Adfraud hackers don't have time to get stuck in conventional ways of doing things, because adfraud is so lucrative that the high-skill players don't have to stay in it for very long. The adfraud hackers who were most active last fall have retired to run their resorts or recording studios or wineries or whatever.

So how can privacy tools get a piece of the action?

One random idea is for an obfuscation tool to participate in the market for so-called sourced traffic. Fraud hackers need real-looking traffic and are willing to pay for it. Supplying that traffic is sketchy but legal. Which is perfect, because put one more layer on top of it and it's not even sketchy.

And who needs to know if they're doing a good job at generating real-looking traffic? Obfuscation tool maintainers. Even if you write a great obfuscation tool, you never really know if your tricks for helping users beat surveillance are actually working, or if your tool's traffic is getting quietly identified on the server side.

In proposed new privacy tool model, outsourced QA pays YOU!

Set up a market where a Perfectly Legitimate Site that is looking for sourced traffic can go to buy pageviews, I mean buy Perfectly Legitimate Data on how fast a site loads from various home Internet connections. When the obfuscation tool connects to its server for an update, it gets a list of URLs to visit—a mix of random, popular sites and paying customers.

Set a minimum price for pageviews that's high enough to make it cost-ineffective for DDoS. Don't allow it to be used on random sites, only those that the buyer controls. Make them put a secret in an unlinked-to URL or something. And if an obfuscation tool isn't well enough sandboxed to visit a site that's doing traffic sourcing, it isn't well enough sandboxed to surf the web unsupervised at all.

Now the obfuscation tool maintainer will be able to tell right away if the tool is really generating realistic traffic, by looking at the market price. The maintainer will even be able to tell whose tracking the tool can beat, by looking at which third-party resources are included on the pages getting paid-for traffic. And the whole thing can be done by stringing together stuff that IAB members are already doing, so they would look foolish to complain about it.

Planet DebianGunnar Wolf: On Dmitry Bogatov and empowering privacy-protecting tools

There is a thorny topic we have been discussing in nonpublic channels (say, the debian-private mailing list... It is impossible to call it a private list if it has close to a thousand subscribers, but it sometimes deals with sensitive material) for the last week. We finally have confirmation that we can bring this topic out into the open, and I expect several Debian people to talk about this. Besides, this information is now repeated all over the public Internet, so I'm not revealing anything sensitive. Oh, and there is a statement regarding Dmitry Bogatov published by the Tor project — But I'll get to Tor soon.

One week ago, the 25-year-old mathematician and Debian Maintainer Dmitry Bogatov was arrested, accused of organizing riots and calling for terrorist activities. All evidence so far points to Dmitry not being guilty of what he is charged with — He was filmed at different places at the times when the calls for terrorism happened.

It seems that Dmitry was arrested because he runs a Tor exit node. I don't know the current situation in Russia, nor his political leanings — But I do know what a Tor exit node looks like. I even had one at home for a short while.

What is Tor? It is a network overlay, meant for people to hide where they come from or who they are. Why? There are many reasons — Uninformed people will talk about the evil wrongdoers (starting the list of course with the drug sellers or child porn distributors). People who have taken their time to understand what this is about will rather talk about people for whom free speech is not a given: journalists, political activists, whistleblowers. And also, about regular people — Many among us have taken the habit of doing some of our Web surfing using Tor (probably via the very fine and interesting TAILS distribution — The Amnesic Incognito Live System), just to increase the entropy, and just because we can, because we want to preserve the freedom to be anonymous before it's taken away from us.

There are many types of nodes in Tor; most of them are just regular users or bridges that forward traffic, helping Tor's anonymization. Exit nodes, where packets leave the Tor network and enter the regular Internet, are much scarcer — Partly because they can be quite problematic for the people hosting them. But, yes, Tor needs more exit nodes, not just for bandwidth's sake, but because the more exit nodes there are, the harder it is for a hostile third party to monitor a sizable number of them for activity (and break the anonymization).

I am coincidentally starting a project with a group of students of my Faculty (we want to breathe life again into LIDSOL - Laboratorio de Investigación y Desarrollo de Software Libre). As we are just starting, they are documenting some technical and social aspects of the need for privacy and how Tor works; I expect them to publish their findings in El Nigromante soon (which means... what? ☺ ), but definitely, part of what we want to do is to set up a Tor exit node at the university — Well documented and with enough academic justification to avoid our network operation area ordering us to shut it down. Let's see what happens :)

Anyway, all in all — Dmitry is in for a heavy time. He has been detained pre-trial at least until June, and he faces quite serious charges. He has done a lot of good, specialized work from which the whole world benefits. So, given I cannot do more, I'm just speaking my mind here in this space.

[Update] Dmitry's case has been covered in LWN. There is also a statement concerning the arrest of Dmitry Bogatov by the Debian project. This case is also covered at The Register.

Planet DebianDirk Eddelbuettel: #5: Easy package information

Welcome to the fifth post in the recklessly rambling R rants series, or R4 for short.

The third post showed an easy way to follow R development by monitoring (curated) changes on the NEWS file for the development version r-devel. As a concrete example, I mentioned that it has shown a nice new function (tools::CRAN_package_db()) coming up in R 3.4.0. Today we will build on that.

Consider the following short snippet:

library(data.table)

getPkgInfo <- function() {
    ## look the new function up inside the tools namespace; note that
    ## exists("tools::CRAN_package_db") would always be FALSE, as
    ## exists() does not parse the `::`
    if (exists("CRAN_package_db", where = asNamespace("tools"))) {
        dat <- tools::CRAN_package_db()
    } else {
        ## fallback for released R versions: fetch and read PACKAGES.rds
        tf <- tempfile()
        download.file("https://cloud.r-project.org/src/contrib/PACKAGES.rds", tf, quiet=TRUE)
        dat <- readRDS(tf)              # r-devel can now readRDS off a URL too
    }
    dat <- as.data.frame(dat)
    setDT(dat)                          # convert to data.table in place
    dat
}

It defines a simple function getPkgInfo() as a wrapper around said new function from R 3.4.0, i.e. tools::CRAN_package_db(), and a fallback alternative using a tempfile (in the automagically cleaned R temp directory) and an explicit download and read of the underlying RDS file. As an aside, just this week the r-devel NEWS told us that such readRDS() operations can now read directly from a URL connection. Very nice---as RDS is a fantastic file format when you are working in R.
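
Per that NEWS entry, on r-devel (and hence in R 3.4.0) the fallback branch could in principle shrink to the one-liner below; it is sketched here rather than used above because released R versions still need the explicit download:

dat <- readRDS(url("https://cloud.r-project.org/src/contrib/PACKAGES.rds"))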

Anyway, back to the RDS file! The snippet above returns a data.table object with as many rows as there are packages on CRAN, and basically all of their (parsed!) DESCRIPTION info and then some. A gold mine!

Consider this to see how many packages have a dependency (in the sense of Depends, Imports or LinkingTo, but not Suggests, because Suggests != Depends) on Rcpp:

R> dat <- getPkgInfo()
R> rcppRevDepInd <- as.integer(tools::dependsOnPkgs("Rcpp", recursive=FALSE, installed=dat))
R> length(rcppRevDepInd)
[1] 998
R>

So exciting---we will hit 1000 within days! But let's do some more analysis:

R> dat[ rcppRevDepInd, RcppRevDep := TRUE]  # set to TRUE for given set
R> dat[ RcppRevDep==TRUE, 1:2]
           Package Version
  1:      ABCoptim  0.14.0
  2: AbsFilterGSEA     1.5
  3:           acc   1.3.3
  4: accelerometry   2.2.5
  5:      acebayes   1.3.4
 ---                      
994:        yakmoR   0.1.1
995:  yCrypticRNAs  0.99.2
996:         yuima   1.5.9
997:           zic     0.9
998:       ziphsmm   1.0.4
R>

Here we flag the reverse dependencies using the index vector we had just computed, and then use that new variable to subset the data.table object. Given the aforementioned parsed information from all the DESCRIPTION files, we can learn more:

R> ## likely false entries
R> dat[ RcppRevDep==TRUE, ][NeedsCompilation!="yes", c(1:2,4)]
            Package Version                                                                         Depends
 1:         baitmet   1.0.0                                                           Rcpp, erah (>= 1.0.5)
 2:           bea.R   1.0.1                                                        R (>= 3.2.1), data.table
 3:            brms   1.6.0                     R (>= 3.2.0), Rcpp (>= 0.12.0), ggplot2 (>= 2.0.0), methods
 4: classifierplots   1.3.3                             R (>= 3.1), ggplot2 (>= 2.2), data.table (>= 1.10),
 5:           ctsem   2.3.1                                           R (>= 3.2.0), OpenMx (>= 2.3.0), Rcpp
 6:        DeLorean   1.2.4                                                  R (>= 3.0.2), Rcpp (>= 0.12.0)
 7:            erah   1.0.5                                                               R (>= 2.10), Rcpp
 8:             GxM     1.1                                                                              NA
 9:             hmi   0.6.3                                                                    R (>= 3.0.0)
10:        humarray     1.1 R (>= 3.2), NCmisc (>= 1.1.4), IRanges (>= 1.22.10),\nGenomicRanges (>= 1.16.4)
11:         iNextPD   0.3.2                                                                    R (>= 3.1.2)
12:          joinXL   1.0.1                                                                    R (>= 3.3.1)
13:            mafs   0.0.2                                                                              NA
14:            mlxR   3.1.0                                                           R (>= 3.0.1), ggplot2
15:    RmixmodCombi     1.0              R(>= 3.0.2), Rmixmod(>= 2.0.1), Rcpp(>= 0.8.0), methods,\ngraphics
16:             rrr   1.0.0                                                                    R (>= 3.2.0)
17:        UncerIn2     2.0                          R (>= 3.0.0), sp, RandomFields, automap, fields, gstat
R> 

There are a full seventeen packages which claim to depend on Rcpp while not having any compiled code of their own. That is likely false---but I keep them in my counts, however reluctantly. A CRAN-declared Depends: is a Depends:, after all.

Another nice thing to look at is the total number of packages that declare that they need compilation:

R> ## number of packages with compiled code
R> dat[ , .(N=.N), by=NeedsCompilation]
   NeedsCompilation    N
1:               no 7625
2:              yes 2832
3:               No    1
R>

Isn't that awesome? It is 2832 out of (currently) 10458, or about 27.1%. Just over one in four. Now the 998 for Rcpp look even better, as they are about 35% of all such packages. In other words, a little over one third of all packages with compiled code (which may be legacy C, Fortran or C++) use Rcpp. Wow.
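
For completeness, here is a quick sketch of that arithmetic, reusing dat and rcppRevDepInd from above (tolower() folds the lone mis-capitalised "No" entry in with "no"):

R> ncomp <- dat[ tolower(NeedsCompilation) == "yes", .N]
R> ncomp / nrow(dat)                ## share of CRAN with compiled code
[1] 0.2707975
R> length(rcppRevDepInd) / ncomp    ## share of those using Rcpp
[1] 0.3524011
R>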

Before closing, one shoutout to Dirk Schumacher, whose thankr package (which I made the center of the last post) is now on CRAN---a mighty fine and slim micropackage without external dependencies. Neat.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

CryptogramFriday Squid Blogging: Chilean Squid Producer Diversifies

In another symptom of climate change, Chile's largest squid producer "plans to diversify its offering in the future, selling sea urchin, cod and octopus, to compensate for the volatility of giant squid catches...."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianLaura Arjona Reina: Underestimating Debian

I had two issues in the last days that led me a bit into panic until they got solved. In both cases the issue was external to Debian, but I first thought that the problem was in Debian. I'm not sure why I had those thoughts; I should be more confident in myself, this awesome operating system, and the community around it! The good thing is that I'll be more confident from now on, and I've learned that hurrying is not a good friend, and I should face my computer “problems” (and everything in life, probably) with a bit more patience (and backups).

Issue 1: Corrupt ext partition in a laptop

I have a laptop at home with dual boot Windows 7 + Debian 9 (Stretch). I rarely boot the Windows partition. When I do, I do whatever I need to do/test there, then install updates, and then shutdown the laptop or reboot in Debian to feel happy again when using computers.

Some months ago I noticed that booting into Debian was not possible and I was left in an initramfs console that was suggesting to e2fsck /dev/sda6 (my Debian partition). Then I ran e2fsck, answered “a” to fix all the issues found, and the system booted properly. This issue was a bit scary-looking because the e2fsck output made the screen fill with numbers and scroll quickly for 1 or 2 minutes, until all the inodes or blocks or whatever were fixed.

I thought the disk might be faulty, and ran badblocks, but faced the boot issue again some time later, and then decided to change the disk (I took the opportunity to make backups, and to install a fresh Debian 9 Stretch on the laptop, instead of the Debian 8 stable that was running).

The experience with Stretch has been great since then, but some days ago I faced the boot issue again. Then I realised that the issue might be appearing when I booted Debian right after using Windows (and this was why it was appearing not very often in my timeline 😉 ). Then I paid more attention to the message that I was receiving in the console:

Superblock checksum does not match superblock while trying to open /dev/sda6
 /dev/sda6:
 The superblock could not be read or does not describe a valid ext2/ext3/ext4
 filesystem. If the device is valid and it really contains an ext2/ext3/ext4
 filesystem (and not swap or ufs or something else), then the superblock
 is corrupt, and you might try running e2fsck with an alternate superblock:
 e2fsck -b 8193
 or
 e2fsck -b 32768

and searched about it, and also asked my friends in the redeslibres XMPP chat room about it 🙂

I found this question in the AskUbuntu forum that was exactly my issue (I had ext2fsd installed in Windows). My friends in the XMPP room good-naturedly yelled “booo!” at me for letting Windows touch my ext partitions (I apologised, it will never happen again!). I could now consistently reproduce the issue (boot Windows, then boot Debian, bang!: initramfs console, e2fsck, reboot Debian, no problem, boot Windows, boot Debian, again the problem, etc.). I uninstalled the ext2fsd program and tried to reproduce the issue, and I couldn't. So, a happy ending.

Issue 2: Accessing Android internal memory to backup files

The other issue was with my tablet running Android 4.0.4. It had a charging issue, and I wanted to back up the files on it before sending it in for repair. I connected the tablet to my laptop with USB, and enabled USB debugging. The laptop recognized a connected MZ604 ‘camera’, but Dolphin (the file browser of my KDE Plasma desktop) could not show the files.

I looked at the settings in the tablet to try to find the setting that allowed me to switch between camera/MTP when connecting with USB, but couldn’t find it. I guessed that the tablet was correctly configured because I recall having made a backup some months ago, with no hassle… (in Debian 8). I checked that my Debian (9) had installed the needed packages:

 ii kio-mtp 0.75+git20140304-2 amd64 access to MTP devices for applications using the KDE Platform
 ii libmtp-common 1.1.12-1 all Media Transfer Protocol (MTP) common files
 ii libmtp-runtime 1.1.12-1+b1 amd64 Media Transfer Protocol (MTP) runtime tools
 ii libmtp9:amd64 1.1.12-1+b1 amd64 Media Transfer Protocol (MTP) library

So I had no idea what was going on. Then I suspected some problem in my Debian (maybe I needed some driver for the Motorola tablet?) and booted Windows 7 to see what happened there.

Windows detected an MZ604 device too, but couldn't access the files either (when clicking on the device, no folders were shown). I began to search the internet to see if there were some Motorola drivers out there, and then found the clue to enable the correct settings on the Android device: you need to go to Settings > Storage, and then press the 3-dots button that acts as the “Menu” function; “USB computer connection” then appears, and there you can enable Camera or MTP. A very hidden setting! I enabled MTP, and then I could see the folders and files on my Windows system (without needing to install any additional driver), and make my backup. And of course after rebooting and trying in Debian, it worked too.

Some outcomes/conclusions

  • I have a spare hard disk for backups, tests, whatever.
  • I should make backups more often (and organize my files too). Then I wouldn’t be so nervous when facing connection or harddrive issues.
  • I won’t let my Windows touch my Debian partitions. I don’t say ext2fsd is bad, but I installed it “just in case” and in practice I never felt the need to use it. So no need to risk (again) a corrupt ext partition.
  • Having a Windows system at hand is useful some times to demonstrate myself (and maybe others) that the problems aren’t usually related to Debian or other GNU/Linux.
  • Having some more patience is useful too to demonstrate myself (and maybe others) that the problems aren’t usually related to Debian or other GNU/Linux.
  • Maybe I should put aside some money in my budget for collateral damages of my computer tinkering, or renew hardware at some point (before it definitely breaks, and not after). For example, if I had renewed this tablet (it’s a good one, but from 2011, Android 4, and the screen is broken, and it had not been charging for a year; we were using it only plugged into AC), then my family wouldn’t care if I “break the old tablet” trying to unlock its bootloader or install Debian on it or whatever. The same for my husband’s laptop (the one with the dual boot); it’s an old machine already, but it’s his only computer. It already felt risky installing Debian testing on it! (I installed it at the end of January, right before the full freeze.)
  • OTOH, even thinking about renewing hardware gives me a headache. My family shows me advertisements from the shopping mall, and I don’t know if I can install Debian without nonfree blobs, or Replicant or LineageOS, on those devices. I don’t know the max volume that the ringtone reaches, or the max volume of the laptop speakers, or the lowest possible brightness of the screens. I’m picky about laptop keyboards. I don’t like to spend much money on hardware that can be destroyed easily because it falls from my hand to the floor, or because I accidentally spill coffee on it. So I end up extending the life of my current hardware, even if I don’t like it much, either…

Comments?

You can comment on this post using this pump.io thread.


Filed under: My experiences and opinion Tagged: Android, Debian, English, KDE, Libre software for Windows

LongNowTiffany Shlain Presents 50/50 Day

“Women have always been an equal part of the past,” the American feminist Gloria Steinem once said. “We just haven’t been a part of history.” On May 10th, thousands of people will gather at events across the globe to discuss what it will take to get to a more gender-balanced world.

Historical female leaders from 50/50.

It’s part of the 50/50 global initiative launched by Tiffany Shlain, the Emmy-nominated filmmaker and founder of the Webby Awards. The centerpiece film for the global conversation will be 50/50, a 26-minute documentary by Shlain that explores the 10,000 year history of women and power.

Long Now is partnering with Tiffany Shlain for the San Francisco events on 50/50 Day. If you’re inspired to get involved or host a screening, sign up here. You can watch the full 50/50 documentary here.

Back in 02010, Shlain participated in our “Long Conversation” event in dialogue with environmentalist Paul Hawken and also with game designer Jane McGonigal.

Krebs on SecurityShoney’s Hit By Apparent Credit Card Breach

It’s Friday, which means it’s time for another episode of “Which Restaurant Chain Got Hacked?” Multiple sources in the financial industry say they’ve traced a pattern of fraud on customer cards indicating that the latest victim may be Shoney’s, a 70-year-old restaurant chain that operates primarily in the southern United States.

Image: Thomas Hawk, Flickr.

Shoney’s did not respond to multiple requests for comment left with the company and its outside public relations firm over the past two weeks.

Based in Nashville, Tenn., the privately-held restaurant chain includes approximately 150 company-owned and franchised locations in 17 states from Maryland to Florida in the east, and from Missouri to Texas in the west — with the northernmost location being in Ohio, according to the company’s Wikipedia page.

Sources in the financial industry say they’ve received confidential alerts from the credit card associations about suspected breaches at dozens of those locations, although it remains unclear whether the problem is limited to those locations or if it extends company-wide. Those same sources say the affected locations were thought to have been breached between December 2016 and early March 2017.

It’s also unclear whether the apparent breach affects corporate-owned or franchised stores — or both. In last year’s card breach involving hundreds of Wendy’s restaurants, only franchised locations were thought to have been impacted. In the case of the intrusion at Arby’s, on the other hand, only corporate stores were affected.

The vast majority of the breaches involving restaurant and hospitality chains over the past few years have been tied to point-of-sale devices that were remotely hacked and seeded with card-stealing malicious software.

Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register. Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Many retailers are now moving to install card readers that can handle transactions from more secure chip-based credit and debit cards, which are far more expensive for thieves to clone. Malware that makes it onto point-of-sale devices capable of processing chip card transactions can still intercept data from a customer’s chip-enabled card, but that information cannot later be used to create a cloned physical copy of the card.

Update, April 16, 2017, 10:05 p.m. ET: After this story was published, an Atlanta-based company called Best American Hospitality Corp. published a press release claiming responsibility for a card breach impacting dozens of Shoney’s locations. Here’s the company’s notice about this incident, which lists the locations thought to have been compromised so far.

Sociological ImagesThe ugly secret behind the “Model Search”

Flashback Friday.

Sociologists are lucky to have amongst them a colleague who is doing excellent work on the modeling industry and, in doing so, offering us all a rare sophisticated glimpse into its economic and cultural logics. We’ve featured Ashley Mears‘ work twice in posts discussing the commodification of models’ bodies and the different logics of high end and commercial fashion.

In a post at Jezebel, Mears exposes the Model Search. Purportedly an opportunity for model hopefuls to be discovered, Mears argues that it functions primarily as a networking opportunity for agents, who booze and schmooze it up with each other while being alternately bored and disgusted by the girls and women who pay to be there.

“Over a few days,” Mears explains:

…thousands arrived to impress representatives from over 100 international modeling and talent agencies. In the modeling showcase alone, over 500 people ages 13-25 strutted down an elevated runway constructed in the hotel’s ballroom, alongside which rows of agents sat and watched.

2013 International Model and Talent Search; photo by AJ Batac.

But the agents are not particularly interested in scouting.  In shadowing them during the event, Mears finds that they “actually find it all rather boring and tasteless.”  Pathetic, too.

Mears explains:

The saddest thing at a model search contest is not the sight of girls performing womanhood defined as display object. Nor is it their exceedingly slim chances to ever be the real deal. What’s really sad is the state of the agents: they sit with arms folded, yawning regularly, checking their BlackBerrys. After a solid two hours, Allie has seen over 300 contestants. She’s recorded just eight numbers for callbacks.

Meanwhile, agents ridicule the wannabe runway, from the “hooker heels” to the outfit choices. About their physiques, [one agent recounts,] “I’ve never seen so many out of shape bodies.”

While model hopefuls are trading sometimes thousands of dollars for a 30-second walk down the runway, the agents are biding their time until they can head to the hotel bar to “…gossip, network, and commence the delicate work of negotiating the global trade in models…” One agent explains:

To be honest it’s just a networking event. The girls, most of them don’t even have the right measurements. For most of them, today is going to be a wake-up call.

Indeed, networking is the real point of the event.  The girls and women who come with dreams of being a model are largely, and unwittingly, emptying their pockets to subsidize the schmooze.

To add insult to injury, what many of the aspiring models don’t know is that, for “…$5,000 cheaper, any hopeful can walk into an agency’s ‘Open Call’ for an evaluation.”

I encourage you to read Mears’ much longer exposé at Jezebel.

Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramNew C++ Secure Coding Standard

Carnegie Mellon University has released a comprehensive list of C++ secure-coding best practices.

Worse Than FailureError'd: Would You Mind?

"Since clicking 'Yes, I mind', took me to the review page, I left a one star review," writes Pascal.

 

"As we all know, nanotechnology is huge, but don't apply nano power technology to your mind!" writes Martin

 

Conrad R. wrote, "I will not use the Secret DOM. I will not use the Secret DOM. I MUST not use the Secret DOM...or else!"

 

"Well, sorry folks, looks like the Air Force ran out of jobs," writes Scott.

 

"You can't argue with facts, because you can't argue with facts," Ben S. wrote.

 

Shahim M. wrote, "But I only have a 'Normal 2 true false null' degree, what do I do?"

 

"While trying to set my initial password for a client I'm working with, I encountered an error detailing exactly what I was trying to do in the first place," writes Alex F.

 

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.7.800.2.0


A new RcppArmadillo version 0.7.800.2.0 is now on CRAN.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 318 other packages on CRAN -- an increase of 20 just since the last CRAN release of 0.7.600.1.0 in December!
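
As a quick illustration (a minimal sketch, not taken from the release notes), the classic fastLm() example fits a linear model with the numerical work done by Armadillo:

library(RcppArmadillo)
## fastLm() mimics lm() for plain least-squares fits
fit <- fastLm(mpg ~ wt + cyl, data = mtcars)
summary(fit)   ## coefficients agree with summary(lm(mpg ~ wt + cyl, data = mtcars))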

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.800.2.0 (2017-04-12)

  • Upgraded to Armadillo release 7.800.2 (Rogue State Redux)

    • The Armadillo license changed to Apache License 2.0

  • The DESCRIPTION file now mentions the Apache License 2.0, as well as the former MPL2 license used for earlier releases.

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • The fastLm example was updated

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 190 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Antoine Beaupré did 19 hours (out of 14.75h allocated + 10 remaining hours, thus keeping 5.75 extra hours for April).
  • Balint Reczey did nothing (out of 14.75 hours allocated + 2.5 hours remaining) and gave back all his unused hours. He took on a new job and will stop his work as LTS paid contributor.
  • Ben Hutchings did 14.75 hours.
  • Brian May did 10 hours.
  • Chris Lamb did 14.75 hours.
  • Emilio Pozuelo Monfort did 11.75 hours (out of 14.75 hours allocated + 0.5 hours remaining, thus keeping 3.5 hours for April).
  • Guido Günther did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for April).
  • Hugo Lefeuvre did 4 hours (out of 13.5 hours allocated, thus keeping 9.5 extra hours for April).
  • Jonas Meurer did 11.25 hours (out of 14.75 hours allocated, thus keeping 3.5 extra hours for April).
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 23.75 hours (out of 14.75h allocated + 9 hours remaining).
  • Raphaël Hertzog did 15 hours (out of 10 hours allocated + 6.25 hours remaining, thus keeping 1.25 hours for April).
  • Roberto C. Sanchez did 21.5 hours (out of 14.75 hours allocated + 7.75 hours remaining, thus keeping 1 extra hour for April).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours is unchanged, but will likely decrease slightly next month as one sponsor will not renew their support (they have switched to CentOS).

The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file lists 40. The number of open issues continued its slight increase… not worrisome yet, but we need to keep an eye on this situation.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianFrancois Marier: Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

# Renew any certificates close to expiry. Apache is stopped before and
# restarted after, so that certbot's standalone webserver can bind to
# the ports it needs for the ACME challenge.
/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

# Commit any changed certificates to the etckeeper git repository and
# print the diffstat, so that cron emails me only when something was
# actually renewed.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

CryptogramAttack vs. Defense in Nation-State Cyber Operations

I regularly say that, on the Internet, attack is easier than defense. There are a bunch of reasons for this, but primarily it's 1) the complexity of modern networked computer systems and 2) the attacker's ability to choose the time and method of the attack versus the defender's necessity to secure against every type of attack. This is true, but how this translates to military cyber-operations is less straightforward. Contrary to popular belief, government cyberattacks are not bolts out of the blue, and the attack/defense balance is more...well...balanced.

Rebecca Slayton has a good article in International Security that tries to make sense of this: "What is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment." In it, she points out that launching a cyberattack is more than finding and exploiting a vulnerability, and it is those other things that help balance the offensive advantage.

Worse Than FailureAll You Zombies…

We've all been approached for jobs where the job description was merely an endless list of buzzwords across disciplines, and there was no real way to figure out what was actually the top couple of relevant skills. The head hunter is usually of no help as they're rarely tech-savvy enough to understand what the buzzwords mean. The phone screen is often misleading as they always say that one or two skills are the important ones, and then reject candidates because they don't have expertise in some ancillary skill.

A sign, dated March 9, 1982, welcoming travelers from the future

Teddy applied for a position at a firm that started out as a telco but morphed into a business service provider. The job was advertised as looking for people with at least 15-20 years of experience in designing complex systems, and Java programming. The phone screen confirmed the advert and claims of the head hunter. "This is a really great opportunity," the head hunter proclaimed.

Then it was time for the interview. The interview was the sort where you meet with the manager, a peer of the manager, and several senior members of the team, repeating your work history for each one.

There was a coding exercise to see if you could convert an int to a Roman numeral and vice versa.

Each person asked some simplistic architectural design type questions...

   What is Big-O notation?
   How would you compare three sets of numbers to see if any number was common to all three?
   How would you count the number of unique words in the input and report the tally?

OK, so they have verified that you went to school and at least sort of paid attention.

Then some more complex architectural questions were added: How would you design a phone-call pricing system, and what sorts of things would you need to consider? How would you add in a new module to provide additional functionality without breaking the existing system? How would you test it all?

After several hours of this, one of the interviewers admitted that they were looking for someone who has at least 10-15 years technical experience with all of the latest technologies.

"Wait a minute, most new technologies are, by definition, new!", Teddy protested. "They haven't been around ten years?" He questioned how they expected anyone to have that many years of familiarity with anything that hasn't existed that long? Also, wasn't this supposed to be a project that required Java developers?

The interviewer nodded. "That's exactly the problem we've been facing. We simply can't find anyone who had more than a decade of experience with latest technologies." While Java was relevant, they were phasing it out in favor of the flavor of the day.

"Perhaps," Teddy suggested, "you should be looking for a candidate who has many years in the field, doing work that's the same level of complexity and kind of work as you do here." Anyone with that level of expertise should be able to pick up the new technologies as they required.

"We don't *want* to train someone," the interveiwer said, "especially a veteran. We want to hire someone with more than ten years of experience in the latest technologies. We want to build our system using the latest tools."

Teddy warned that if they did one-off projects using different technologies, it'd be fun for the developers, but it would be virtually impossible to swap people in and out as nobody would know the technology used on the next project. Perhaps it might be more prudent to pick a few mainstay technologies as their base platform and make sure that everyone is an expert in at least those skills to effect redundancy across the organization.

The interview ended shortly thereafter, and Teddy never expected to hear from them again. Several months later the same job was still posted. Teddy knew this because another head hunter approached him with the exact same job description. "This is a really great opportunity," the head hunter proclaimed.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Don MartiInteresting stuff on the Internet

Just some mindless link propagation while tweaking the links on my blog to the right shade of blue.

Good news: Portugal Pushes Law To Partially Ban DRM, Allow Circumvention

Study finds Pokémon Go players are happier and The More You Use Facebook, the Worse You Feel. Get your phone charged up, get off Facebook, and get out there.

If corporations are people, you wouldn't be mean to a person, would you? Managing for the Long Term

Yay, surprise presents for Future Me! Why Kickstarter Decided To Radically Transform Its Business Model

Skateboarding obviously doesn't cause hip fractures, because the age groups least likely to skateboard break their hips the most! Something is breaking American politics, but it's not social media

From Spocko, pioneer of Internet brand safety campaigns: Values: Brand, Corporate & Bill O’Reilly’s

In Spite of People Having Meetings, Bears Still Shit in the Woods: In Spite Of The Crackdown, Fake News Publishers Are Still Earning Money From Major Ad Networks

There's another dead bishop on the landing. Alabama Senate OK's church police bill

Productivity is awesome: How to Avoid Distractions and Finish What You

Computer Science FTW: Corrode update: control flow translation correctness

More good news: Kentucky Coal Mining Museum converts to solar power

This is going to be...fun. Goldman Sachs: VC Dry Powder Hits Record Highs

If you want to prep for a developer job interview, here's some good info: Hexing the technical interview

Planet DebianMichal Čihař: Weblate 2.13.1

Weblate 2.13.1 has been released quickly after 2.13. It fixes a few minor issues and a possible upgrade problem.

Full list of changes:

  • Fixed listing of managed projects in profile.
  • Fixed migration issue where some permissions were missing.
  • Fixed listing of current file format in translation download.
  • Return HTTP 404 when trying to access project where user lacks privileges.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate | 0 comments

,

CryptogramFourth WikiLeaks CIA Attack Tool Dump

WikiLeaks is obviously playing their Top Secret CIA data cache for as much press as they can, leaking the documents a little at a time. On Friday they published their fourth set of documents from what they call "Vault 7":

27 documents from the CIA's Grasshopper framework, a platform used to build customized malware payloads for Microsoft Windows operating systems.

We have absolutely no idea who leaked this one. When they first started appearing, I suspected that it was not an insider because there wasn't anything illegal in the documents. There still isn't, but let me explain further. The CIA documents are all hacking tools. There's nothing about programs or targets. Think about the Snowden leaks: it was the information about programs that targeted Americans, programs that swept up much of the world's information, programs that demonstrated particularly powerful NSA capabilities. There's nothing like that in the CIA leaks. They're just hacking tools. All they demonstrate is that the CIA hoards vulnerabilities contrary to the government's stated position, but we already knew that.

This was my guess from March:

If I had to guess right now, I'd say the documents came from an outsider and not an insider. My reasoning: One, there is absolutely nothing illegal in the contents of any of this stuff. It's exactly what you'd expect the CIA to be doing in cyberspace. That makes the whistleblower motive less likely. And two, the documents are a few years old, making this more like the Shadow Brokers than Edward Snowden. An internal leaker would leak quickly. A foreign intelligence agency -- like the Russians -- would use the documents while they were fresh and valuable, and only expose them when the embarrassment value was greater.

But, as I said last month, no one has any idea: we're all guessing. (Well, to be fair, I hope the CIA knows exactly who did this. Or, at least, exactly where the documents were stolen from.) And I hope the inability of either the NSA or CIA to keep its own attack tools secret will cause them to rethink their decision to hoard vulnerabilities in common Internet systems instead of fixing them.

News articles.

EDITED TO ADD (4/12): An analysis.

Sociological ImagesDoes less policing lead to more crime? A natural experiment says no

Originally posted at Montclair Socioblog.

Does crime go up when cops, turtle-like, withdraw into their patrol cars, when they abandon “proactive policing” and respond only when called?

In New York we had the opportunity to test this with a natural experiment. Angry at the mayor, the NYPD drastically cut back on proactive policing starting in early December of 2014. The slowdown lasted through early January. This change in policing – less proactive, more reactive – gave researchers Christopher Sullivan and Zachary O’Keeffe an opportunity to look for an effect. (Fair warning: I do not know if their work has been published yet in any peer-reviewed journal.)

First, they confirmed that cops had indeed cut back on enforcing minor offenses. In the graphs below, the yellow shows the rate of enforcement in the previous year (July 2013 to July 2014) when New York cops were not quite so angry at the mayor. The orange line shows the next year. The cutback in enforcement is clear. The orange line dips drastically; the police really did stop making arrests for quality-of-life offenses.

[Graph: enforcement of quality-of-life offenses, July 2013–July 2014 (yellow) vs. the following year (orange)]

Note also that even after the big dip, enforcement levels for the rest of the year remained below those of the previous year, especially in non-White neighborhoods.

Sullivan and O’Keeffe also looked at reported crime to see if the decreased enforcement had emboldened the bad guys. The dark blue line shows rates for the year that included the police cutback; the light blue line shows the previous year.

[Graph: reported crime rates, year including the slowdown (dark blue) vs. the previous year (light blue)]

No effect. The crime rates in those winter weeks of reduced policing and after look very much like the crime rates of the year before.

It may be that a few weeks is not enough time for a change in policing to affect serious crime. Certainly, proponents of proactive policing would argue that what attracts predatory criminals to an area is not a low number of arrests but rather the overall sense that this is a place where bad behavior goes unrestrained. Changing the overall character of a neighborhood – for better or worse – takes more than a few weeks.

I have the impression that many people, when they think about crime, use a sort of cops-and-robbers model: cops prevent crime and catch criminals; the more active the cops, the less active the criminals. There may be some truth in that model but, if nothing else, the New York data shows that the connection between policing and crime is not so immediate or direct.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityCritical Security Updates from Adobe, Microsoft

Adobe and Microsoft separately issued updates on Tuesday to fix a slew of security flaws in their products. Adobe patched dozens of holes in its Flash Player, Acrobat and Reader products. Microsoft pushed fixes to address dozens of vulnerabilities in Windows and related software.

The biggest change this month for Windows users and specifically for people responsible for maintaining lots of Windows machines is that Microsoft has replaced individual security bulletins for patches with a single “Security Update Guide.”

This change follows closely on the heels of a move by Microsoft to bar home users from selectively downloading specific updates and instead issuing all monthly updates as one big patch blob.

Microsoft’s claims that customers have been clamoring for this consolidated guide notwithstanding, many users are likely to be put off by the new format, which seems to require a great deal more clicking and searching than under the previous rubric. In any case, Microsoft has released a FAQ explaining what’s changed and what folks can expect under the new arrangement.

By my count, Microsoft’s patches this week address some 46 security vulnerabilities, including flaws in Internet Explorer, Microsoft Edge, Windows, Office, Visual Studio for Mac, .NET Framework, Silverlight and Adobe Flash Player.

At least two of the critical bugs fixed by Microsoft this month are already being exploited in active attacks, including a weakness in Microsoft Word that is showing up in attacks designed to spread the Dridex banking trojan.

Finally, a heads up for any Microsoft users still running Windows Vista: This month is slated to be the last that Vista will receive security updates. Vista was first released to consumers more than ten years ago — in January 2007 — so if you’re still using Vista it might be time to give a more modern OS a try (doesn’t have to be Windows…just saying).

As it is wont to do on Microsoft’s Patch Tuesday, Adobe pushed its own batch of security patches. The usual “critical” update for Flash Player fixes at least seven flaws. The newest version is v. 25.0.0.148 for Windows, Mac and Linux systems.

As loyal readers here no doubt already know, I dislike Flash because it’s full of security holes, is a favorite target of drive-by malware exploits, and isn’t really necessary to be left installed or turned on all the time anymore.

Hence, if you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g. Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates and/or restart the browser to get it. When in doubt, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if there is an update available, Chrome should install it then.

Adobe also issued security fixes for its Photoshop, Adobe Reader and Acrobat software packages. The Reader/Acrobat updates address a whopping 47 security holes in these products, so if you’ve got either program installed please take a moment to update.

As ever, please leave a note in the comment section if you run into any difficulties downloading or installing any of these patches.

CryptogramResearch on Tech-Support Scams

Interesting paper: "Dial One for Scam: A Large-Scale Analysis of Technical Support Scams":

Abstract: In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then "diagnose the problem", before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.

BoingBoing post.

Worse Than FailureCodeSOD: print_a_idiot()

Cédric runs the backend for a video streaming service. Since video streaming, even in modern HTML5, is still a bit of a mess, they have to be able to provide many different stream formats. So, for example, the JSON data might look like this:

{"DASH":"https:\/\/anonymised-wtfdash.akamaihd.invalid\/path\/to\/film.mpd",
"HLS":"https:\/\/anonymised-wtfhls.akamaihd.invalid\/path\/to\/film.mp4\/master.m3u8",
"PD":"https:\/\/anonymised-wtfpd.akamaihd.invalid\/path\/to\/film.mp4"}

They had a problem, however. Many “clever” front-end programmers, especially the ones building Flash front-ends, viewed this data not as JSON, but as a string. Thus they’d try using substring calls to pick out the specific slice of the data they needed, and then their code would break when the length of one of the URLs changed, or a new entry was added. So Cédric “fixed” this problem by randomizing the order of the keys, thus forcing them to parse the data as JSON.

Having idiot proofed his service, an advanced version of Idiot was released a few weeks later, with new features.

  frame 9 {
    function print_a(obj, indent) {
      if (indent == null) {
        indent = '';
      }
      var v2 = '';
      for (item in obj) {
        if (typeof obj[item] == 'object') {
          v2 += indent + '[' + item + '] => Objectn';
        } else {
          v2 += indent + '[' + item + '] => ' + obj[item] + 'n';
        }
        v2 += print_a(obj[item], indent + '   ');
      }
      return v2;
    }

    var jsonFetcher = new LoadVars();
    jsonFetcher.onLoad = function (success) {
      if (!success) {
        trace('Error connecting to server.');
      }
    };

    jsonFetcher.onData = function (thedata) {
      try {
        var v1 = JSON.parse(thedata);
        testvar = print_a(v1);
      }
      catch (ex) {
        trace(ex.name + ':' + ex.message + ':' + ex.at + ':' + ex.text);
      }
    };

    jsonFetcher.load(loadVideo);
  }

  frame 17 {
    index = testvar.indexOf('[PD] => ');
  }

  frame 18 {
    index += 8;
  }

  frame 19 {
    index2 = testvar.indexOf('.mp4n');
  }

  frame 20 {
    index2 -= index;
  }

  frame 21 {
    var mySubstring = new String();
    mySubstring = testvar.substr(index, index2);
  }

  frame 22 {
    mySubstring2 = str_replace('\n', '', mySubstring);
  }

  frame 23 {
    mySubstring2 = mySubstring + '.mp4';
    trace('mySubstring2: ' + mySubstring2);
  }

  frame 25 {
    Videoplayer.contentPath = mySubstring2;
    Videoplayer.volume = _root.VideoVolume;
  }

The function print_a is… special. Given an object, its job is to format it as a pretty string, complete with line-breaks. The goal, obviously, is to allow them to use string slicing to pick out the specific fields they care about (instead of just accessing them directly), but along the way, the line breaks in their print_a function went from being “\n” to just “n”. Notice how they attempt to strip the “\n”s in frame 22. Similarly, through frame 19 to 21, they attempt to trim the “.mp4n” off the end (which they just put there), so that in frame 23, they can just pop back on a “.mp4”.

All of this, amazingly, worked. Unfortunately, you can see that the request was sent on frame 9. The results of the request were checked on frame 17. Since this is video, we can assume a frame-rate of 29.97 frames per second, which means that if the request isn’t serviced in about a quarter of a second, testvar isn’t going to have any data, and everything after it is going to fail, so by frame 25, there’s no video URL to load.

Of course, to start this process, they use JSON.parse, making 90% of this code utterly redundant.

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Planet DebianVincent Bernat: Proper isolation of a Linux bridge

TL;DR: when configuring a Linux bridge, use the following commands to enforce isolation:

# bridge vlan del dev br0 vid 1 self
# echo 1 > /sys/class/net/br0/bridge/vlan_filtering

A network bridge (also commonly called a “switch”) brings several Ethernet segments together. It is a common element in most infrastructures. Linux provides its own implementation.

A typical use of a Linux bridge is shown below. The hypervisor is running three virtual hosts. Each virtual host is attached to the br0 bridge (represented by the horizontal segment). The hypervisor has two physical network interfaces:

  • eth0 is attached to a public network providing various services for the virtual hosts (DHCP, DNS, NTP, routers to Internet, …). It is also part of the br0 bridge.
  • eth1 is attached to an infrastructure network providing various services to the hypervisor (DNS, NTP, configuration management, routers to Internet, …). It is not part of the br0 bridge.

Typical use of Linux bridging with virtual machines

The main expectation of such a setup is that while the virtual hosts should be able to use resources from the public network, they should not be able to access resources from the infrastructure network (including resources hosted on the hypervisor itself, like an SSH server). In other words, we expect total isolation between the green domain and the purple one.

That’s not the case. From any virtual host:

# ip route add 192.168.14.3/32 dev eth0
# ping -c 3 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=59 time=0.644 ms
64 bytes from 192.168.14.3: icmp_seq=2 ttl=59 time=0.829 ms
64 bytes from 192.168.14.3: icmp_seq=3 ttl=59 time=0.894 ms

--- 192.168.14.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.644/0.789/0.894/0.105 ms

Why?

There are two main factors behind this behavior:

  1. A bridge can accept IP traffic. This is a useful feature if you want Linux to act as a bridge and provide some IP services to bridge users (a DHCP relay or a default gateway). This is usually done by configuring the IP address on the bridge device: ip addr add 192.0.2.2/25 dev br0.
  2. An interface doesn’t need an IP address to process incoming IP traffic. Additionally, by default, Linux will answer ARP requests regardless of the incoming interface.

Bridge processing

After turning an incoming Ethernet frame into a socket buffer, the network driver transfers the buffer to the netif_receive_skb() function. The following actions are executed:

  1. copy the frame to any registered global or per-device taps (e.g. tcpdump),
  2. evaluate the ingress policy (configured with tc),
  3. hand over the frame to the device-specific receive handler, if any,
  4. hand over the frame to a global or device-specific protocol handler (e.g. IPv4, ARP, IPv6).

For a bridged interface, the kernel has configured a device-specific receive handler, br_handle_frame(). This function won’t allow any additional processing in the context of the incoming interface, except for STP and LLDP frames or if “brouting” is enabled [1]. Therefore, the protocol handlers are never executed in this case.

After a few additional checks, Linux will decide if the frame has to be locally delivered:

  • the entry for the target MAC in the FDB is marked for local delivery, or
  • the target MAC is a broadcast or a multicast address.

In this case, the frame is passed to the br_pass_frame_up() function. A VLAN-related check is optionally performed. The socket buffer is attached to the bridge interface (br0) instead of the physical interface (eth0), is evaluated by Netfilter and sent back to netif_receive_skb(). It will go through the four steps a second time.
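
As an aside, the FDB entries marked for local delivery can be inspected with the bridge utility from iproute2; the entries flagged as permanent (typically the MAC addresses of the bridge and of its ports) are the ones delivered locally. The exact flags shown vary between iproute2 versions:

# bridge fdb show | grep permanent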

IPv4 processing

When a device doesn’t have a protocol-independent receive handler, a protocol-specific handler will be used:

# cat /proc/net/ptype
Type Device      Function
0800          ip_rcv
0011          llc_rcv [llc]
0004          llc_rcv [llc]
0806          arp_rcv
86dd          ipv6_rcv

Therefore, if the Ethernet type of the incoming frame is 0x800, the socket buffer is handled by ip_rcv(). Among other things, the three following steps will happen:

  • If the frame destination address is neither the MAC address of the incoming interface, nor a multicast address, nor a broadcast address, the frame is dropped (“not for us”).
  • Netfilter gets a chance to evaluate the packet (in a PREROUTING chain).
  • The routing subsystem will decide the destination of the packet in ip_route_input_slow(): is it a local packet, should it be forwarded, should it be dropped, should it be encapsulated? Notably, the reverse-path filtering is done during this evaluation in fib_validate_source().

Reverse-path filtering (also known as uRPF, or unicast reverse-path forwarding, RFC 3704) enables Linux to reject traffic on interfaces which it should never have originated: the source address is looked up in the routing tables and if the outgoing interface is different from the current incoming one, the packet is rejected.

ARP processing

When the Ethernet type of the incoming frame is 0x806, the socket buffer is handled by arp_rcv().

  • As with IPv4, if the frame is not for us, it is dropped.
  • If the incoming device has the NOARP flag, the frame is dropped.
  • Netfilter gets a chance to evaluate the packet (configuration is done with arptables).
  • For an ARP request, the values of arp_ignore and arp_filter may trigger a drop of the packet.

IPv6 processing

When the Ethernet type of the incoming frame is 0x86dd, the socket buffer is handled by ipv6_rcv().

  • As with IPv4, if the frame is not for us, it is dropped.
  • If IPv6 is disabled on the interface, the packet is dropped.
  • Netfilter gets a chance to evaluate the packet (in a PREROUTING chain).
  • The routing subsystem will decide the destination of the packet. However, unlike IPv4, there is no reverse-path filtering [2].

Workarounds

There are various methods to fix the situation.

We can completely ignore the bridged interfaces: as long as they are attached to the bridge, they cannot process any upper layer protocol (IPv4, IPv6, ARP). Therefore, we can focus on filtering incoming traffic from br0.

It should be noted that for IPv4, IPv6 and ARP protocols, the MAC address check can be circumvented by using the broadcast MAC address.

Protocol-independent workarounds

The four following fixes will drop IPv4, ARP and IPv6 packets alike.

Using a VLAN-aware bridge

Linux 3.9 introduced the ability to use VLAN filtering on bridge ports. This can be used to prevent any local traffic:

# echo 1 > /sys/class/net/br0/bridge/vlan_filtering
# bridge vlan del dev br0 vid 1 self
# bridge vlan show
port    vlan ids
eth0     1 PVID Egress Untagged
eth2     1 PVID Egress Untagged
eth3     1 PVID Egress Untagged
eth4     1 PVID Egress Untagged
br0     None

This is the most efficient method since the frame is dropped directly in br_pass_frame_up().
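
As a side note, on recent kernels the same knob can also be flipped through netlink rather than sysfs; assuming an iproute2 version with support for bridge options, the following should be equivalent to the echo above:

# ip link set dev br0 type bridge vlan_filtering 1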

Using ingress policy

It’s also possible to drop the bridged frame early after it has been re-delivered to netif_receive_skb() by br_pass_frame_up(). The ingress policy of an interface is evaluated before any handler. Therefore, the following commands will ensure that no local delivery happens (the source interface of the packet being the bridge interface):

# tc qdisc add dev br0 handle ffff: ingress
# tc filter add dev br0 parent ffff: u32 match u8 0 0 action drop

In my opinion, this is the second most efficient method.

Using ebtables

Just before re-delivering the frame to netif_receive_skb(), Netfilter gets a chance to issue a decision. It’s easy to configure it to drop the frame:

# ebtables -A INPUT --logical-in br0 -j DROP

However, to the best of my knowledge, this part of Netfilter is known to be inefficient.

Using namespaces

Isolation can also be obtained by moving all the bridged interfaces into a dedicated network namespace and configuring the bridge inside this namespace:

# ip netns add bridge0
# ip link set netns bridge0 eth0
# ip link set netns bridge0 eth2
# ip link set netns bridge0 eth3
# ip link set netns bridge0 eth4
# ip link del dev br0
# ip netns exec bridge0 brctl addbr br0
# for i in 0 2 3 4; do
>    ip netns exec bridge0 brctl addif br0 eth$i
>    ip netns exec bridge0 ip link set up dev eth$i
> done
# ip netns exec bridge0 ip link set up dev br0
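
If brctl from bridge-utils is not available, the two brctl calls can be replaced with iproute2 equivalents; this is a reformulation of the same setup, not the original commands:

# ip netns exec bridge0 ip link add name br0 type bridge
# for i in 0 2 3 4; do
>    ip netns exec bridge0 ip link set dev eth$i master br0
> done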

The frame will still wander a bit inside the IP stack, wasting some CPU cycles and increasing the possible attack surface. But ultimately, it will be dropped.

Protocol-dependent workarounds

If one of the previous workarounds is already applied, there is no need to apply one of the protocol-dependent fixes below, unless you require multiple layers of security. It's still interesting to know them, because it is not uncommon to already have them in place.

ARP

The easiest way to disable ARP processing on a bridge is to set the NOARP flag on the device. The ARP packet will be dropped as the very first step of the ARP handler.

# ip link set arp off dev br0
# ip l l dev br0
8: br0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 50:54:33:00:00:04 brd ff:ff:ff:ff:ff:ff

arptables can also drop the packet quite early:

# arptables -A INPUT -i br0 -j DROP

Another way is to set arp_ignore to 2 for the given interface. The kernel will only answer ARP requests whose target IP address is configured on the incoming interface. Since the bridge interface doesn't have any IP address, no ARP requests will be answered.

# sysctl -qw net.ipv4.conf.br0.arp_ignore=2

Disabling ARP processing is not a sufficient workaround for IPv4. A user can still insert the appropriate entry in its neighbor cache:

# ip neigh replace 192.168.14.3 lladdr 50:54:33:00:00:04 dev eth0
# ping -c 1 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=49 time=1.30 ms

--- 192.168.14.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.309/1.309/1.309/0.000 ms

As the check on the target MAC address is quite loose, they don’t even need to guess the MAC address:

# ip neigh replace 192.168.14.3 lladdr ff:ff:ff:ff:ff:ff dev eth0
# ping -c 1 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=49 time=1.12 ms

--- 192.168.14.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.129/1.129/1.129/0.000 ms

IPv4

The earliest place to drop an IPv4 packet is with Netfilter [3]:

# iptables -t raw -I PREROUTING -i br0 -j DROP

If Netfilter is disabled, another possibility is to enable strict reverse-path filtering for the interface. In this case, since there is no IP address configured on the interface, the packet will be dropped during the route lookup:

# sysctl -qw net.ipv4.conf.br0.rp_filter=1

Another option is the use of a dedicated routing rule. Compared to the reverse-path filtering option, the packet will be dropped a bit earlier, still during the route lookup.

# ip rule add iif br0 blackhole

IPv6

Linux provides a way to completely disable IPv6 on a given interface. The packet will be dropped as the very first step of the IPv6 handler:

# sysctl -qw net.ipv6.conf.br0.disable_ipv6=1

Like for IPv4, it’s possible to use Netfilter or a dedicated routing rule.

About the example

In the above example, the virtual hosts get ICMP replies because the replies are routed through the infrastructure network to the Internet (e.g., the hypervisor has a default gateway that also acts as a NAT router to the Internet). This may not always be the case.

If you want to check whether you are “vulnerable” despite not getting an ICMP reply, look at the guest's neighbor table to see if you got an ARP reply from the host:

# ip route add 192.168.14.3/32 dev eth0
# ip neigh show dev eth0
192.168.14.3 lladdr 50:54:33:00:00:04 REACHABLE

If you didn’t get a reply, you could still have issues with IP processing. Add a static neighbor entry before checking the next step:

# ip neigh replace 192.168.14.3 lladdr ff:ff:ff:ff:ff:ff dev eth0

To check whether IP processing is enabled, look at the bridge host's network statistics:

# netstat -s | grep "ICMP messages"
    15 ICMP messages received
    15 ICMP messages sent
    0 ICMP messages failed

If the counters are increasing, the host is processing incoming IP packets.

One-way communication still allows a lot of bad things, like DoS attacks. Additionally, if the hypervisor happens to also act as a router, the reach is extended to the whole infrastructure network, potentially reaching weak devices (e.g., PDUs) that expose an SNMP agent. And if one-way communication is all that's needed, the attacker can spoof their source IP address, bypassing IP-based authentication.


  1. A frame can be forcibly routed (L3) instead of bridged (L2) by “brouting” the packet. This action can be triggered using ebtables.

  2. For IPv6, reverse-path filtering needs to be implemented with Netfilter, using the rpfilter match.

  3. If the br_netfilter module is loaded, the net.bridge.bridge-nf-call-iptables sysctl has to be set to 0. Otherwise, you also need to use the physdev match to not drop IPv4 packets going through the bridge.

Planet DebianDaniel Pocock: What is the risk of using proprietary software for people who prefer not to?

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.

A question of leadership

Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype.

Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did?

When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future.

Research has shown that when communicating with people, what they see and how you communicate is ninety-three percent of the impression you make. What you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary like an iPhone or Facebook will have far more impact than the words you say.

It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.

The many faces of proprietary software

One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.

There is no need to give up

Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not.

I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.

An orderly migration to free software

In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems. There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software.

On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor.

As well as technical concerns about a sudden migration, promoters of free software frequently encounter political issues too. For example, the IT manager in a company may be five years from retirement and not concerned about the employer's long-term ability to extricate itself from a web of Microsoft licenses once he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before being able to work around a belligerent IT manager like this.

No deal is better than a bad deal

People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?

Burning bridges behind you

For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach?

Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future.

For those who hope to build a successful startup company or be part of one, how would you feel if you reached your goal and then the rug was pulled out from under you when a proprietary software vendor or cloud service you depend on changed the rules?

Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.

Planet DebianMichal Čihař: Weblate 2.13

Weblate 2.13 has been released today pretty much on the schedule. The most important change being more fine grained access control and some smaller UI improvements. There are other new features and bug fixes as well.

Full list of changes:

  • Fixed quality checks on translation templates.
  • Added quality check to trigger on losing translation.
  • Add option to view pending suggestions from user.
  • Add option to automatically build component lists.
  • Default dashboard for unauthenticated users can be configured.
  • Add option to browse 25 random strings for review.
  • History now indicates string change.
  • Better error reporting when adding new translation.
  • Added per language search within project.
  • Group ACLs can now be limited to certain permissions.
  • The per-project ACLs are now implemented using Group ACLs.
  • Added more fine grained privileges control.
  • Various minor UI improvements.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate at https://weblate.org; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English Gammu phpMyAdmin SUSE Weblate | 0 comments

,

Planet DebianJoey Hess: starting debug-me and a new devblog

I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.)

I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on. http://joeyh.name/devblog/

Planet DebianMatthew Garrett: Disabling SSL validation in binary apps

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing 43e820 and 43e82c to 24030000 instead of 24030001 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
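
For the byte-level patching itself, here is a hedged shell sketch. It assumes the file offset equals the virtual address minus a 0x400000 load address (verify against the program headers with readelf -l) and writes the word big-endian to match the listing above:

for vaddr in 0x43e820 0x43e82c; do
  # 0x24030000 is "li v1,0"; the original word was 0x24030001
  off=$(( vaddr - 0x400000 ))
  printf '\x24\x03\x00\x00' | dd of=target bs=1 seek=$off conv=notrunc
done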

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


TEDA new civic gathering, awarding disobedience, and the case for resettlement

As usual, the TED community has lots of news to share this week. Below, some highlights.

A new civic gathering. To cope with political anxiety after the 2016 elections, Eric Liu has started a gathering called Civic Saturday. He explained the event in The Atlantic as “a civic analogue to church: a gathering of friends and strangers in a common place to nurture a spirit of shared purpose. But it’s not about church religion or synagogue or mosque religion. It’s about American civic religion—the creed of liberty, equality, and self-government that truly unites us.” The gatherings include quiet meditation, song, readings of civic texts, and yes, a sermon. The next Civic Saturday happens April 8 in Seattle — and Eric’s nonprofit Citizens University encourages you to start your own. (Watch Eric’s TED Talk)

Medical research facilitated by apps. The Scripps Translational Science Institute is teaming up with WebMD for a comprehensive study of pregnancy using the WebMD pregnancy app.  By asking users to complete surveys and provide data on their pregnancy, the study will shed light on “one of the least studied populations in medical research,” says STSI director Dr. Eric Topol. The researchers hope the results will provide insights that medical professionals can use to avoid pregnancy complications. (Watch Eric’s TED Talk)

There’s a new type of cloud! While cloud enthusiasts have documented the existence of a peculiar, wave-like cloud formation for years, there’s been no official recognition of it until now. Back in 2009, Gavin Pretor-Pinney, of the Cloud Appreciation Society, proposed to the World Meteorological Society that they add the formation to the International Cloud Atlas, the definitive encyclopedia of clouds, which hadn’t been updated since 1987. On March 24, the Meteorological Society released an updated version of the Atlas, complete with an entry for the type of cloud that Pretor-Pinney had proposed adding. The cloud was named asperitas, meaning “roughness.” (Watch Gavin’s TED Talk)

What neuroscience can teach law. Criminal statutes require juries to assess whether or not the defendant was aware that they were committing a crime, but a jury’s ability to accurately determine the defendant’s mental state at the time of the crime is fraught with problems. Enter neuroscience. Read Montague and colleagues are using neuroimaging and machine learning techniques to study if and how brain activity differs for the two mental states. The research is in early stages, but continued research may help shed scientific light on a legally determined boundary. (Watch Read’s TED Talk)

Why we should award disobedience. After announcing the $250,000 prize last summer, the MIT Media Lab has begun to accept nominations for its first-ever Disobedience Award. Open to groups and individuals engaged in an extraordinary example of constructive disobedience, the prize honors work that undermines traditional structures and institutions in a positive way, from politics and science to advocacy and art. “You don’t change the world by doing what you’re told,” Joi Ito notes, a lesson that has been a long-held practice for the MIT group, who also recently launched their own initiative for space exploration. Nominations for the award are open now through May 1. (Watch Joi’s TED Talk)

The next generation of biotech entrepreneurs. The Innovative Genomics Institute, led by Jennifer Doudna, announced the winners of its inaugural Entrepreneurial Fellowships. Targeted at early-career scientists, the fellowship provides research funding plus business training and mentorship, an entrepreneurial focus that helps scientists create practical impact through commercialization of their work. “I’ve seen brilliant ideas that fizzle out because startup companies just can’t break into the competitive biotechnology scene,” Doudna says. “With more time to develop their ideas and technology, our fellows will have the head start needed to earn the confidence of investors.” (Watch Jennifer’s TED Talk)

The case for resettlement. Since the 1980s, the dominant international approach for the resettlement of refugees has been the humanitarian silo, a camp often located in countries that border war zones. But such host countries are often ill-equipped to bear the brunt. Indeed, many countries place severe restrictions on refugee participation within their communities and labor markets, creating what Alexander Betts describes in The Guardian as an indefinite, even unavoidable, dependency on aid. In this thought-provoking excerpt of his co-authored book, Betts outlines an economic argument for refugee resettlement, arguing that “refugees need to be understood as much in terms of development and trade as humanitarianism.” (Watch Alexander’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.


TEDA spacecraft’s final mission … and other news from TED speakers

Please enjoy your weekly roundup of TED-related news:

Good luck and farewell to the Cassini spacecraft. Launched 20 years ago, NASA’s Cassini spacecraft will begin its final mission on April 26.  The spacecraft will embark on a series of 22 dives through the space between Saturn and its rings, transmitting data that may help us understand the origins of Saturn’s rings and the makeup of the planet, explained NASA’s James Green. After completing the dives, Cassini will run out of fuel and disintegrate over the ringed planet on September 15, 2017. (Watch James’ TED Talk)

Rwanda joins AIMS. The African Institute for Mathematical Sciences has a new campus in Kigali, Rwanda! Launched as part of an agreement with the Government of Rwanda’s Ministry of Education, the Kigali campus marks the sixth country of expansion for AIMS, founded by TED Prize winner Neil Turok, whose centers of excellence also stretch across Cameroon, Ghana, Senegal, South Africa and Tanzania. (Watch Neil’s TED Talk)

A newly discovered dinosaur. Jack Horner can add having a dinosaur named after him to his résumé. The recently discovered Daspletosaurus horneri, or “Horner’s frightful lizard,” lived in Montana around 75 million years ago and is a cousin of the T. Rex. It stood at 2.2 meters tall and, as its name hints, it had a large horn behind each eye. A scaly face dotted with tactile sensory organs (similar to the ones modern crocodiles have) provided their snouts with sensitivity similar to fingertips. Its discovery provides new insight into how tyrannosaurids evolved. This new species appears to have evolved directly from its sister species, Daspletosaurus torosus. The finding supports the theory of anagenesis, or direct evolution without branching, in which a species changes enough over time from its ancestral form to become a new species. (Watch Jack’s TED Talk)

Changing the economics of an illegal economy. At a recent hearing in front of the Senate Commerce, Science and Transportation Committee, Caleb Barlow discussed the state of cybercrime and the ways in which new technologies help not only to reduce such crime, but also address the skills gap that exists within the cybersecurity workforce. Invited as part of the Senate’s review of emerging technologies and their impact on the future of cybersecurity, Barlow argues that one of the most alarming aspects of cybercrime involves the manipulation of data by hackers, where “we move beyond stolen information and money to an even more damaging issue: a loss of trust.” Barlow concludes that massive coordination by criminals today requires an equally organized mode of response by cybersecurity experts, who must embrace collaborative practices like threat sharing in order to properly manage their cybersecurity. (Watch Caleb’s TED Talk)

Awards aplenty for TED Prize winner Raj Panjabi. The winner of the 2017 TED Prize has racked up a stack of additional honors over the past month. He was named one of the four recipients of the 2017 Skoll Award for Social Entrepreneurship for bringing lifesaving health care to remote regions of Liberia, and he spoke yesterday at the Skoll World Forum in Oxford, England, about his work with Last Mile Health. In addition, Raj also landed in spot #28 on Fortune‘s list of “The World’s Greatest Leaders for 2017,” and became one of the Schwab Foundation Social Entrepreneurs of the Year for 2017. Raj will share his TED Prize wish for the world at the TED2017 conference in Vancouver on April 25. Find out how to watch live through TED Cinema Experience. (Keep an eye out for Raj’s TED talk!)

The future of medical imaging. Moving on from Facebook and Oculus, Mary Lou Jepsen has founded Openwater, a startup working to turn MRI-quality imaging into simple, wearable technology. Unlike MRIs, the startup uses near-infrared light for its imaging and if successful, the technology has incredible implications for the diagnosis and treatment of diseases. Their hope is to create a device that enables users to receive detailed information about their brains and bodies in real-time, such as clogged arteries, internal bleeding, and neurological disorders. The company is still in R&D to determine what their first product will be, but Jepsen spoke in depth about the startup at South by Southwest 2017. (Watch Mary Lou’s TED Talk)

Attacking counterfeit with neuroscience. In collaboration with the European Central Bank, David Eagleman has helped create a new currency design for the European Union, one that lets anyone spot a fake. Displayed on the EU’s €50 note, one of the most counterfeited currencies in the world, the design integrates the face of Europa into the bill’s security features, displaying the Phoenician princess as both a hologram and as a watermark. The reason, according to Eagleman, is that the human eye proves itself far more adept at spotting inconsistencies across faces instead of buildings. “The human brain is massively specialized for faces, but has little neural real estate devoted to edifices. As forged watermarks are generally hand-drawn, it would be much easier to spot an imperfect face than an imperfect building.” (Watch David’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.


Planet DebianSean Whitton: coffeereddit

Most people I know can handle a single coffee per day, sometimes even forgetting to drink it. I never could understand how they did it. Talking about this with a therapist I realised that the problem isn’t necessary the caffeine, it’s my low tolerance of less than razor sharp focus. Most people accept they have slumps in their focus and just work through them. binarybear on reddit

Planet DebianRiku Voipio: Deploying OBS

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so there is finally a relatively easy way to set up PPA-style repositories in Debian. Relatively as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure, with a clickety web UI and a command-line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

Coming from a Debian background, with OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems often means the only practical way to build an "open source" web service is to replicate the upstream CI scripts. Not in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This however means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates which packages need rebuilding and when; workers just get a list of packages to install as build dependencies. This is a major divergence from Debian, where sbuild handles dependencies client-side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" into the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions

On startup, workers connect to the OBS server, open a TCP port, and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, there is no need to set up uploads via FTP here.

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround: OBS signs release files by calling /usr/bin/sign -d /path/to/release, and replacing the obs-signd-provided sign command with your own script is easy ;)
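
Purely as an illustration (a hypothetical stand-in, not the actual workaround script), such a replacement could look like the following; the key name is a placeholder and only the -d invocation mentioned above is handled:

#!/bin/sh
# Hypothetical replacement for the obs-signd sign helper:
# write a detached, armored signature next to the release file.
set -e
if [ "$1" = "-d" ]; then
    exec gpg --batch --yes --local-user "MY-REPO-KEY" \
        --detach-sign --armor --output "$2.asc" "$2"
fi
echo "sign: unsupported invocation: $*" >&2
exit 1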

Git integration is rather bolted-on than integrated

OBS provides a method to integrate with git using services. There is no clickety UI to link to a git repo; instead you make an xml file called _service with osc. There is no way to have a debian/ tree in git.
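
For reference, a minimal _service sketch for a git checkout might look like this, assuming the commonly deployed tar_scm service is enabled on the instance (the URL is a placeholder). The file is then committed like any other source file with osc add _service and osc commit:

<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://example.com/project.git</param>
    <param name="revision">master</param>
  </service>
</services>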

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all, I'm rather satisfied with OBS. If you have a home-grown jenkins-based (or similar) solution for building DEB/RPM packages, you should definitely consider OBS. For simpler uses there is no need to install OBS yourself; the openSUSE public OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real usecases anyways.

Rondam RamblingsWelcome to bizarro world

It's official: the world has gone completely insane. Yesterday, United Airlines forcibly dragged (literally!) a man off of one of their flights because they decided that their employees were more important than their customers.  Then, instead of doing what any decent human being would have done (i.e. apologize for the obvious and egregious mistake and promise a review and overhaul of their

Planet DebianAntoine Beaupré: A report from Netconf: Day 2

This article covers the second day of the informal Netconf discussions, held on April 4, 2017. Topics discussed this day included the binding of sockets in VRF, identification of eBPF programs, inconsistencies between IPv4 and IPv6, changes to data-center hardware, and more. (See this article for coverage from the first day of discussions).

How to bind to specific sockets in VRF

One of the first presentations was from David Ahern of Cumulus, who presented a few interesting questions for the audience. His first was the problem of binding sockets to a given interface. Right now, there are four different ways this can be done:

  • the old SO_BINDTODEVICE generic socket option (see socket(7))
  • the IP_PKTINFO, IP-specific socket option (see ip(7)), introduced in Linux 2.2
  • the IP_UNICAST_IF flag, introduced in Linux 3.3 for WINE
  • the IPv6 scope ID suffix, part of the IPv6 addressing standard

So there's a problem of having too many ways of doing the same thing, something that cannot really be fixed without breaking ABI compatibility. But even worse, conflicts between those options are not reported by the kernel so it's possible for a user to set up socket flags in a way that certain flags override others and there are no checks made or errors reported. It was agreed that the user should get some notification of conflicting changes here, at least.
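
For concreteness, here is a hedged C sketch (not code from the discussion) showing two of those mechanisms applied to a UDP socket and an interface named eth0; in practice you would pick one, and error handling is omitted:

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>

int bind_to_eth0(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* SO_BINDTODEVICE takes the interface name (historically
       restricted to privileged processes). */
    const char ifname[] = "eth0";
    setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname, sizeof(ifname));

    /* IP_UNICAST_IF takes the interface index in network byte
       order and is accessible to non-root users. */
    int idx = htonl(if_nametoindex("eth0"));
    setsockopt(fd, IPPROTO_IP, IP_UNICAST_IF, &idx, sizeof(idx));

    return fd;
}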

Furthermore, binding sockets to a specific VRF (Virtual Routing and Forwarding) device is not currently possible, so Ahern asked what the best way to do this would be, considering the many options available. A use case example is a UDP multicast socket that could be bound to a specific interface within a VRF.

This is an old problem: Tom Herbert explained that there were previous discussions about making the bind() system call more programmable so that, for example, you could bind() a UDP socket to a discrete list of IP addresses or a subnet. So he identified this issue as a broader problem that should be addressed by making the interfaces more generic.

Ahern explained that it is currently possible to bind sockets to the slave device of a VRF even though that should not be allowed. He also raised the question of how the kernel should tell which socket should be selected for incoming packets. Right now, there is a scoring mechanism for UDP sockets, but that cannot be used directly in this more general case.

David Miller said that there are already different ways of specifying scope: there is the VRF layer and the namespace ("netns") layer. A long time ago, Miller reluctantly accepted the addition of netns keys everywhere, swallowing the performance cost to gain flexibility. He argued that a new key should not be added and instead existing infrastructure should be reused. Herbert argued this was exactly the reason why this should be simplified: "if we don't answer the question, people will keep on trying this". For example, one can use a VRF to limit listening addresses, but it gets complicated if we need a device for every address. It seems the consensus evolved towards using IP_UNICAST_IF, added back in 2012, which is accessible to non-root users. It is currently limited to UDP and RAW sockets, but it could be extended for TCP.

XDP and eBPF program identification

Ahern then turned to the problem of extracting BPF programs from the kernel. He gave the example of a simple cBPF (classic BPF) filter that checks for ARP packets. If the filter is read back from the kernel, the user gets a blob of binary data, which is hard to interpret. There is a kernel verifier that can show C-like output, but that is also difficult to interpret. Ahern then added annotations to his slide that showed what the original program actually does, which was a good demonstration of why such a feature is needed.

Ahern explained that, at least for cBPF, it should be possible to recover the original plaintext, or at least something close to the original program. A first step would be to replace known constants (like 0x806 for ARP). Even with eBPF, it should be possible to improve the output. Alexei Starovoitov, the BPF maintainer, explained that it might make sense to start by returning information about the maps used by an eBPF program. Then more complex data structures could be inspected once we know their type.

The first priority is to get simple debugging tools working but, in the long term, the goal is a full decompiler that can reconstruct instructions into a human-readable program. The question that remains is how to return this data. Ahern explained that right now the bpf() system call copies the data to a different file descriptor, but it could just fill in a buffer. Starovoitov argued for a file descriptor; that would allow the kernel to stream everything through the same descriptor instead of having many attach points. Netlink cannot be used for this because of its asynchronous nature.

A similar issue regarding the way we identify express data path (XDP) programs (which are also written in BPF) was raised by Daniel Borkmann from Covalent. Miller explained that users will want ways to figure out which XDP program was installed, so XDP needs an introspection mechanism. We currently have SHA-1 identifiers that can be internally used to tell which binary is currently loaded but those are not exposed to user space. Starovoitov mentioned it is now just a boolean that shows if a program is loaded or not.

A use case for this, on top of just trying to figure out which BPF program is loaded, is to actually fetch the source code of a BPF program that was deployed in the field for which the source was lost. It is still uncertain that it will be possible to extract an exact copy that could then be recompiled into the same program. Starovoitov added that he needed this in production to do proper reporting.

IPv4/IPv6 equivalency

The last issue — or set of issues — that Ahern brought up was the question of inconsistencies between IPv4 and IPv6. It turns out that, because both protocols were (naturally) implemented separately, there are inconsistencies in how they are handled in the Linux kernel, which affect, among other things, the VRF framework. The first example he gave was the fact that IPv6 addresses added on the loopback interface generate unreachable routes in the main routing table, yet this doesn't happen with IPv4 addresses. Hannes Frederic Sowa explained this was part of the IPv6 specification: there are stronger restrictions on loopback interfaces in IPv6 than IPv4. Ahern explained that VRF loopback interfaces do not implement these restrictions and wanted to know if this was a problem.

Another issue is that anycast routes are added to the wrong interface. This is apparently not specific to VRF: this was done "just because Java", and has been there from day one. It seems that the Java Virtual Machine builds its own routing table and assumes this behavior, so changing this would break every JVM out there, which is obviously not acceptable.

Finally, Martin Kafai Lau asked if work should be done to merge the IPv4 and IPv6 FIB (forwarding information base) trees. The FIB tree is the data structure that represents routing tables in the Linux kernel. Miller explained that the two trees are not semantically equivalent: while IPv6 does source-address lookup and routing, IPv4 does not. We can't remove the source lookups from IPv6, because "people probably use that". According to Alexander Duyck, adding source tables to IPv4 would degrade performance to the level of IPv6 performance, which was jokingly referred to as an incentive to switch to IPv6.

More seriously, Sowa argued that using the same compressed tree IPv4 uses in IPv6 could make sense. People may want to have source routing in IPv4 as well. Miller argued that the kernel is optimized for 32-bit addresses in IPv4, and conceded that it could be scaled to 64-bit subnets, but 128-bit addresses would be much harder. Sowa suggested that they could be limited to 64 bits, as global routes that are announced over BGP usually have such a limit, and more specific routes are usually at discrete prefixes like /65, /127 (for interconnect links) or /128 (for point-to-point links). He expressed concerns over the reliability of such an implementation so, at this point, it is unlikely that the data structures could be merged. What is more likely is that the code path could be merged and simplified, while keeping the data structures separate.

Modules options substitutions

The next issue that was raised was from Jiří Pírko, who asked how to pass configuration options to a driver before the driver is initialized. Some chips require that some settings be sent before the firmware is loaded, which leads to a weird situation where there is a need to address a device before it's actually recognized by the kernel. The question can then be summarized as: how do you pass information to a device that doesn't exist yet?

The answer seems to be that devlink could do this, as it has access to the full device tree and, therefore, to devices that can be addressed by (say) PCI identifiers. Then a possible devlink command could look something like:

    devlink dev pci/0000:03:00.0 option set foo bar

This idea raised a bunch of extra questions: some devices don't have a one-to-one mapping with the PCI bridge identifiers, for example, meaning that those identifiers cannot be used to access such devices. Another issue is that you may want to send multiple settings in a single transaction, which doesn't fit well in the devlink model. Miller then proposed to let the driver initialize itself to some state and wait for configuration to be sent when necessary. Another way would be to unregister the driver and re-register with the given configuration. Shrijeet Mukherjee explained that right now, Cumulus is doing this using horrible startup script magic by retrying and re-registering, but it would be nice to have a more standard way to do this.

Control over UAPI patches

Another issue that came up was the problem of changes in the user-space API (UAPI) which break backward compatibility. Pírko said that "we have to be more careful about those changes". The problem is that reviewers are not always available to make detailed reviews of such changes and may not notice API-breaking changes. Pírko proposed creating a bot to check if a given patch introduces UAPI changes, changes in structs, or in netlink enums. Miller said he could block merges until discussions happen and that patchwork, which Miller uses to process patches from the mailing list, does some of this. He also pointed out there aren't enough test cases in the first place.

Starovoitov argued UAPI isn't special, there are other ways of breaking backward compatibility. He expressed concerns that such a bot could create a false sense that everything is fine while a patch could break compatibility and not be detected. Miller countered that UAPI is special in that "we're stuck with it forever". He then went on to propose that, since there's a maintainer (or more) for each module, he can make sure that each maintainer explicitly approves changes to those modules.

Data-center hardware changes

Starovoitov brought up the issue of a new type of hardware that is currently being deployed in data centers called a "multi-host NIC" (network interface card). It's a single NIC that is connected to multiple servers. Facebook, for example, uses this in its Yosemite platform that shoves twelve servers into a 2U rack mount, in three modules. Each module is made of four servers connected to the traditional switch fabric with a single NIC through PCI-Express. Mellanox and Broadcom also have similar devices.

One question is how to manage those devices. Since they are connected through a PCI-Express bus, Linux will see them as a NIC, yet they are also a little like switches, in that they interconnect multiple servers. Furthermore, the kernel security model assumes that a NIC is trusted, and gladly opens its own memory to NICs through DMA; this can become a huge security issue when the NIC is under the control of another server. This can especially become problematic if we consider that there could be TLS hardware offloading in the future with the introduction of in-kernel TLS stacks.

The other problem is the question of reliability: since those devices are currently "dumb", they need to be managed just like a regular NIC. If the host managing the card crashes, it could disable a whole set of servers that rely on the same NIC. There could be an election process among the servers, but that complicates significantly what used to be a simple PCI connection.

Mukherjee pointed out that the model Cisco uses for this is that the "smart NIC" is a "slave" of the main switch fabric. It's a daughter card, which makes it easier to manage from a network perspective. It is clear that Linux will need a way to represent those devices, probably through the newly introduced switchdev or DSA (distributed switch architecture), but it will be something to keep an eye on as density increases in the data center.

There were many more discussions during Netconf, too many to cover here, but in the end, Miller thanked everyone for all the interesting topics as the participants dispersed for a day off to travel to Montreal to attend the following Netdev conference.

The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Alexei Starovoitov for his time taken for a technical review of this article.

Note: this article first appeared in the Linux Weekly News.

Planet DebianAntoine Beaupré: A report from Netconf: Day 1

As is becoming traditional, two times a year the kernel networking community meets in a two-stage conference: an invite-only, informal, two-day plenary session called Netconf, held in Toronto this year, and a more conventional one-track conference open to the public called Netdev. I was invited to cover both conferences this year, given that Netdev was in Montreal (my hometown), and was happy to meet the crew of developers that maintain the network stack of the Linux kernel.

This article covers the first day of the conference which consisted of around 25 Linux developers meeting under the direction of David Miller, the kernel's networking subsystem maintainer. Netconf has no formal sessions; although some people presented slides, interruptions are frequent (indeed, encouraged) and the focus is on hashing out issues that are blocked on the mailing list and getting suggestions, ideas, solutions, and feedback from their peers.

Removing ndo_select_queue()

One of the first discussions that elicited a significant debate was the ndo_select_queue() function, a key component of the Linux polling system that determines when and how to send packets on a network interface (see netdev_pick_tx and friends). The general question was whether the use of ndo_select_queue() in drivers is a good idea. Alexander Duyck explained that Intel people were considering using ndo_select_queue() for receive/transmit queue matching. Intel drivers do not currently use the hook provided by the Linux kernel and it turns out no one is happy with ndo_select_queue(): the heuristics it uses don't really please anyone. The consensus (including from Duyck himself) seemed to be that it should just not be used anymore, or at least not used for that specific purpose.

The discussion turned toward the wireless network stack, which uses it extensively, but for other purposes. Johannes Berg explained that the wireless stack uses ndo_select_queue() for traffic classification, for example to get voice traffic through even if the best-effort queue is backed up. The wireless stack could stop using it by doing flow control completely inside the wireless stack, which already uses the fq_codel flow-control mechanism for other purposes, so porting away from ndo_select_queue() seems possible there.

The problem then becomes how to update all the drivers to change that behavior, which would be a lot of work. Still, it seems people are moving away from a generic ndo_select_queue() interface to stack-specific or even driver-specific (in the case of Intel) queue management interfaces.

refcount_t followup

There was a followup discussion on the integration of the refcount_t type into the network stack, which we covered recently. This type is meant to be an in-kernel defense against exploits based on overflowing or underflowing an object's reference count.

The consensus seems to be that having refcount_t used for debugging is acceptable, but it cannot be enabled by default. An issue that was identified is that the networking developers are fairly sure that introducing refcount_t would have a severe impact on performance, but they do not have benchmarks to prove it, something Miller identified as a problem that needs to be worked on. Miller then expressed some openness to the idea of having it as a kernel configuration option.

A similar discussion happened, on the second day, regarding the KASan memory error detector, which was covered when it was introduced in 2014. Eric Dumazet warned that there could be a lot of issues that cannot be detected by KASan because of the way the network stack often bypasses regular memory-allocation routines for performance reasons. He also noted that this can sometimes mean the stack may go over the regular 10% memory limit (the tcp_mem parameter, described in the tcp(7) man page) for certain operations, especially when reassembling out-of-order packets with lots of parallel TCP connections.

Therefore it was proposed that these special memory-recycling tricks could be optionally disabled, at run- or compile-time, to allow proper memory tracking. Dumazet argued this was a situation similar to refcount_t in that we need a way to disable the high-performance shortcuts in order to make the network stack easier to debug with KASan.

The problem with optional parameters is that they are often disabled in production or even by default, which, in turn, means that critical bugs cannot actually be found because the code paths are not tested. When I asked Dumazet about this, he explained that Google performs integration testing of new kernels before putting them in production, and those toggles could be enabled there to find and fix those bugs. But he agreed that certain code paths are then not tested until the code gets deployed in production.

So it seems the status quo remains: security folks want to improve the reliability of the kernel, but the network folks can't afford the performance cost. Yet it was clear in the discussions that the team cares about security issues and wants those issues to be fixed; the impact of some of the solutions is just too big.

Lightweight wireless management packet access

Berg explained that some users need to have high-performance access to certain management frames in the wireless stack and wondered how best to expose those to user space. The wireless stack already allows users to clone a network interface in "monitor" mode, but this has a big performance cost, as the radiotap header needs to be constructed from scratch and the packet header needs to be copied. As wireless improves and the bandwidth rises to gigabit levels, this can become a significant bottleneck for packet sniffers or reporting software that needs to know precisely what's going on over the air, outside of the regular access point client operation.

It seems the proper way to do this is with an eBPF program. As Miller summarized, just add another API call that allows loading a BPF program into the kernel and then those users can use a BPF filtering point to get the statistics they need. This will require an extra hook in the wireless stack, but it seems like this is the way that will be taken to implement this feature.

VLAN 0 inconsistencies

Hannes Frederic Sowa brought up the seemingly innocuous question of "how do we handle VLAN 0?" In theory, VLAN 0 means "no VLAN". But the Linux kernel currently handles this differently depending on whether the VLAN module is loaded and whether a VLAN 0 interface was created. Sometimes the VLAN tag is stripped, sometimes not.

It turns out the semantics of this were accidentally changed the last time the code was touched: the behavior was originally consistent but is now broken. Sowa therefore got the go-ahead to fix this and make the behavior consistent again.

Loopy fun

Then came the turn of Jamal Hadi Salim, the maintainer of the kernel's traffic-control (tc) subsystem. The first issue he brought up is a problem in the tc REDIRECT action that can create infinite loops within the kernel. The problem can be easily alleviated when loops are created on the same interface: checks can be added that just drop packets coming from the same device and rate-limit logging to avoid a denial-of-service (DoS) condition.

The more serious problem occurs when a packet is forwarded from (say) interface eth0 to eth1 which then promptly redirects it from eth1 back to eth0. Obviously, this kind of problem can only be created by a user with root access so, at first glance, those issues don't seem that serious: admins can shoot themselves in the foot, so what?

But things become a little more serious when you consider the container case, where an untrusted user has root access inside a container and should have constrained resource limitations. Such a loop could allow this user to deploy an effective DoS attack against a whole group of containers running on the same machine. Even worse, this endless loop could possibly turn into a deadlock in certain scenarios, as the kernel could try to transmit the packet on the same device it originated from and block, progressively filling the queues and eventually completely breaking network access. Florian Westphal argued that a container can already create DoS conditions, for example by doing a ping flood.

According to Salim, this whole problem was created when two bits used for tracking such packets were reclaimed from the skb structure used to represent packets in the kernel. Those bits were a simple TTL (time to live) field that was incremented on each loop; packets were dropped once a pre-determined limit was reached, breaking infinite loops. Salim asked everyone whether this should be fixed or whether we should just forget about the issue and move on.

Miller proposed to keep a one-behind state for the packet, fixing the simplest case (two interfaces). The general case, however, would require a bitmap of all the interfaces to be scanned, which would impose a large overhead. Miller said an attempt to fix this should somehow be made. The root of the problem is that the network maintainers are trying to reduce the size of the skb structure, because it's used in many critical paths of the network stack. Salim's position is that, without the TTL field, there is no way to fix the general case here, and this constitutes a security issue. So either the bits need to be brought back, or we need to live with the inherent DoS threat.

Dumping large statistics sets

Another issue Salim brought up was the question of how to export large statistics sets from the kernel. It turns out that some use cases may end up dumping a lot of data. Salim mentioned a real-world tc use case that calls for reading six million entries. The current netlink-based API provides a way to get only 20 entries at a time, which means it takes forever to dump the state of all those policy actions. Salim has a patch that changes the dump size to be eight times NLMSG_GOOD_SIZE, which already improves performance by an order of magnitude, although there are issues with checking the user-space buffer size there.

But a more complete solution is needed. What Salim proposed was a way to ask only for the states that changed since the last dump was requested. He has a patch to add a last_access field to the netlink_callback structure used by netlink_dump() to output data; that raised the question of how to actually use that field. Since Salim fetches that data every five seconds, he figured he could just tell the kernel to return all the nodes that changed in that period. But then if the dump takes more than five seconds to complete, the next dump may be missing states that changed during the extra delay. An alternative mechanism would be for the user-space utility to keep the time stamp it requested and use that as a delta for the next dump.

It turns out this is a larger problem than just tc. Dumazet mentioned this was an issue with fq_codel classes: he would even like to be able to dump those statistics faster than every five seconds. Roopa Prabhu mentioned that Cumulus also has similar problems dumping stats from bridges, so clearly a more generic solution is needed here. There is, however, a fundamental problem with dumping large statistics sets from the kernel: those statistics are constantly changing while the dump is created and, unless versioning or locking mechanisms are used — which would slow things down — the data returned is bound to be only an approximation of reality. Salim promised to send a set of RFC patches to further the discussion of this issue, but during the following Netdev conference, Berg published a patch to fix this ten-year-old issue, which brought cheers from the audience.

The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Berg, Dumazet, Salim, and Sowa for their time taken for a technical review of this article.

Note: this article first appeared in the Linux Weekly News.

CryptogramShadow Brokers Releases the Rest of Their NSA Hacking Tools

Last August, an unknown group called the Shadow Brokers released a bunch of NSA tools to the public. The common guesses were that the tools were discovered on an external staging server, and that the hack and release was the work of the Russians (back then, that wasn't controversial). This was me:

Okay, so let's think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it's a signal to the Obama Administration: "Before you even think of sanctioning us for the DNC hack, know where we've been and what we can do to you."

They published a second, encrypted, file. My speculation:

They claim to be auctioning off the rest of the data to the highest bidder. I think that's PR nonsense. More likely, that second file is random nonsense, and this is all we're going to get. It's a lot, though.

I was wrong. On November 1, the Shadow Brokers released some more documents, and two days ago they released the key to that original encrypted archive:

EQGRP-Auction-Files is CrDj"(;Va.*NdlnzB9M?@K2)#>deB7mN

I don't think their statement is worth reading for content. I still believe Russia is more likely to be the perpetrator than China.

There's not much yet on the contents of this dump of Top Secret NSA hacking tools, but it can't be a fun weekend at Ft. Meade. I'm sure that by now they have enough information to know exactly where and when the data got stolen, and maybe even detailed information on who did it. My guess is that we'll never see that information, though.

EDITED TO ADD (4/11): Seems like there's not a lot here.

Krebs on SecurityFake News at Work in Spam Kingpin’s Arrest?

Over the past several days, many Western news media outlets have predictably devoured thinly-sourced reporting from a Russian publication that the arrest last week of a Russian spam kingpin in Spain was related to hacking attacks linked to last year’s U.S. election. While there is scant evidence that the spammer’s arrest had anything to do with the election, the success of that narrative is a sterling example of how the Kremlin’s propaganda machine is adept at manufacturing fake news, undermining public trust in the media, and distracting attention away from the real story.

Russian President Vladimir Putin tours RT facilities. Image: DNI


On Saturday, news broke from RT.com (formerly Russia Today) that authorities in Spain had arrested 36-year-old Peter “Severa” Levashov, one of the most-wanted spammers on the planet and the alleged creator of some of the nastiest cybercrime engines in history — including the Storm worm, and the Waledac and Kelihos spam botnets.

But the RT story didn’t lead with Levashov’s alleged misdeeds or his primacy among junk emailers and virus writers. Rather, the publication said it interviewed Levashov’s wife Maria, who claimed that Spanish authorities said her husband was detained because he was suspected of being involved in hacking attacks aimed at influencing the 2016 U.S. election.

The RT piece is fairly typical of one that covers the arrest of Russian hackers in that the story quickly becomes not about the criminal charges but about how the accused is being unfairly treated or maligned by overzealous or misguided Western law enforcement agencies.

The RT story about Levashov, for example, seems engineered to leave readers with the impression that some bumbling cops rudely disturbed the springtime vacation of a nice Russian family, stole their belongings, and left a dazed and confused young mother alone to fend for herself and her child.

This should not be shocking to any journalist or reader who has paid attention to U.S. intelligence agency reports on Russia’s efforts to influence the outcome of last year’s election. A 25-page dossier released in January by the Office of the Director of National Intelligence describes RT as a U.S.-based but Kremlin-financed media outlet that is little more than an engine of anti-Western propaganda controlled by Russian intelligence agencies.

Somehow, this small detail was lost on countless Western media outlets, who seemed all too willing to parrot the narrative constructed by RT regarding Levashov’s arrest. With a brief nod to RT’s “scoop,” these publications back-benched the real story (the long-sought capture of one of the world’s most wanted spammers) and led with an angle supported by the flimsiest of sourcing.

On Monday, the U.S. Justice Department released a bevy of documents detailing Levashov’s alleged history as a spammer, and many of the sordid details in the allegations laid out in the government’s case echoed those in a story I published early Monday. Investigators said they had dismantled the Kelihos botnet that Severa allegedly built and used to distribute junk email, but they also emphasized that Levashov’s arrest had nothing to do with hacking efforts tied to last year’s election.

“Despite Russian news media reports to the contrary, American officials said Mr. Levashov played no role in attempts by Russian government hackers to meddle in the 2016 presidential election and support the candidacy of Donald J. Trump,” The New York Times reported.

Nevertheless, from the Kremlin’s perspective, the RT story is almost certainly being viewed as an unqualified success: It distracted attention away from the real scoop (a major Russian spammer was apprehended); it made much of the news media appear unreliable and foolish by regurgitating fake news; and it continued to sow doubt in the minds of the Western public about the legitimacy of democratic process.

Levashov’s wife may well have been told her husband was wanted for political hacking. Likewise, Levashov could have played a part in Russian hacking efforts aimed at influencing last year’s election. As noted here and in The New York Times earlier this week, the Kelihos botnet does have a historic association with election meddling: It was used during the Russian election in 2012 to send political messages to email accounts on computers with Russian Internet addresses.

According to The Times, those emails linked to fake news stories saying that Mikhail D. Prokhorov, a businessman who was running for president against Vladimir V. Putin, had come out as gay. It’s also well established that the Kremlin has a history of recruiting successful criminal hackers for political and espionage purposes.

But the less glamorous truth in this case is that the facts as we know them so far do not support the narrative that Levashov was involved in hacking activities related to last year’s election. To insist otherwise absent any facts to support such a conclusion only encourages the spread of more fake news.

CryptogramNew Destructive Malware Bricks IoT Devices

There's a new malware called BrickerBot that permanently disables vulnerable IoT devices by corrupting their storage capability and reconfiguring kernel parameters. Right now, it targets devices with open Telnet ports, but we should assume that future versions will have other infection mechanisms.

Slashdot thread.

Worse Than FailureMy Machine Is Full

Close-up photo of a 3.5-inch floppy disk

In the mid-90s, Darren landed his first corporate job at a company that sold IT systems to insurance brokers. Their software ran on servers about the size of small chest freezers—outdated by the 70s, let alone the 90s. Every month, they'd push out software fixes by sending each customer between 3 and 15 numbered floppy disks. The customers would have to insert the first disk, type UPDATE into the console, and wait for "Insert Disk Number X" prompts to appear on screen.

It wasn't slick, but it worked. The firm even offered a recycling service for the hundreds of disks that eventually built up at customer sites.

While working there, Darren became unfortunately well acquainted with one particular insurance broker, Mr. Lasco. The man refused all offers of training ("Too expensive!") and paid support ("You don't know enough!"), and was too good to read instructions, but could always be counted on to tie up some poor tech support rep's phone every time a new update went out. He never let them charge the call against his company's account, or even thanked anybody for the help. When told about it, management just shrugged their shoulders. Mr. Lasco's firm did have a big contract with them, after all.

Early one Monday morning, Darren answered his phone, only to receive an ear-splitting tirade. As Mr. Lasco ranted, Darren held in a sigh and used the time to start filing a support ticket.

"What's the nature of your problem, sir?" he asked during the gap in which Mr. Lasco paused to breathe.

"I—it's—your damn update won't work! Again!" Mr. Lasco sputtered. "My machine is full!"

Darren frowned. That wasn't an error message that the update process would ever throw. "'Full?' Hmm, maybe one of the server's hard drives is out of disk space? Maybe you need to—"

"No, you fool, it's full! It's FULL!" Mr. Lasco snapped. "I KNEW this would happen eventually! Do you know how much money I'm losing right now with this thing down? I want one of your people to come out here and fix this immediately!"

The demand prompted an eyeroll from Darren, who already knew Mr. Lasco would never pay for a consultant's time. Still, this was a perfect way to get him off the phone. "Why don't I forward you to your sales rep?"

To Darren's amazement—and pity—a software engineer was dispatched within the hour to drive several hundred miles to Mr. Lasco's site, with instructions to call Darren with updates.

By late afternoon, the call came. The engineer was laughing so hard, he couldn't talk.

"Is everything OK?" prompted Darren.

"Wait'll you hear this." The engineer struggled to breathe. "There's a gap in the server casing. All this time, Lasco's been inserting update disks into the server, and couldn't force any more in. I popped the case open, and swear to God, there must be a few hundred disks crammed in there, easy. Years of updates!"

Darren joined in the mirth, but it was short-lived. The poor engineer had to spend 7 hours onsite carefully extracting floppy disks wedged between drives and memory cards, sorting them, then applying the updates in order.

The one silver lining to the whole affair was that Mr. Lasco never called them again.


Planet DebianReproducible builds folks: Reproducible Builds: week 102 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 2 and Saturday April 8 2017:

Media coverage

Toolchain development and fixes

Reviews of unreproducible packages

27 package reviews have been added, 14 have been updated and 17 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Aaron M. Ucko (1)
  • Adrian Bunk (1)
  • Chris Lamb (2)

tests.reproducible-builds.org

Misc.

This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

,

LongNowThese 1,000-Year-Old Windmills Work Perfectly, But Their Future is in Doubt

In Nashtifan, Iran, windmills constructed over a thousand years ago out of clay, straw and wood are not only still standing; they work just as well as they did when they were first built.

In designing and building the Clock of the Long Now, we have investigated many technologies built for the long-term. Some, like Iran’s windmills and Japan’s Ise Shrine, are ancient. Others, like the Svalbard Global Seed Vault, the Yucca Mountain nuclear waste repository, and the Mormon Genealogical Vault, are more recent efforts. All offer important lessons in why some technologies last and others do not.

Muhammad Etebari is the last custodian of northeastern Iran's ancient windmills.

Long Now Executive Director Alexander Rose, discussing his excursions to these remote sites in a 02011 Seminar, noted that one of the main reasons a technology lasts is that there are people and institutions built to maintain it. In the case of the Nashtifan windmills, Muhammad Etebari is the last remaining custodian of the mills, and he cannot find an apprentice. After centuries of keeping the windmills running by passing the responsibility of maintenance from one generation to the next, the future of the ancient, durable windmills of Nashtifan is now in doubt.

 

Watch National Geographic’s “See the 1,000-Year-Old Windmills Still in Use Today” in full. 

Watch Alexander Rose’s 02011 Long Now Seminar “Millennial Precedent” in full.

Planet DebianDaniel Pocock: If Alan Turing was born today, would he be a Muslim?

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web).


Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the allies an advantage against Hitler.

While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.

Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.

Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".

What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and, to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17 year old Kurdish-Iranian beaten to the brink of death by a crowd in south London.

One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.

In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.

A deeper insight into Turing's fate

With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).

In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.

In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians, their own country having been the first casualty to Hitler, they were also both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.

It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?


Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich:

Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?

It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?

Playing ping-pong with children

If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.

When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants?

Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things) was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them "Take your shirt off. I'm going to attack you.". How many British MP's see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.

According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.

While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing?

Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it. A country that denies itself the opportunity to be great by subjecting itself to be ruled under the "divide and conquer" mantra of the colonial era.

Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.

Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.

Sociological ImagesAnimal “inspiration porn”: Implications for othering and accommodation

The term “inspiration porn” was coined by disability activist Stella Young. Aimed at able-bodied viewers, inspiration porn features people with disabilities who appear happy or are doing things, alongside an encouraging message. She explains:

Inspiration porn is an image of a person with a disability, often a kid, doing something completely ordinary — like playing, or talking, or running, or drawing a picture, or hitting a tennis ball — carrying a caption like “your excuse is invalid” or “before you quit, try.”

Or, the famous one: “The only disability is a bad attitude.”

She called it porn quite deliberately, arguing that inspiration porn is like sexual porn in that the images “objectify one group of people for the benefit of another group of people.”

And, as with sexual porn, it sometimes involves animals.

At Disability Intersections, Anna Hamilton suggests that inspiration porn involving animals is another step removed from recognizing the full humanity of people with disabilities. These “inspiring” stories, she argues, “provide a way for nondisabled people to talk about and engage with disability in a facile way.”

Disability isn’t just othered; it’s cute, adorable, fuzzy.

Hamilton continues:

If one is constantly gawking and aww-ing over pictures and stories about animals with disabilities, then they don’t have to spend time thinking about actual disabled people, or the ableism against disabled humans that still exists.

When featuring animals, accommodation is no longer the least a society can do: a basic acknowledgement that human beings in all forms deserve access to their societies. Instead, it’s over-the-top, idiosyncratic and rare, even excessive in its generosity. To find inspiration in a turtle who has been fitted with a tiny skateboard, for example, is to frame accommodation as something one does out of the goodness of one’s heart, not a human and civil right.

Inspiration porn others and objectifies people with disabilities. When featuring animals, it dehumanizes them, too.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: A Piece of the Variable

In the Star Trek episode, “A Piece of the Action”, Kirk and his crew travel to Sigma Iotia II, a planet last visited before the Prime Directive of non-interference existed. Well, they left behind a book, Chicago Mobs of the Twenties, which the Iotians took as a holy guide, to be imitated and followed even if they didn’t quite understand it, a sort of sci-fi cargo-cult. Cue the crew of the Enterprise being threatened with Tommy Guns and guys doing bad Al Capone impressions.

Michael’s co-worker may have fallen into a similar trap. An advanced developer came to him, and gave him a rule: in PHP, since variables may be used without being declared, it’s entirely possible to have an unset variable. Thus, it’s a good practice to check and see if the variable is set before you use it. Normally, we use this to check if, for example, the submitted form contains certain fields.

Like Bela Okmyx, the “Boss” of Sigma Iotia II, this developer may have read the rules, but they certainly didn’t understand them.

$numDIDSthisMonth=0;
if(isset($numDIDSthisMonth)) {
   if($numDIDSthisMonth == "") {
      $numDIDSthisMonth=0;
   }
}


$numTFDIDSthisMonth=0;
if(isset($numTFDIDSthisMonth)) {
   if($numTFDIDSthisMonth == "") {
      $numTFDIDSthisMonth=0;
   }
}
/*
$numDIDSthisMonthToCharge=$_POST['numDIDSthisMonthToCharge'];
if(isset($numDIDSthisMonthToCharge)){
   if($numDIDSthisMonthToCharge == ""){
      $numDIDSthisMonthToCharge=0;
   }
}
*/
$STDNPthisMonth=0;
if(isset($STDNPthisMonth)) {
   if($STDNPthisMonth == "") {
      $STDNPthisMonth=0;
   }
}
$TFNPthisMonth=0;
if(isset($TFNPthisMonth)) {
   if($TFNPthisMonth == "") {
      $TFNPthisMonth=0;
   }
}
$E911thisMonth=0;
if(isset($E911thisMonth)) {
   if($E911thisMonth == "") {
      $E911thisMonth=0;
   }
}
$E411thisMonth=0;
if(isset($E411thisMonth)) {
   if($E411thisMonth == "") {
      $E411thisMonth=0;
   }
}
/*
$PBthisMonth=0;
if(isset($PBthisMonth)) {
   if($PBthisMonth == "") {
      $PBthisMonth=0;
   }
}
*/
$TFthisMonth=0;
if(isset($TFthisMonth)) {
   if($TFthisMonth == "") {
      $TFthisMonth=0;
   }
}

As you can see, in this entire block of variables, we first set the variable, then we check if the variable is set (on the off-chance it magically got unset between lines of code), and then we double-check whether it's an empty string and, if it is, make it zero.

For extra credit, some of these variables are used in the application. Most of them are not actually used anywhere. See if you can guess which ones!


Planet DebianMichal Čihař: New free software projects on Hosted Weblate

Hosted Weblate provides also free hosting for free software projects. Finally I got to processing requests a bit faster, so there are just a few new projects.

This time, the newly hosted projects include:

  • Pext - Python-based extendable tool
  • Dino - modern Jabber/XMPP Client using GTK+/Vala

If you want to support this effort, please donate to Weblate; recurring donations especially are welcome to keep this service alive. You can make them on Liberapay or Bountysource.


Krebs on SecurityAlleged Spam King Pyotr Levashov Arrested

Authorities in Spain have arrested a Russian computer programmer thought to be one of the world’s most notorious spam kingpins.

Spanish police arrested Pyotr Levashov under an international warrant executed in the city of Barcelona, according to Reuters. Russian state-run television station RT (formerly Russia Today) reported that Levashov was arrested while vacationing in Spain with his family.

Spamdot.biz moderator Severa listing prices to rent his Waledac spam botnet.


According to numerous stories here at KrebsOnSecurity, Levashov was better known as “Severa,” the hacker moniker used by a pivotal figure in many Russian-language cybercrime forums. Severa was the moderator for the spam subsection of multiple online communities, and in this role served as the virtual linchpin connecting virus writers with huge spam networks — including some that Severa allegedly created and sold himself.

Levashov is currently listed as #7 in the world’s Top 10 Worst Spammers list maintained by anti-spam group Spamhaus. The U.S. Justice Department maintains that Severa was the Russian partner of Alan Ralsky, a convicted American spammer who specialized in “pump-and-dump” spam schemes designed to artificially inflate the value of penny stocks.

Levashov allegedly went by the aliases Peter Severa and Peter of the North (Pyotr is the Russian form of Peter). My reporting indicates that — in addition to spamming activities — Severa was responsible for running multiple criminal operations that paid virus writers and spammers to install “fake antivirus” software. So-called “fake AV” uses malware and/or programming tricks to bombard the victim with misleading alerts about security threats, hijacking the PC until its owner either pays for a license to the bogus security software or figures out how to remove the invasive program.

A screenshot of a fake antivirus or "scareware" affiliate program run by "Severa," allegedly the cybercriminal alias of Pyotr Levashov, the Russian arrested in Spain last week.


There is ample evidence that Severa is the cybercriminal behind the Waledac spam botnet, a spam engine that for several years infected between 70,000 and 90,000 computers and was capable of sending approximately 1.5 billion spam messages a day.

In 2010, Microsoft launched a combined technical and legal sneak attack on the Waledac botnet, successfully dismantling it. The company would later do the same to the Kelihos botnet, a global spam machine which shared a great deal of computer code with Waledac.

The connection between Waledac/Kelihos and Severa is supported by data leaked in 2010 after hackers broke into the servers of pharmacy spam affiliate program SpamIt. According to the stolen SpamIt records, Severa — this time using the alias “Viktor Sergeevich Ivashov” — brought in revenues of $438,000 and earned commissions of $145,000 spamming rogue online pharmacy sites over a 3-year period.

Severa also was a moderator of Spamdot.biz (pictured in the first screenshot above), a vetted, members-only forum that at one time attracted almost daily visits from most of Russia’s top spammers. Leaked Spamdot forum posts for Severa indicate that he hails from Saint Petersburg, Russia’s second-largest city.

According to an exhaustive analysis published in my book — Spam Nation: The Inside Story of Organized Cybercrime — Severa likely made more money renting Waledac and other custom spam botnets to other spammers than blasting out junk email on his own. For $200, vetted users could hire one of his botnets to send 1 million pieces of spam. Junk email campaigns touting auction and employment scams cost $300 per million, and phishing emails designed to separate unwary email users from their usernames and passwords could be blasted out through Severa’s botnet for the bargain price of $500 per million.

The above-referenced Reuters story on Levashov’s arrest cited reporting from Russian news outlet RT which associated Levashov with hacking attacks linked to alleged interference in last year’s U.S. election. But subsequent updates from Reuters cast doubt on those claims.

“A U.S. Department of Justice official said it was a criminal matter without an apparent national security connection,” Reuters added in an update to an earlier version of its story.

The New York Times reports that Russian news media did not say if Levashov was suspected of being involved in that activity. However, The Times piece observes that the Kelihos botnet does have a historic association with election meddling, noting the botnet was used during the Russian election in 2012 to send political messages to email accounts on computers with Russian Internet addresses. According to The Times, those emails linked to fake news stories saying that Mikhail D. Prokhorov, a businessman who was running for president against Vladimir V. Putin, had come out as gay.

,

Harald WelteSIGTRAN/SS7 stack in libosmo-sigtran merged to master

As I wrote in my blog post in February, I was working towards a more fully-featured SIGTRAN stack in the Osmocom (C-language) universe.

The trigger for this is the support of 3GPP-compliant AoIP (with a BSSAP/SCCP/M3UA/SCTP protocol stacking), but it is of a much more general nature.

The code has finally matured in my development branch(es) and is now ready for mainline inclusion. It's a series of about 77 (!) patches, some of which already are the squashed results of many more incremental development steps.

The result is as follows:

  • General SS7 core functions maintaining links, linksets and routes
  • xUA functionality for the various User Adaptations (currently SUA and M3UA supported)
    • MTP User SAP according to ITU-T Q.701 (using osmo_prim)
    • management of application servers (AS)
    • management of application server processes (ASP)
    • ASP-SM and ASP-TM state machine for ASP, AS-State Machine (using osmo_fsm)
    • server (SG) and client (ASP) side implementation
    • validated against ETSI TS 102 381 (by means of Michael Tuexen's m3ua-testtool)
    • support for dynamic registration via RKM (routing key management)
    • osmo-stp binary that can be used as Signal Transfer Point, with the usual "Cisco-style" command-line interface that all Osmocom telecom software has.
  • SCCP implementation, with strong focus on Connection Oriented SCCP (as that's what the A interface uses).
    • osmo_fsm based state machine for SCCP connection, both incoming and outgoing
    • SCCP User SAP according to ITU-T Q.711 (osmo_prim based)
    • Interfaces with underlying SS7 stack via MTP User SAP (osmo_prim based)
    • Support for SCCP Class 0 (unit data) and Class 2 (connection oriented)
    • All SCCP + SUA Address formats (Global Title, SSN, PC, IPv4 Address)
    • SCCP and SUA share one implementation, where SCCP messages are transcoded into SUA before processing, and re-encoded into SCCP after processing, as needed.

I have already ported experimental OsmoMSC and OsmoHNB-GW over to libosmo-sigtran. They're now all just M3UA clients (ASPs) which connect to osmo-stp to exchange SCCP messages back and forth between them.

What's next on the agenda is to

  • finish my incomplete hacks to introduce IPA/SCCPlite as an alternative to SUA and M3UA (for backwards compatibility)
  • port over OsmoBSC to the SCCP User SAP of libosmo-sigtran
    • validate with the SCCPlite lower layer against existing SCCPlite MSCs
  • implement BSSAP / A-interface procedures in OsmoMSC, on top of the SCCP-User SAP.

If those steps are complete, we will have a single OsmoMSC that can talk both IuCS to the HNB-GW (or RNCs) for 3G/3.5G as well as AoIP towards OsmoBSC. We will then have fully SIGTRAN-enabled the full Osmocom stack, and are all on track to bury the OsmoNITB that was devoid of such interfaces.

If any reader is interested in interoperability testing with other implementations, either on M3UA or on SCCP or even on A or Iu interface level, please contact me by e-mail.

Planet DebianEnrico Zini: Ansible config for my stereo

I bought a Raspberry Pi 2 and its case. I could not reuse the existing SD card because it wants a MicroSD.

A wise person once told me:

First you do it, then you document it, then you automate it.

I had done the first two, and now I've redone the whole setup with ansible, here: stereo.tar.xz.

Hifi with a Raspberry Pi 2 and its case

Planet DebianSam Hartman: When "when" is too hard a question: SQLAlchemy, Python datetime, and ISO8601

A new programmer asked on a work chat room how timezones are handled in databases. He asked if it was a good idea to store things in UTC. The senior programmers all laughed as we told some of our horror stories with timezones. Yes, UTC is great; if only it were that simple.
About a week later I was designing the schema for a blue sky project I'm implementing. I had to confront time in all its Pythonic horror.
Let's start with the datetime.datetime class. Datetime objects optionally include a timezone. If no timezone is present, several methods such as timestamp treat the object as a local time in the system's timezone. The timestamp method returns a POSIX timestamp, which is always expressed in UTC, so knowing the input timezone is important. The now method constructs such an object from the current time.
However, other methods act differently. The utcnow method constructs a datetime object that has the UTC time, but is not marked with a timezone. So, for example, datetime.fromtimestamp(datetime.utcnow().timestamp()) produces the wrong result unless your system timezone happens to have the same offset as UTC.
It's also possible to construct a datetime object that includes a UTC time and is marked as having a UTC time. The utcnow method never does this, but you can pass the UTC timezone into the now method and get that effect. As you'd expect, the timestamp method returns the correct result on such a datetime.
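
To make the trap concrete, here is a minimal sketch using only the standard library; the variable names are mine, and the printed difference is the local UTC offset (zero only if the system timezone is UTC):

from datetime import datetime, timezone

naive = datetime.utcnow()                   # current UTC time, but unmarked
aware = naive.replace(tzinfo=timezone.utc)  # the same reading, marked as UTC

# timestamp() treats the unmarked value as local time, so the two results
# differ by exactly the local UTC offset (in seconds)
print(aware.timestamp() - naive.timestamp())

# the round trip through POSIX time therefore lands on a different instant
# than the one utcnow() described
print(datetime.fromtimestamp(naive.timestamp()))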
Now enter SQLAlchemy, one of the more popular Python ORMs. Its DATETIME type has an argument that tries to request a column capable of storing a timezone from the underlying database. You aren't guaranteed to get this though; some databases don't provide that functionality. With PostgreSQL, I do get such a column, although something in SQLAlchemy is not preserving the timezones (though it does adjust the time correctly). That is, I'll store a UTC time in an object, flush it to my session, and then read back the same time represented in my local timezone (marked as my local timezone). You'd think this would be safe.
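
For reference, requesting such a column looks roughly like this; a sketch with a made-up Event model, where timezone=True is the flag in question:

from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Event(Base):
    __tablename__ = 'event'
    id = Column(Integer, primary_key=True)
    # timezone=True asks the dialect for a timezone-capable column:
    # PostgreSQL maps it to TIMESTAMP WITH TIME ZONE, while SQLite has
    # no such type and the request has no effect there
    date_col = Column(DateTime(timezone=True))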
Enter SQLite. SQLite makes life hard for people wanting to store time; it seems to want to store things as strings. That's fairly incompatible with storing a timezone and doing any sort of comparisons on dates. SQLAlchemy does not try to store a timezone in SQLite. It just trims any timezone information from the datetime. So, if I do something like
d = datetime.now(timezone.utc)
obj.date_col = d
session.add(obj)
session.flush()
assert obj.date_col == d # fails
assert obj.date_col.timestamp() == d.timestamp() # fails
assert d == obj.date_col.replace(tzinfo = timezone.utc) # finally succeeds

There are some unfortunate consequences of this. If you mark your datetimes with timezone information (even if it is always the same timezone), whether two datetimes representing the same datetime compare equal depends on whether objects have been flushed to the session yet. If you don't mark your objects with timezones, then you may not store timezone information on other databases.
At least if you use only the methods we've discussed so far, you're reasonably safe if you use local time everywhere in your application and don't mark your datetimes with timezones. That's undesirable because as our new programmer correctly surmised, you really should be using UTC. This is particularly true if users of your database might span multiple timezones.
You can use UTC time and not mark your objects as UTC. This will give the wrong data with a database that actually does support timezones, but will sort of work with SQLite. You need to be careful never to convert your datetime objects into POSIX time as you'll get the wrong result.
It turns out that my life was even more complicated because parts of my project serialize data into JSON. For that serialization, I've chosen ISO 8601. You've probably seen that format: 2017-04-09T18:17:27.340410+00:00. Datetime provides the convenient isoformat method to print timestamps in the ISO 8601 format. If the datetime has a timezone indication, it is included in the ISO formatted string. If not, then no timezone indication is included. You know how I mentioned that datetime treats a value without a timezone marker as local time? Yeah, well, that's not what 8601 does: UTC all the way, baby! And at least the parser in the iso8601 module will always attach timezone markers. So, if you use datetime to print a timestamp without a timezone marker and then read that back in to construct a new datetime on the deserialization side, then you'll get the wrong time. OK, so mark things with timezones then. Well, if you use local time, then the time you get depends on whether you print the ISO string before or after session flush (before or after SQLAlchemy trims the timezone information as it goes to SQLite).
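
Here is that asymmetry in miniature, a sketch that assumes the iso8601 module's documented default of treating unmarked strings as UTC:

from datetime import datetime
import iso8601

naive = datetime(2017, 4, 9, 18, 17, 27)   # no timezone marker
text = naive.isoformat()                   # '2017-04-09T18:17:27', still no marker

parsed = iso8601.parse_date(text)
print(parsed)  # 2017-04-09 18:17:27+00:00: the parser assumed UTC
# if the original value was meant as local time, the deserialized datetime
# now names a different instant, shifted by the local UTC offset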
It turns out that I had the additional complication of one side of my application using SQLite and one side using PostgreSQL. Remember how I mentioned that something between SQLAlchemy and PostgreSQL was recasting my times in the local timezone (although keeping the time the same)? Well, consider how that's going to work. I serialize with the timezone marker on the PostgreSQL side. I get an ISO 8601 localtime marked with the correct timezone marker. I deserialize on the SQLite side. Before session flush, I get a local time marked as localtime. After session flush, I get a local time with no marking. That's bad. If I further serialize on the SQLite side, I'll get that local time incorrectly marked as UTC. Moreover, all the times being locally generated on the SQLite side are UTC, and as we explored, SQLite really only wants one timezone in play.
I eventually came up with the following approach (a short code sketch follows the list):

  1. If I find myself manipulating a time without a timezone marking, assert that its timezone is UTC not localtime.

  2. Always use UTC for times coming into the system.

  3. If I'm generating an ISO 8601 time from a datetime that has a timezone marker in a timezone other than UTC, represent that time as a UTC-marked datetime adjusting the time for the change in timezone.
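
In code, the approach boils down to a pair of small helpers. This is a sketch of my convention with names I made up, not a general-purpose library:

from datetime import datetime, timezone

def ensure_utc(dt):
    # rule 1: an unmarked datetime in this codebase must already be UTC
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    # rule 3: re-express any other timezone as UTC, adjusting the time
    return dt.astimezone(timezone.utc)

def to_iso(dt):
    # serialize for JSON: always emit an ISO 8601 string marked +00:00
    return ensure_utc(dt).isoformat()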


This is way too complicated. I think that both datetime and SQLAlchemy's SQLite time handling have a lot to answer for. I think SQLAlchemy's core time handling may also have some to answer for, but I'm less sure of that.

Planet DebianAntoine Beaupré: Contribute your skills to Debian in Montreal, April 14 2017

Join us in Montreal, on April 14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.

The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome :)

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.

Here's what we will be doing:

  • We will triage bug reports that are blocking the release of the upcoming version of Debian.

  • Debian package maintainers will fix some of these bugs.

Goals and principles

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.

We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.

We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting the Debian Code of Conduct, that sets the standards in terms of behaviour for the whole event, including communication (public and private) before, while and after.

The space where this event will take place is unfortunately not accessible to wheelchairs. Food (including vegetarian options) should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.

What we will be doing

This will be an informal session to confirm and fix bugs in Debian. If you have never worked with Debian packages, this is a good opportunity to learn about packaging and bugtracker usage.

Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.

We will also try to actually fix bugs by testing patches and uploading fixes into Debian itself. Antoine Beaupré, a seasoned Debian developer, will be available to sponsor uploads and teach people about basic Debian packaging skills.

Where? When? How to register?

See https://wiki.debian.org/BSP/2017/04/ca/Montreal for the exact address and time.

Planet DebianChristoph Egger: Secured OTP Server (ASIS CTF 2017)

This weekend was ASIS Quals weekend again. And just like last year they have quite a lot of nice crypto-related puzzles which are fun to solve (and not "the same as every ctf").

Secured OTP Server is pretty much the same as the First OTP Server (it's actually a "fixed" version meant to enforce the intended attack). However, the template phrase now starts with enough stars to prevent taking a simple cube root:

# Challenge code as provided; the imports are reconstructed here (PyCrypto),
# and gen_passphrase() is defined elsewhere in the handout.
from Crypto.PublicKey import RSA
from Crypto.Util.number import bytes_to_long

def gen_otps():
    template_phrase = '*************** Welcome, dear customer, the secret passphrase for today is: '

    OTP_1 = template_phrase + gen_passphrase(18)
    OTP_2 = template_phrase + gen_passphrase(18)

    otp_1 = bytes_to_long(OTP_1)
    otp_2 = bytes_to_long(OTP_2)

    nbit, e = 2048, 3
    privkey = RSA.generate(nbit, e = e)
    pubkey  = privkey.publickey().exportKey()
    n = getattr(privkey.key, 'n')

    r = otp_2 - otp_1
    if r < 0:
        r = -r
    IMP = n - r**(e**2)
    if IMP > 0:
        c_1 = pow(otp_1, e, n)
        c_2 = pow(otp_2, e, n)
    return pubkey, OTP_1[-18:], OTP_2[-18:], c_1, c_2

Now let A = template * 2^(18*8) and B = passphrase, so OTP = A + B. c therefore is (A+B)^3 mod n == A^3 + 3A^2B + 3AB^2 + B^3 (mod n). Notice that only the A^3 term is larger than n, and that A is statically known. Therefore we can calculate A^3 // n, multiply that by n, and add the result to c to "undo" the modulo operation. With that it's only iroot and long_to_bytes to the solution. Note that we're talking about OTP and c generically here: the code actually produced two OTP and c values, but you can use either one just fine.
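
To spell out the sizes (a sketch; these bit lengths are estimates derived from the code above, with A roughly 768 bits, B exactly 144 bits, and N 2048 bits):

\[
c \;=\; (A+B)^3 - kN \;=\; A^3 + 3A^2B + 3AB^2 + B^3 - kN,
\qquad k = \left\lfloor \frac{(A+B)^3}{N} \right\rfloor
\]

A^3 is roughly 2300 bits and wraps the 2048-bit modulus, while 3A^2B (about 1680 bits), 3AB^2 (about 1060 bits) and B^3 (432 bits) all stay below N. The cross terms can therefore push k at most one above the publicly computable A^3 // N; if iroot ever reported an inexact root you would retry with k + 1, but the script below simply ignores the exactness flag, which works out fine here.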

#!/usr/bin/python3

import sys
from Crypto.Util.number import bytes_to_long  # the original used a local "util" helper module
from gmpy2 import iroot

PREFIX = b'*************** Welcome, dear customer, the secret passphrase for today is: '
OTPbase = bytes_to_long(PREFIX + b'\x00' * 18)  # A: the known prefix with the 18 unknown bytes zeroed

N = 27990886688403106156886965929373472780889297823794580465068327683395428917362065615739951108259750066435069668684573174325731274170995250924795407965212988361462373732974161447634230854196410219114860784487233470335168426228481911440564783725621653286383831270780196463991259147093068328414348781344702123357674899863389442417020336086993549312395661361400479571900883022046732515264355119081391467082453786314312161949246102368333523674765325492285740191982756488086280405915565444751334123879989607088707099191056578977164106743480580290273650405587226976754077483115441525080890390557890622557458363028198676980513

WRAPPINGS = (OTPbase ** 3) // N  # k: how many times the cube wrapped around the modulus

C = 13094996712007124344470117620331768168185106904388859938604066108465461324834973803666594501350900379061600358157727804618756203188081640756273094533547432660678049428176040512041763322083599542634138737945137753879630587019478835634179440093707008313841275705670232461560481682247853853414820158909864021171009368832781090330881410994954019971742796971725232022238997115648269445491368963695366241477101714073751712571563044945769609486276590337268791325927670563621008906770405196742606813034486998852494456372962791608053890663313231907163444106882221102735242733933067370757085585830451536661157788688695854436646

x = N * WRAPPINGS + C  # undo the reduction: x == (A+B)**3

val, _ = iroot(x, 3)  # exact integer cube root recovers A + B
bstr = "%x" % int(val)

# two hex digits per byte; the '*' prefix keeps the hex string even-length
for i in range(0, len(bstr) // 2):
    sys.stdout.write(chr(int(bstr[2*i:2*i+2], 16)))

print()

Planet DebianMichael Stapelberg: manpages.debian.org: what’s new since the launch?

On 2017-01-18, I announced that https://manpages.debian.org had been modernized. Let me catch you up on a few things which happened in the meantime:

  • Debian experimental was added to manpages.debian.org. I was surprised to learn that adding experimental only required 52MB of disk usage. Further, Debian contrib was added after realizing that contrib licenses are compatible with the DFSG.
  • Indentation in some code examples was fixed upstream in mandoc.
  • Address-bar search should now also work in Firefox, which apparently requires a title attribute on the opensearch XML file reference.
  • manpages now specify their language in the HTML tag so that search engines can offer users the most appropriate version of the manpage.
  • I contributed mandocd(8) to the mandoc project, which debiman now uses for significantly faster manpage conversion (useful for disaster recovery/development). An entire run previously took 2 hours on my workstation. With this change, it takes merely 22 minutes. The effects are even more pronounced on manziarly, the VM behind manpages.debian.org.
  • Thanks to Peter Palfrader (weasel) from the Debian System Administrators (DSA) team, manpages.debian.org is now serving its manpages (and most of its redirects) from Debian’s static mirroring infrastructure. That way, planned maintenance won’t result in service downtime. I contributed README.static-mirroring.txt, which describes the infrastructure in more detail.

The list above is not complete, but rather a selection of things I found worth pointing out to the larger public.

There are still a few things I plan to work on soon, so stay tuned :).

,

Planet Linux AustraliaColin Charles: Speaking in April 2017

It’s been a while since I’ve blogged (I will have to catch up soon), but here are a few appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

Don MartiBunny: Internet famous?

bunny

I bought this ceramic bunny at a store on Park Street in Alameda, California. Somehow I think I have seen it before.

,

CryptogramFriday Squid Blogging: Squid Can Edit Their Own RNA

This is just plain weird:

Rosenthal, a neurobiologist at the Marine Biological Laboratory, was a grad student studying a specific protein in squid when he got an inkling that some cephalopods might be different. Every time he analyzed that protein's RNA sequence, it came out slightly different. He realized the RNA was occasionally substituting I's for A's, and wondered if squid might apply RNA editing to other proteins. He joined Tel Aviv University bioinformaticists Noa Liscovitch-Braur and Eli Eisenberg to find out.

In results published today, they report that the family of intelligent mollusks, which includes squid, octopuses and cuttlefish, features thousands of RNA editing sites in their genes. Where the genetic material of humans, insects, and other multi-celled organisms reads like a book, the squid genome reads more like a Mad Lib.

So why do these creatures engage in RNA editing when most others largely abandoned it? The answer seems to lie in some crazy double-stranded cloverleaves that form alongside editing sites in the RNA. That information is like a tag for RNA editing. When the scientists studied octopuses, squid, and cuttlefish, they found that these species had retained those vast swaths of genetic information at the expense of making the small changes that facilitate evolution. "Editing is important enough that they're forgoing standard evolution," Rosenthal says.

He hypothesizes that the development of a complex brain was worth that price. The researchers found many of the edited proteins in brain tissue, creating the elaborate dendrites and axons of the neurons and tuning the shape of the electrical signals that neurons pass. Perhaps RNA editing, adopted as a means of creating a more sophisticated brain, allowed these species to use tools, camouflage themselves, and communicate.

Yet more evidence that these bizarre creatures are actually aliens.

Three more articles. Academic paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityGamestop.com Investigating Possible Breach

Video game giant GameStop Corp. [NYSE: GME] says it is investigating reports that hackers may have siphoned credit card and customer data from its website — gamestop.com. The company acknowledged the investigation after being contacted by KrebsOnSecurity.

“GameStop recently received notification from a third party that it believed payment card data from cards used on the GameStop.com website was being offered for sale on a website,” a company spokesman wrote in response to questions from this author.

“That day a leading security firm was engaged to investigate these claims. Gamestop has and will continue to work non-stop to address this report and take appropriate measures to eradicate any issue that may be identified,” the company’s statement continued.

Two sources in the financial industry told KrebsOnSecurity that they have received alerts from a credit card processor stating that Gamestop.com was likely compromised by intruders between mid-September 2016 and the first week of February 2017.

Those same sources said the compromised data is thought to include customer card number, expiration date, name, address and card verification value (CVV2), usually a 3-digit security code printed on the backs of credit cards.

Online merchants are not supposed to store CVV2 codes, but hackers can steal the codes by placing malicious software on a company’s e-commerce site, so that the data is copied and recorded by the intruders before it is encrypted and transmitted to be processed.

GameStop would not comment on the possible timeframe of the suspected breach, or say what types of customer data might be impacted.

Based in Grapevine, Texas, GameStop generated more than $8.6 billion in revenue in 2016, although it’s unclear how much of that came through the company’s Web site. GameStop operates more than 7,000 retail stores throughout the United States, Canada, Australia, New Zealand and Europe. There is currently no indication that the company’s retail store locations may have been affected.

According to Web site statistics firm Alexa.com, Gamestop.com is the 269th most popular Web site in the United States.

“We regret any concern this situation may cause for our customers,” GameStop said in its statement. “GameStop would like to remind its customers that it is always advisable to monitor payment card account statements for unauthorized charges. If you identify such a charge, report it immediately to the bank that issued the card because payment card network rules generally state that cardholders are not responsible for unauthorized charges that are timely reported.”

Sociological ImagesOn intellectual thrashing: My thanks to Dorothy Roberts

This Flashback Friday is in honor of the 20th anniversary of Dorothy Roberts’ groundbreaking book, Killing the Black Body.

One of the most important moments of my graduate education occurred during a talk by Dorothy Roberts for the sociology department at the University of Wisconsin, Madison. At the time I had been teaching her book, Killing the Black Body. I thought this book was genius, absolutely loved it, so I was really excited to be seeing her in person.

I sat in anticipation; she was introduced and then, before she launched into the substance of her talk, she apologized for likely weaknesses in her thinking as, she explained, she had only been thinking about it for “about a year.”

I was stunned.

I couldn’t believe that Dorothy Roberts would have to think about anything for a year. In my mind, her brilliance appeared full form, in a span of mere moments, perfectly articulated.

Her comment made me realize, for the first time, that the fantastic books and expertly-crafted journal articles written by scholars were the result of hard work, not just genius. And I realized that part of the task of writing these things is to hide all of the hard work that goes into writing them. They read as if it were obvious that their conclusions are true when, in fact, those conclusions are probably just one of many sets of possible conclusions with which the author experimented. Roberts’ humble admission made me realize that all of the wild intellectual goose chases, mental thrashing, deleted passages, and revised arguments were part of my job, not evidence that I was perpetually failing.

And I was and am tremendously grateful to Dr. Roberts for that insight.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramIncident Response as "Hand-to-Hand Combat"

NSA Deputy Director Richard Ledgett described a 2014 Russian cyberattack against the US State Department as "hand-to-hand" combat:

"It was hand-to-hand combat," said NSA Deputy Director Richard Ledgett, who described the incident at a recent cyber forum, but did not name the nation behind it. The culprit was identified by other current and former officials. Ledgett said the attackers' thrust-and-parry moves inside the network while defenders were trying to kick them out amounted to "a new level of interaction between a cyber attacker and a defender."

[...]

Fortunately, Ledgett said, the NSA, whose hackers penetrate foreign adversaries' systems to glean intelligence, was able to spy on the attackers' tools and tactics. "So we were able to see them teeing up new things to do," Ledgett said. "That's a really useful capability to have."

I think this is the first public admission that we spy on foreign governments' cyberwarriors for defensive purposes. He's right: being able to spy on the attackers' networks and see what they're doing before they do it is a very useful capability. It's something that was first exposed by the Snowden documents: that the NSA spies on enemy networks for defensive purposes.

What's interesting is that another country was the first to find out about the intrusion, and that it also has offensive capabilities inside Russia's cyberattack units:

The NSA was alerted to the compromises by a Western intelligence agency. The ally had managed to hack not only the Russians' computers, but also the surveillance cameras inside their workspace, according to the former officials. They monitored the hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.

There's a myth that it's hard for the US to attribute these sorts of cyberattacks. It used to be, but for the US -- and other countries with these kinds of intelligence-gathering capabilities -- attribution is not hard. It's not fast, which is its own problem, and of course it's not perfect: but it's not hard.

Worse Than FailureError'd: Our Deepest Regrets (and 20% off your next purchase)

"I too have always felt that discount codes are a great way to express sympathy," writes Shawn A.


Peter wrote, "Those folks over at Amazon UK sure have an interesting concept of gardening tools."


"I tried to unsubscribe from a Washington Post newsletter, but I can't seem to uncheck the box," wrote Peter C.


"I, for one, love the idea of having my CPU wall-mounted," writes Daniel.


"My trial ends in 90 years you say? I think I'll 'Buy Later' instead," wrote Mike S.


Nikolaus R. writes, "Yes, Google Maps, I really got it."


"Ummm...so how does closing my PC help me avoid having to restart my PC?" wrote Daniel.


[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

CryptogramMany Android Phones Vulnerable to Attacks Over Malicious Wi-Fi Networks

There's a blog post from Google's Project Zero detailing an attack against Android phones over Wi-Fi. From Ars Technica:

The vulnerability resides in a widely used Wi-Fi chipset manufactured by Broadcom and used in both iOS and Android devices. Apple patched the vulnerability with Monday's release of iOS 10.3.1. "An attacker within range may be able to execute arbitrary code on the Wi-Fi chip," Apple's accompanying advisory warned. In a highly detailed blog post published Tuesday, the Google Project Zero researcher who discovered the flaw said it allowed the execution of malicious code on a fully updated 6P "by Wi-Fi proximity alone, requiring no user interaction."

Google is in the process of releasing an update in its April security bulletin. The fix is available only to a select number of device models, and even then it can take two weeks or more to be available as an over-the-air update to those who are eligible. Company representatives didn't respond to an e-mail seeking comment for this post.

The proof-of-concept exploit developed by Project Zero researcher Gal Beniamini uses Wi-Fi frames that contain irregular values. The values, in turn, cause the firmware running on Broadcom's wireless system-on-chip to overflow its stack. By using the frames to target timers responsible for carrying out regularly occurring events such as performing scans for adjacent networks, Beniamini managed to overwrite specific regions of device memory with arbitrary shellcode. Beniamini's code does nothing more than write a benign value to a specific memory address. Attackers could obviously exploit the same series of flaws to surreptitiously execute malicious code on vulnerable devices within range of a rogue access point.

Slashdot thread.

,

Krebs on SecuritySelf-Proclaimed ‘Nuclear Bot’ Author Weighs U.S. Job Offer

The author of a banking Trojan called Nuclear Bot — a teenager living in France — recently released the source code for his creation just months after the malware began showing up for sale in cybercrime forums. Now the young man’s father is trying to convince him not to act on a job offer in the United States, fearing it may be a trap set by law enforcement agents.

In December 2016, Arbor Networks released a writeup on Nuclear Bot (a.k.a. NukeBot) after researchers discovered the malware package for sale in the usual underground cybercrime forums for the price of USD $2,500.

The program’s author claimed the malware was written from scratch, but that it functioned similarly to the ZeuS banking trojan in that it could steal passwords and inject arbitrary content when victims visited banking Web sites.

The administration panel for Nuclear Bot. Image: IBM X-Force.

Malware analysts at IBM’s X-Force research division also examined the code, primarily because the individual selling it claimed that Nuclear Bot could bypass Trusteer Rapport, an IBM security product that many banks offer customers to help blunt the effectiveness of banking trojans.

“These claims are unfounded and incorrect,” IBM’s researchers wrote. “Rapport detection and protection against the NukeBot malware are effective on all protection layers.”

But the malware’s original author — 18-year-old Augustin Inzirillo — begs to differ, saying he released the source code for the bot late last month in part because he wanted others to be able to test his claims.

In an interview with KrebsOnSecurity, Inzirillo admits he wrote the Nuclear Bot trojan as a proof-of-concept to demonstrate a method he developed that he says bypasses Rapport. But he denies ever selling or marketing the malware, and maintains that this was done without his permission by an acquaintance with whom he shared the code privately.

“I’ve been interested in malware since I [was] a child, and I wanted to have a challenge,” Inzirillo said. “I was excited about this, and having nobody to share this with, I distributed the code to ‘friends’ who tried to profit off my work.”

After the source code for Nuclear Bot was released on Github, IBM followed up with a more in-depth examination of it, which argued that the author of the code appeared to release it in a failed bid to shore up his fragile ego.

According to IBM, a hacker calling himself “Gosya” tried to sell the malware in such a clumsy and inexperienced fashion that he managed to get himself banned from multiple cybercrime forums for violating specific rules about how such products should be sold.

“He did not have the malware tested and certified by forum admins, nor did he provide any test versions to members,” IBM researchers Limor Kessem and Ilya Kolmanovich wrote. “At the same time, he was attacked by existing competition, namely the FlokiBot vendor, who wanted to get down to the technical nitty gritty with him and find out if Gosya’s claims about his malware’s capabilities were indeed viable.”

The IBM authors continued:

“In posts where he replied to challenging questions, Gosya got nervous and defensive, raising suspicion among other forum members. This was likely a simple case of inexperience, but it cost him the trust of potential buyers.”

“For his next wrong move, Gosya started selling on additional forums under multiple monikers. When fraudsters realized that the same person was trying to vend under different names, they got even more suspicious that he was a ripper, misrepresenting or selling a product he does not possess. The issue got worse when Gosya changed the malware’s name to Micro Banking Trojan in one last attempt to buy it a new life.”

Inzirillo said the main reason he released his code was to prevent others from profiting off his creation. But now he says he regrets that decision as well.

“It was a big mistake, because now I know people will reuse my code to steal money from other people,” Inzirillo told KrebsOnSecurity in an online chat. 

Inzirillo released the code on Github with a short note explaining his motivations, and included a contact email address at a domain (inzirillo.com) set up long ago by his father, Daniel Inzirillo.

KrebsOnSecurity also reached out to Augustin’s dad, and heard back from him roughly an hour before Augustin replied to requests for an interview. Inzirillo the elder said his son used the family domain name in his source code release as part of a misguided attempt to impress him.

“He didn’t do it for money,” said Daniel Inzirillo, whose CV shows he has built an impressive career in computer programming and working for various financial institutions. “He did it to spite all the cyber shitheads. The idea was that they wouldn’t be able to sell his software anymore because it was now free for grabs.”

Daniel Inzirillo said he’s worried because his son has expressed a strong interest in traveling to the United States after receiving a job offer from a supposed recruiter at a technology firm which said it was impressed by Augustin’s coding skills.

“I am very worried for him, because some technology company told him they wanted to fly him to the U.S. for a job interview as a result of him posting that online,” Daniel Inzirillo said. “There is a strong possibility that in one or two weeks he’s going to be flying to California, and I am concerned that maybe some guy in some law enforcement agency has his sights on him.”

Augustin’s dad said he had hoped his son might choose a different profession than his own.

“I didn’t want him to do software development, I always wanted him to do something else,” Daniel said. “He was introduced to programming by a math teacher at school. As soon as he learned about this it became a passion for him. But I was so pissed off about this. Even though I have been doing software all my life, I didn’t have a good opinion about this profession. I got a degree in software development as a kind of ‘Plan B,’ but I always felt there was something missing there, that it wasn’t intellectually satisfying.”

Nevertheless, Daniel said he is proud of his son’s intellectual abilities, noting that Augustin is completely self-taught in computer programming.

“I haven’t taught him anything, although sometimes he comes and he asks me some questions,” Daniel said. “He’s a self-made man. In terms of software security and hacking, nearly everything he knows he learned by himself.”

Daniel said that after he and his wife divorced in 2012, his son went from being the first or second best student in his class to dropping out of school. After that, computers became an obsession for Augustin, he said.

Daniel said his son is extremely opinionated but not very emotionally intelligent, and he believes Augustin has strong misgivings about his chosen path. By way of example, he related a story about an incident in which Augustin was recently arrested after an altercation at a local establishment.

“When he got arrested, for no reason, he blurted out everything he was doing on his computer,” Daniel recalled. “The policemen couldn’t believe he was telling them that for no reason. I realized at that moment that he just wanted to get out. He didn’t want to continue doing what he was doing.”

Daniel said he’s deeply concerned for his kid’s future, but also recognizes that his son won’t listen to his counsel.

“He respects me, he admires me, and he knows in terms of software development I’m very good, and he wants to become like me but on the other hand he doesn’t want to listen to me,” Daniel said. “If my vision of things is written about, that might help him. But I’m also worried now that he might feel I have hijacked his notoriety. This is his story, his way of surpassing me, and he might hate me for being here.”

Augustin said he wasn’t interested in discussing his father or his family life, but he did confirm (without elaborating) that he recently was offered a job in the United States. He remains somewhat ambivalent about the opportunity, but indicated he is leaning toward accepting it.

“Well, I don’t think it’s fair that I would feel bad about getting a job because of this code, I just feel bad about having released the code,” he said. “If people want to offer me something interesting as a result, I don’t think it makes sense me saying no.”

Worse Than FailureCodeSOD: An Extinction Event

Microsoft’s C# has become an extremely popular language for “enterprise” development, and it’s sobering to think that: yes, this language has been around for 15 years at this point. That’s long enough for the language to have grown from a “sort of Java with reliability, productivity and security deleted” (James Gosling, one of the creators of Java) into a “sort of Java, but with generics and lambdas actually implemented in a useful way, and not completely broken by design”.

15 years is also more than enough time for a project to grow out of control, turning into a sprawling mass of tentacles with mouths on the ends, thrashing about looking for a programmer’s brain to absorb. Virginia N is currently locked in a struggle for sanity against one such project.

Some of the code looks like this:

if (m_ProgEnt!=null)
{
        if (m_ProgEnt.SAFF_SIG==m_PSOC_SIG)
        {
                ChangePosition("P",true,(bool)ar[6],(DateTime)ar[1],(DateTime)ar[5]);
        }
        else
        {
                ChangePosition("P",true,(bool)ar[6], (DateTime)ar[1], (DateTime)ar[5]);
        }
}
else
{

}
ChangePosition("P",true,(bool)ar[6],(DateTime)ar[1],(DateTime)ar[5]);

You’ll note that all three calls to ChangePosition do exactly the same thing. I also don’t know what the array ar holds, but apparently it’s a mixture of booleans and date-time objects.

The code-base is littered with little meaningless conditionals, which makes the meaningful ones so much more dangerous:

if ( demandesDataRowView["SGESDEM_SOC"].ToString().Equals(demandesDataRowView["SGESDEM_SIG"].ToString()) && demandesDataRowView["SGESDEM_SIG"].ToString().Length > 0 )
        sSocieteCour = demandesDataRowView["SGESDEM_SIG"].ToString();
else
        sSocieteCour = demandesDataRowView["SGESDEM_SOC"].ToString();

In another part of the code, they define a ResultsetFields object twice, storing it in the same variable, and never using the first definition for anything:

int iFieldIndex = 0;
ResultsetFields fields = new ResultsetFields(19);
//DemandesDetail fields
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REFLIVR,   iFieldIndex++,          "SGESDEMD_REFLIVR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_SOCLIVR,   iFieldIndex++,          "SGESDEMD_SOCLIVR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTEAUT,    iFieldIndex++,          "SGESDEMD_QTEAUT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_PU,                        iFieldIndex++,          "SGESDEMD_PU");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_TYPE,                      iFieldIndex++,          "SGESDEMD_TYPE");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REMCLIENT, iFieldIndex++,          "SGESDEMD_REMCLIENT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTEASORT,  iFieldIndex++,          "SGESDEMD_QTEASORT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_LBID,                      iFieldIndex++,          "SGESDEMD_LBID");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_LIVRPAR,           iFieldIndex++,          "SGESDEMD_LIVRPAR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_CMDNUM,            iFieldIndex++,          "SGESDEMD_ASORT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REFDEM,            iFieldIndex++,          "SGESDEMD_REFDEM");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_SOCDEM,            iFieldIndex++,          "SGESDEMD_SOCDEM");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTESOR,            iFieldIndex++,          "SGESDEMD_QTESOR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTERECENT,         iFieldIndex++,          "SGESDEMD_QTERECENT");



// Stock fields
fields.DefineField(STOCKFieldIndex.SREF_SOC ,   iFieldIndex++,          "SREF_SOC");
fields.DefineField(STOCKFieldIndex.SREF_COD ,   iFieldIndex++,          "SREF_COD");
fields.DefineField(STOCKFieldIndex.SREF_NSTOCK ,        iFieldIndex++,          "SREF_NSTOCK");
fields.DefineField(STOCKFieldIndex.SREF_QTERES ,        iFieldIndex++,          "SREF_QTERES");
fields.DefineField(STOCKFieldIndex.SREF_QTEASORT ,      iFieldIndex++,          "SREF_QTEASORT");

//SNIP [... 1500 lines without using fields]

iFieldIndex = 0;
fields = new ResultsetFields(29);
//DemandesDetail fields
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REFLIVR,   iFieldIndex++,          "SGESDEMD_REFLIVR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_SOCLIVR,   iFieldIndex++,          "SGESDEMD_SOCLIVR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTEAUT,    iFieldIndex++,          "SGESDEMD_QTEAUT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTELIVR,   iFieldIndex++,          "SGESDEMD_QTELIVR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_PU,                        iFieldIndex++,          "SGESDEMD_PU");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_TYPE,                      iFieldIndex++,          "SGESDEMD_TYPE");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REMCLIENT, iFieldIndex++,          "SGESDEMD_REMCLIENT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTEASORT,  iFieldIndex++,          "SGESDEMD_QTEASORT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_LBID,                      iFieldIndex++,          "SGESDEMD_LBID");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_LIVRPAR,           iFieldIndex++,          "SGESDEMD_LIVRPAR");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_CMDNUM,            iFieldIndex++,          "SGESDEMD_ASORT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_REFDEM,    iFieldIndex++,          "SGESDEMD_REFDEM");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_SOCDEM,    iFieldIndex++,          "SGESDEMD_SOCDEM");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_DOAA_ID,   iFieldIndex++,          "SGESDEMD_DOAA_ID");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_DOALID,    iFieldIndex++,          "SGESDEMD_DOALID");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_DOAA_ANNEE,        iFieldIndex++,          "SGESDEMD_DOAA_ANNEE");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_VALMASSE,  iFieldIndex++,          "SGESDEMD_VALMASSE");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTERECENT, iFieldIndex++,          "SGESDEMD_QTERECENT");
fields.DefineField(DEMANDESDETAILFieldIndex.SGESDEMD_QTESOR,    iFieldIndex++,          "SGESDEMD_QTESOR");


// Stock fields
fields.DefineField(STOCKFieldIndex.SREF_SOC ,   iFieldIndex++,          "SREF_SOC");
fields.DefineField(STOCKFieldIndex.SREF_COD ,   iFieldIndex++,          "SREF_COD");
fields.DefineField(STOCKFieldIndex.SREF_DES ,   iFieldIndex++,          "SREF_DES");
fields.DefineField(STOCKFieldIndex.SREF_NSTOCK ,        iFieldIndex++,          "SREF_NSTOCK");
fields.DefineField(STOCKFieldIndex.SREF_QTERES ,        iFieldIndex++,          "SREF_QTERES");
fields.DefineField(STOCKFieldIndex.SREF_QTEASORT ,      iFieldIndex++,          "SREF_QTEASORT");
fields.DefineField(STOCKFieldIndex.SREF_BIEN ,  iFieldIndex++,          "SREF_BIEN");
fields.DefineField(STOCKPARAMFieldIndex.SREFP_LOT ,     iFieldIndex++,          "SREFP_LOT");
fields.DefineField(STOCKPARAMFieldIndex.SREFP_KIT ,     iFieldIndex++,          "SREFP_KIT");
fields.DefineField(STOCKPARAMFieldIndex.SREFP_EPI ,     iFieldIndex++,          "SREFP_EPI");

These three different samples I’ve used here did not come from various methods throughout the code-base. No, to the contrary, these are all in the same method, lnkPrCompte_LinkClicked. That would be an event-handling method. One event handling method. In a .NET solution which contains over 70 individual DLL and executable projects, with hundreds of classes per DLL, and a whopping 65 million lines of code: roughly one line for each year the dinosaurs have been extinct.

Virginia is trying valiantly to refactor the project into something supportable. She’s tried to bring in tools like ReSharper to speed the effort along, but ReSharper takes one look at the code base, decides the dinosaurs had the right idea, and promptly dies.

A screengrab of Visual Studio 2003, showing the method lnkPrCompte_LinkClicked is over 2,000 lines in a 39,000 line file

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaDave Hall: Remote Presentations

Living in the middle of nowhere and working most of my hours in the evenings, I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events, this is a problem. My workaround has been presenting remotely. Many of my talks are available in a playlist on my YouTube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. WiFi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also difficult to pace when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode, which makes things more difficult for you as a presenter. Transitions won't work, videos won't autoplay, and any links Keynote (and PowerPoint) open will open in a new window that isn't being shared, which makes demos trickier. If you don't hide the slide thumbnails that remind you of what is coming next, the audience will see them too. Recently I worked out that printing the thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.

Chaotic IdealismQ&A: In which I feed the trolls and explain why I am ugly.

Q: Why are disabled people so ugly?

A: I feel like I ought to be insulted that you asked this question, which presumes that disabled people are indeed ugly, because in general, we’re not; we’re just ordinary-looking, some not so good-looking, some gorgeous. It’s down to luck whether we look pretty or not—well, luck and a talent for style and grooming.

I’m one of your “ugly” disabled people.

This is a photo of me at a recent protest against repealing the Affordable Care Act; the sign says “Don’t Let Us Die”. (Disabled people like myself are at risk of dying if we can’t get health care, and that’s not an overstatement.) We’re wearing party hats because the ACA turned seven years old, so we’re holding a “birthday party”. This is a candid photo taken by a news reporter, so it's not a posed picture.

Let’s analyze this. Why do I look “ugly” in this picture?


  • Loose clothing. I have autism and related sensory processing disorder, which means I wear clothing about two sizes too big. Anything tight gets on my nerves, and takes energy I could be using to deal with something else.

  • Buzz cut. The hairstyle is all about comfort. I could look “prettier” if I had long hair, but I cut it off because otherwise it gets in the way, and because I have to take extra energy to take care of long hair. So it goes.

  • Big, clunky wrap-around sunglasses. Not a highly stylish choice, but absolutely necessary for my sanity. My vision is very sensitive, and I’d get a migraine if I didn’t wear sunglasses outdoors, even on rainy days.

  • No make-up. I could look “prettier” if I wore some, but having stuff smeared on my face would take up energy I don’t have. Are we sensing a theme here?

  • Overweight. The clothes may make me look fatter than I am, but I’m still carrying about fifty extra pounds. This is unrelated to my disability, other than that because I am low-income, I am unable to afford the expensive, nutrient-dense food that would give me the nutrition I need with fewer calories. Because of my disability, it’s hard for me to cook for myself; I eat mostly prepared food, which can be quite low on nutrition.

  • Clothing. The newest piece of clothing I’m wearing is more than five years old; this is down to my low income. Due to my disability, I wear a “uniform” of black pants and polo shirt every day, which helps save energy.

  • Androgynous, but not good at fashion. I don’t look stereotypically feminine, probably because I’m not really female, nor really male. This is a sociocultural thing, a gender identity, that has little to do with disability; but because of my disability, I am unable to spend money or time on perfecting an androgynous style that “looks good”. Suits cost money. Tailored anything costs money. And all of that outfit design would take a lot of energy.

Why am I so focused on saving energy? Because I need it to do things like going to that protest. If you’re disabled, you’re on an energy budget; you have only so much of it, and if you go over your budget, the debt catches up to you in a massive way—physical illness, breakdowns, or just plain being unable to care for yourself for a while. And if I want to do anything other than just keeping myself alive, I have to cut some corners. I simplify everything. And because I do, I can splurge on things like standing on a street corner, yelling at politicians as they go into a fund raiser, because I’m aware that there are a lot of other disabled people who could never, no matter how hard they saved up their energy, make it to a protest like that even at the cost of being exhausted afterwards. And they’re at an even greater risk of dying from losing health care than I am.

But there’s something behind this, something we all should be aware of, because when we’re aware of it, it causes much less harm. It’s simply this: Humans have a sort of evolutionary survival mechanism that tells us to stay away from disease, and to mate with those who are healthy. That’s where the concepts of “ugly” and “pretty” come from; they’re just stand-ins for whether or not you’re healthy and fertile.

So once you know that, once you know that that’s just how your primitive, animal self reacts to people, you can gauge your reaction with a bit more wisdom. Your lizard brain just looks at somebody and goes, “Are they going to give me a disease? Could we have healthy babies together?” and doesn’t care about anything else. In reality, that “ugly” person could become your best friend for life; or you could find that their personality and their mind are so charming that you fall in love with them despite what your primitive lizard-brain is trying to tell you about their looks. Because there’s so much more to a relationship than that early “ugly/pretty” reaction.

Some disabled people look “ugly” in this lizard-brain way because they are ill, or because their faces are not symmetrical. Those are indications that if you mated with them, your babies might not be as healthy. However, even for the purpose of judging whether you could have healthy babies, “ugly” is a horribly imprecise standard. Nowadays, if you wanted to tell whether you could have healthy babies with somebody, you would just go to the doctor and get check-ups.

But “ugly” is something your primitive self sees. It’s not something invented by your humanity or your compassion or your ability to communicate and empathize. “Ugly” is a concept that’s been around since our ancestors were laying eggs. We’ve gone long past that now, and so should you.

,

CryptogramAPT10 and Cloud Hopper

There's a new report of a nation-state attack, presumed to be from China, on a series of managed ISPs. From the executive summary:

Since late 2016, PwC UK and BAE Systems have been assisting victims of a new cyber espionage campaign conducted by a China-based threat actor. We assess this threat actor to almost certainly be the same as the threat actor widely known within the security community as 'APT10'. The campaign, which we refer to as Operation Cloud Hopper, has targeted managed IT service providers (MSPs), allowing APT10 unprecedented potential access to the intellectual property and sensitive data of those MSPs and their clients globally. A number of Japanese organisations have also been directly targeted in a separate, simultaneous campaign by the same actor.

We have identified a number of key findings that are detailed below.

APT10 has recently unleashed a sustained campaign against MSPs. The compromise of MSP networks has provided broad and unprecedented access to MSP customer networks.

  • Multiple MSPs were almost certainly being targeted from 2016 onwards, and it is likely that APT10 had already begun to do so from as early as 2014.

  • MSP infrastructure has been used as part of a complex web of exfiltration routes spanning multiple victim networks.

[...]

APT10 focuses on espionage activity, targeting intellectual property and other sensitive data.

  • APT10 is known to have exfiltrated a high volume of data from multiple victims, exploiting compromised MSP networks, and those of their customers, to stealthily move this data around the world.

  • The targeted nature of the exfiltration we have observed, along with the volume of the data, is reminiscent of the previous era of APT campaigns pre-2013.

PwC UK and BAE Systems assess APT10 as highly likely to be a China-based threat actor.

  • It is a widely held view within the cyber security community that APT10 is a China-based threat actor.

  • Our analysis of the compile times of malware binaries, the registration times of domains attributed to APT10, and the majority of its intrusion activity indicates a pattern of work in line with China Standard Time (UTC+8).

  • The threat actor's targeting of diplomatic and political organisations in response to geopolitical tensions, as well as the targeting of specific commercial enterprises, is closely aligned with strategic Chinese interests.

I know nothing more than what's in this report, but it looks like a big one.

Press release.

Sociological ImagesWorrisome new data on the dynamics of fake news

“Fake news” has emerged as a substantial problem for democracy. The circulation of false narratives, lies, and conspiracy theories on self-described “alternative news” sites undercuts the knowledge voters rely on to make political decisions. Sometimes the spread of this misinformation is deliberate, spread by hate groups, foreign governments, or individuals bent on harming the US.

A new study offers information as to the content, connectedness, and use of these websites. Information scholar Kate Starbird performed a network analysis of Twitter users responding to mass shootings. These users denied the mainstream narrative about the shooting (arguing, for example, that the real story was being hidden from the public or that the shooting never happened at all). Since most of the fake news sites cross-promote conspiracy theories across the board, focusing on this one type of story was sufficient for mapping the networks. Here is some of what she found:

  • The sites do not share a political point of view. They are dominated by the far right, but they also include the far left, hate groups, nationalists, and Russian propaganda sites. They did strongly overlap in being anti-globalist, anti-science, and anti-mainstream media.

  • Fake news sites are highly repetitive, spreading the same conspiracies and lies, often re-posting identical content on multiple sites.
  • Users, then, aren’t necessarily being careless or undisciplined in their information gathering. They often tweet overlapping content from several different fake news sites, suggesting that they are obeying a hallmark of media literacy: seeking out multiple sources. You can see the dense network created by this use of multiple data sources in the upper left.

  • One of the main conspiracy stories promulgated by fake news sites is that the real news is fake.
  • Believing this, Twitter users who share links to fake news sites often also share links to traditional news outlets (see the connections in the network to the Washington Post, for example), but they do so primarily as evidence that their false belief was true. When the New York Times reports the mainstream story about the mass shooting, for instance, it is argued to be proof of a cover up. This is consistent with the backfire effect: exposure to facts tends to strengthen belief in misinformation rather than undermine it.

In an interview with the Seattle Times, Starbird expresses distress at her findings. “I used to be a techno-utopian,” she explained, but she is now deeply worried about the “menace of unreality.” Emerging research suggests that believing in one conspiracy theory is a risk factor for believing in another. Individuals drawn to these sites out of a concern with the safety of vaccines, for example, may come out with a belief in a Clinton-backed pedophilia ring, a global order controlled by Jews, and an aversion to the only cure for misinformation: truth. “There really is an information war for your mind,” Starbird concluded. “And we’re losing it.”

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramClever Physical ATM Attack

This is an interesting combination of computer and physical attack:

Researchers from the Russian security firm Kaspersky on Monday detailed a new ATM-emptying attack, one that mixes digital savvy with a very precise form of physical penetration. Kaspersky's team has even reverse engineered and demonstrated the attack, using only a portable power drill and a $15 homemade gadget that injects malicious commands to trigger the machine's cash dispenser. And though they won't name the ATM manufacturer or the banks affected, they warn that thieves have already used the drill attack across Russia and Europe, and that the technique could still leave ATMs around the world vulnerable to having their cash safes disemboweled in a matter of minutes.

"We wanted to know: To what extent can you control the internals of the ATM with one drilled hole and one connected wire? It turns out we can do anything with it," says Kaspersky researcher Igor Soumenkov, who presented the research at the company's annual Kaspersky Analyst Summit. "The dispenser will obey and dispense money, and it can all be done with a very simple microcomputer."

Cory DoctorowHow optimistic disaster stories can save us from dystopia


I’ve got an editorial in this month’s Wired magazine about the relationship between the science fiction stories we read and our real-world responses to disasters: Disasters Don’t Have to End in Dystopias; it’s occasioned by the upcoming publication of my “optimistic disaster novel” Walkaway (pre-order signed copies: US/UK; read excerpts: Chapter 1, Chapter 2; US/Canada tour schedule).

The stories we tell ourselves about the way the people around us will behave in times of crisis determine how we behave when things go wrong. Decades of science fiction advancing the nonsensical proposition that our neighbors will come and eat us as soon as the lights go out have produced a widespread idea that disasters are when you should run away, far from the people who can help you (and who may need your help). With optimistic disaster stories, I’m hoping to help people understand why they should bug in, not bug out, and help get things fixed and the rubble cleared away. It’s the difference between “disaster” and “dystopia.”



Since Thomas More, utopian projects have focused on describing the perfect state and mapping the route to it. But that’s not an ideology, that’s a daydream. The most perfect society will exist in an imperfect universe, one where the second law of thermodynamics means that everything needs constant winding up and fixing and adjusting. Even if your utopia has tight-as-hell service routines, it’s at risk of being smashed by less-well-maintained hazards: passing asteroids, feckless neighboring states, mutating pathogens. If your utopia works well in theory but degenerates into an orgy of cannibalistic violence the first time the lights go out, it is not actually a utopia.

I took inspiration from some of science fiction’s most daring utopias. In Kim Stanley Robinson’s Pacific Edge—easily the most uplifting book in my collection—a seemingly petty squabble over zoning for an office park is a microcosm for all the challenges that go into creating and maintaining a peaceful, cooperative society. Ada Palmer’s 2016 fiction debut, Too Like the Lightning, is a utopia only a historian could have written: a multipolar, authoritarian society where the quality of life is assured by a mix of rigid social convention, high tech federalism, and something almost like feudalism.

The great problem in Walkaway (as in those novels) isn’t the exogenous shocks but rather humanity itself. It’s the challenge of getting walkaways—the 99 percent who’ve taken their leave of society and thrive by cleverly harvesting its exhaust stream—to help one another despite the prepper instincts that whisper, “The disaster will only spare so many of its victims, so you’d better save space on any handy lifeboats, just in case you get a chance to rescue one of your own.” That whispering voice is the background hum of a society where my gain is your loss and everything I have is something you don’t—a world where material abundance is perverted by ungainly and unstable wealth distribution, so everyone has to worry about coming up short.

Disasters Don’t Have to End in Dystopias [Cory Doctorow/Wired]

Pre-order Walkaway: US/UK

Read excerpts: Chapter 1, Chapter 2

US/Canada tour schedule

Worse Than FailureBy the Book

A long, long time ago, when C was all the rage and C++ was just coming into its own, many people running applications on Unix boxes used the X Window System, created at MIT, to build their GUI applications. This was the GUI equivalent of programming in assembly; it worked, but was cumbersome and hard to do. Shortly thereafter, the Xt Intrinsics library was created as a wrapper, which provided higher level entities that were easier to work with. Shortly after that, several higher level toolkits that were even easier to use were created. Among these was Motif, created by the Open Software Foundation (a consortium including DEC, HP, and others).

While these higher level libraries were easier to use than raw X-lib, they were not without their problems.

Sam was a senior developer at Military Widgets, Inc. His responsibilities included the usual architectural/development duties on his project. One day, Pat, Sam's boss, asked him to stay late. "Taylor has a bug that he just can't crack," Pat explained. "I want someone with a little more experience to give him a hand."

It seems that after making some changes to their Motif GUI, the application started throwing stack dumps on every transaction. As a result, every transaction was rolling back as failed. Taylor insisted that his code was correct and that it was not the cause of the problem. To this end, Pat asked Sam to stay late one afternoon and take a look at it.

After some company-sponsored pizza, Pat had Taylor hand Sam a stack trace. In the middle of it was a call to:


    XmTextSetString(theWidget,"String to display");

Sam laughed. "Taylor," he said, "this macro has a known memory leak." Since the macro was just a wrapper, Taylor should just replace it with the corresponding underlying call to do the actual work:


    XmTextReplace(theWidget, 
                  (XmTextPosition) 0,
                  XmTextGetLastPosition(theWidget),
                  "String to display");

This was long before Tim Berners-Lee created the web on top of the internet, so all they had to go on was a very thick set of printed manuals. Taylor pulled out the Motif manual and showed Sam the proper use of the XmTextSetString macro. "I followed the procedures laid out in the manual."

"Just because it's in the manual, doesn't mean it's *right*," Sam said. "The Motif code has a bug in it. That does happen, you know."

Taylor insisted the problem must be someplace else. "I followed the instructions in the manual!" Sam looked at Pat, raised an eyebrow and sighed.

Pat asked Taylor to try the change. Taylor refused, insisting that he did it right and that Sam didn't know what he was talking about.

Sam said he was going to his desk and would have it temporarily patched in 5 minutes.

He opened a file called last.h, undefined the XmTextSetString macro, and redefined it in terms of his fix. Then he kicked off a 3.5-hour full build, opened another window, and typed the command to run the server, but did not press ENTER.
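A minimal sketch of what that last.h hack might have looked like, assuming (as the story does) that XmTextSetString could be undefined and redefined as a preprocessor macro; the exact form here is conjecture:

    /* last.h -- temporary hot-fix, NOT to be checked in.
     * Reroute the leaky convenience macro to the underlying
     * primitive, which replaces the widget's entire contents
     * without leaking the old buffer. Note that 'widget' is
     * evaluated twice -- acceptable for a five-minute patch,
     * not for production code. */
    #ifdef XmTextSetString
    #undef XmTextSetString
    #endif

    #define XmTextSetString(widget, value)           \
        XmTextReplace((widget),                      \
                      (XmTextPosition) 0,            \
                      XmTextGetLastPosition(widget), \
                      (value))

Because the substitution happens in the preprocessor, every existing call site silently picks up the fix on the next full build, which is why Sam could patch the whole application without touching Taylor's code.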

Sam returned to the conference room, explained what he had done, and told Taylor to keep moving the mouse to keep Sam's session from closing. Then, in 3.5 hours, he could hit ENTER and test the result.

He told Pat that they might encounter other problems at other points in the application, but that this WAS the cause of this one, as they would see in several hours. He also pointed out that this was only a hack, that it should NOT be checked in, and that every actual use of the macro should be replaced with the correct code. Then he went home, relegating Taylor to several hours of jiggling a mouse, all the while mumbling that this wouldn't work because he had done it right.

When the build finished, Pat watched as Taylor launched the server, and brought the GUI to the last keystroke before the error had been occurring. To Taylor's surprise, the transaction went through without the exception. Pat then had him jam a whole year of transactions down that pipe via their batch mechanism. It took all night but it worked.

The next day, Pat had Taylor manually track down and replace every instance of that macro with the corrected code, because doing what the manual says without verifying that it's right is NOT doing it right.


Planet Linux AustraliaLev Lafayette: 'Advanced Computing': An International Journal of Plagiarism

Advanced Computing: An International Journal was a publication that I was considering writing for. However, it is almost certainly a predatory open-access journal, one that seeks a "publication charge" without performing even the minimal standards of editorial checking.

I can just about tolerate the fact that the most recent issue has numerous spelling and grammatical errors, as I believe English is not the authors' first language. The editors should have caught those, but we'll let that slide for a far greater crime - that of widespread plagiarism.

The fact that the editors clearly didn't even check for this is an inexcusable oversight.

I am publishing this correspondence with the editors in the hope that others will find it before submitting, or even considering submitting, to the journal in question. I also hope the editors take the opportunity to dramatically improve their editorial standards.


Planet Linux AustraliaMichael Still: Light to Light, Day Three

The third and final day of the Light to Light walk in Ben Boyd National Park. This was a shorter (8 km), easier walk. A nice way to finish the journey.



Interactive map for this route.


Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Planet Linux AustraliaMichael Still: Light to Light, Day Two

Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 km, on easier terrain than the first day, though probably a little less scenic too.



Interactive map for this route.


Tags for this post: events pictures 20170312 photo scouts bushwalk
Related posts: Light to Light, Day Three; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


,

Planet Linux AustraliaMichael Still: Light to Light, Day One

Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 km through lovely undulating terrain. This was the hardest day of the walk, but very rewarding, and I think we all had fun.



Interactive map for this route.


Tags for this post: events pictures 20170311 photo scouts bushwalk
Related posts: Light to Light, Day Three; Light to Light, Day Two; Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Potato Point


CryptogramEncryption Policy and Freedom of the Press

Interesting law journal article: "Encryption and the Press Clause," by D. Victoria Baranetsky.

Abstract: Almost twenty years ago, a hostile debate over whether government could regulate encryption -- later named the Crypto Wars -- seized the country. At the center of this debate stirred one simple question: is encryption protected speech? This issue touched all branches of government, percolating from Congress to the President and eventually to the federal courts. In a waterfall of cases, several United States Courts of Appeals appeared to reach a consensus that encryption was protected speech under the First Amendment, and with that the Crypto Wars appeared to be over, until now.

Nearly twenty years later, the Crypto Wars have returned. Following recent mass shootings, law enforcement has once again questioned the legal protection for encryption and tried to implement "backdoor" techniques to access messages sent over encrypted channels. In one such case, Apple v. FBI, the agency tried to compel Apple to grant access to the iPhone of a San Bernardino shooter. The case was never decided, but the legal arguments briefed before the court were essentially the same as they were two decades prior. Apple and the amici supporting the company argued that encryption was protected speech.

While these arguments remain convincing, circumstances have changed in ways that should be reflected in the legal doctrines that lawyers use. Unlike twenty years ago, today surveillance is ubiquitous, and the need for encryption is no longer felt by a select few. Encryption has become necessary for even the most basic exchange of information, given that most Americans share "nearly every aspect of their lives -- from the mundane to the intimate" over the Internet, as stated in a recent Supreme Court opinion.

Given these developments, lawyers might consider a new justification under the Press Clause. In addition to the many doctrinal concerns that exist with protection under the Speech Clause, the Press Clause is normatively and descriptively more accurate at protecting encryption as a tool for secure communication without fear of government surveillance. This Article outlines that framework by examining the historical and theoretical transformation of the Press Clause since its inception.

Krebs on SecurityDual-Use Software Criminal Case Not So Novel

“He built a piece of software. That tool was pirated and abused by hackers. Now the feds want him to pay for the computer crooks’ crimes.”

The above snippet is the subhead of a story published last month by The Daily Beast titled, “FBI Arrests Hacker Who Hacked No One.” The subject of that piece — a 26-year-old American named Taylor Huddleston — faces felony hacking charges connected to two computer programs he authored and sold: an anti-piracy product called Net Seal, and a Remote Administration Tool (RAT) called NanoCore that he says was a benign program designed to help users remotely administer their computers.

Photo illustration by Lyne Lucien/The Daily Beast

The author of the Daily Beast story, former black hat hacker and Wired.com editor Kevin Poulsen, argues that Huddleston’s case raises a novel question: When is a programmer criminally responsible for the actions of his users?

“Some experts say [the case] could have far reaching implications for developers, particularly those working on new technologies that criminals might adopt in unforeseeable ways,” Poulsen wrote.

But a closer look at the government’s side of the story — as well as public postings left behind by the accused and his alleged accomplices — paints a more complex and nuanced picture that suggests this may not be the case to raise that specific legal question in any meaningful way.

Mark Rumold, senior staff attorney at the Electronic Frontier Foundation (EFF), said cases like these are not so cut-and-dry because they hinge on intent, and determining who knew what and when.

“I don’t read the government’s complaint as making the case that selling some type of RAT is illegal, and if that were the case I think we would be very interested in this,” Rumold said. “Whether or not [the government’s] claims are valid is going to be extraordinarily fact-specific, but unfortunately there is not a precise set of facts that would push this case from being about the valid reselling of a tool that no one questions can be done legally to crossing that threshold of engaging in a criminal conspiracy.”

Citing group chat logs and other evidence that hasn’t yet been made public, U.S. prosecutors say Huddleston intended NanoCore to function more like a Remote Access Trojan used to remotely control compromised PCs, and they’ve indicted Huddleston on criminal charges of conspiracy as well as aiding and abetting computer intrusions.

Poulsen depicts Huddleston as an ambitious — if extremely naive — programmer struggling to make an honest living selling what is essentially a dual-use software product. Using the nickname “Aeonhack,” Huddleston marketed his NanoCore RAT on Hackforums[dot]net, an English-language hacking forum that is overrun with young, impressionable but otherwise low-skilled hackers who are constantly looking for point-and-click tools and services that can help them demonstrate their supposed hacking prowess.

Yet we’re told that Huddleston was positively shocked to discover that many buyers on the forum were using his tools in a less-than-legal manner, and that in response he chastised and even penalized customers who did so. By way of example, Poulsen writes that Huddleston routinely used his Net Seal program to revoke the software licenses for customers who boasted online about using his NanoCore RAT illegally.

We later learn that — despite Net Seal’s copy protection abilities — denizens of Hackforums were able to pirate copies of NanoCore and spread it far and wide in malware and phishing campaigns. Eventually, Huddleston said he grew weary of all the drama and sold both programs to another Hackforums member, using the $60,000 or so in proceeds to move out of the rusty trailer he and his girlfriend shared and buy a house in a low-income corner of Hot Springs, Arkansas.

From the story:

“Now even Huddleston’s modest home is in jeopardy,” Poulsen writes. “As part of their case, prosecutors are seeking forfeiture of any property derived from the proceeds of NanoCore, as well as from Huddleston’s anti-piracy system, which is also featured in the indictment. ‘Net Seal licensing software is licensing software for cybercriminals,’ the indictment declares.

“For this surprising charge—remember, Huddleston used the licenses to fight crooks and pirates—the government leans on the conviction of a Virginia college student named Zachary Shames, who pleaded guilty in January to selling hackers a keystroke logging program called Limitless. Unlike Huddleston, Shames embraced malicious use of his code. And he used Net Seal to protect and distribute it.

“Huddleston admits an acquaintanceship with Shames, who was known on HackForums as ‘Mephobia,’ but bristles at the accusation that Net Seal was built for crime. ‘Net Seal is literally the exact opposite of aiding and abetting’ criminals, he says. ‘It logs their IP addresses, it blocks their access to the software, it stops them from sharing it with other cyber criminals. I mean, every aspect of it fundamentally prevents cybercrime. For them to say that [crime] is its intention is just ridiculous.’”

Poulsen does note that Shames pleaded guilty in January to selling his Limitless keystroke logging program, which relied on Huddleston’s Net Seal program for distribution and copy protection.

Otherwise, The Daily Beast story seems to breeze over the relationship between Huddleston and Shames as almost incidental. But according to the government it is at the crux of the case, and a review of the indictment against Huddleston suggests the two men’s fortunes were intimately intertwined.

From the government’s indictment:

“During the course of the conspiracy, Huddleston received over 25,000 payments via PayPal from Net Seal customers. As part of the conspiracy, Huddleston provided Shames with access to his Net Seal licensing software in order to assist Shames in the distribution of his Limitless keylogger. In exchange, Shames made at least one thousand payments via PayPal to Huddleston.”

“As part of the conspiracy, Huddleston and Shames distributed the Limitless keylogger to over 3,000 people who used it to access over 16,000 computers without authorization with the goal and frequently with the result of stealing sensitive information from those computers. As part of the conspiracy, Huddleston provided Net Seal to several other co-conspirators to assist in the profitable distribution of the malicious software they developed, including prolific malware that has repeatedly been used to conduct unlawful and unauthorized computer intrusions.”

A screen shot of Zach “Mephobia” Shames on Hackforums discussing the relationship between his Limitless keylogger and Huddleston’s (Aeonhack) Net Seal anti-piracy and payment platform.

Allison Nixon, director of security research for New York City-based security firm Flashpoint, observed that in the context of Hackforums, payment processing through Paypal is a significant problem for forum members trying to sell dual-use software and services on the forum.

“Most of their potential customer base uses PayPal, but their vendor accounts keep getting suspended for being associated with crime, so people who can successfully get payments through are prized,” Nixon said. “Net Seal can revoke access to a program that uses it, but it is a payment processing and digital rights management (DRM) system. Huddleston can claim the DRM is to prevent cybercrime, but realistically speaking the DRM is part of the payment system — to prevent people from pirating the software or initiating a Paypal chargeback. Just because he says that he blocked someone’s license due to an admission of crime does not mean that was the original purpose of the software.”

Nixon, a researcher who has spent countless hours profiling hackers and activities on Hackforums, said selling the NanoCore RAT on Hackforums and simultaneously scolding people for using it to illegally spy on people “could at best be seen as the actions of the most naive software developer on the Earth.”

“In the greater context of his role as the money man for Limitless Keylogger, it does raise questions about how sincere his anti-cybercrime stance really is,” Nixon said. “Considering that he bought a house from this, he has a significant financial incentive to play ignorant while simultaneously operating a business that can’t make nearly as much money if it was operated on a forum that wasn’t infested with criminals.”

Huddleston makes the case in Poulsen’s story that there’s a corporate-friendly double standard at work in the government’s charges, noting that malicious hackers have used commercial remote administration tools like TeamViewer and VNC for years, but the FBI doesn’t show up at their corporate headquarters with guns drawn.

But Nixon notes that RATs sold on Hackforums are extremely dangerous for the average person to run on a personal computer, because there have been past cases in which RAT authors diverted infected machines to their own botnets.

Case in point: The author of the Blackshades Trojan — once a wildly popular RAT sold principally on Hackforums before its author and hundreds of its paying customers were arrested in a global law enforcement sweep — wasn’t content to simply rake in money from the sale of each Blackshades license: He also included a backdoor that let him secretly commandeer machines running the software.

A Hackforums user details how the Blackshades RAT included a backdoor that let the RAT’s original author secretly access systems infected with the RAT.

“If a person is using RAT software on their personal machine that they purchased from Hackforums, they are taking this risk,” Nixon said. “Programs like VNC and Teamviewer are much safer for legitimate use, because they are actual companies, not programs produced by teenagers in a criminogenic environment.”

All of this may be moot if the government can’t win its case against Huddleston. The EFF’s Rumold said while prosecutors may have leverage in Shames’s conviction, the government probably doesn’t want to take the case to trial.

“My guess is if they want a conviction, they’re going to have to go to trial or offer him some type of very favorable plea,” Rumold said. “Just the fact that Huddleston was able to tell his story in a way that makes him come off as a very sympathetic character sounds like the government may have a difficult time prosecuting him.”

A copy of the indictment against Huddleston is available here (PDF).

If you enjoyed this story, take a look at a related piece published here last year about a different RAT proprietor selling his product on Hackforums who similarly claimed the software was just a security tool designed for system administrators, despite features of the program and related services that strongly suggested otherwise.

TEDA night to talk about design


Designers solve problems and bring beauty to the world. At TEDNYC Design Lab, a night of talks at TED HQ in New York City hosted by design curator Chee Pearlman with content producer Cloe Shasha, six speakers pulled back the curtain to reveal the hard work and creative process behind great design. Speakers covered a range of topics, including the numbing monotony of modern cities (and how to break it), the power of a single image to tell a story and the challenge of building a sacred space in a secular age.

First up was Pulitzer-winning music and architecture critic Justin Davidson.

The touchable city. Shiny buildings are an invasive species, says Pulitzer-winning architecture critic Justin Davidson. In recent years, cities have become smooth, bright and reflective, as new downtowns sprout clusters of tall buildings that are almost always made of steel and glass. While glass can be beautiful (and easily transported, installed and replaced), the rejection of wood, sandstone, terra cotta, copper and marble as building materials has led to the simplification and impoverishment of the architecture in cities — as if we wanted to reduce all of the world’s cuisines to the blandness of airline food. “The need for shelter is bound up with the human desire for beauty,” Davidson says. “A city’s surfaces affect the way we live in it.” Buildings create the spaces around them; ravishing public places such as the Plaza Mayor in Salamanca, Spain, and the 17th-century Place des Vosges in Paris draw people in and make life look like an opera set, while glass towers push people away. Davidson warns of the dangers of this global trend: “When a city defaults to glass as it grows, it becomes a hall of mirrors: uneasy, disquieting and cold.” By offering a series of contemporary examples, Davidson calls for “an urban architecture that honors the full range of urban experience.”

“The main thing we need right now is a good cartoon,” says Françoise Mouly. (Photo: Ryan Lash / TED)

The power of an image to capture a moment. The first cover of The New Yorker depicted a dandy looking at a butterfly through a monocle. Now referred to as “Eustace Tilley,” this iconic image was a tongue-in-cheek response to the stuffy aristocrats of the Jazz Age. When Françoise Mouly joined the magazine as art editor in 1993, she sought to restore that same spirit of humor to a magazine that had grown staid. In doing so, Mouly looked back at how The New Yorker’s covers had reflected moments in history, finding that covers from the Great Depression revealed what made people laugh in times of hardship. For every anniversary edition of The New Yorker, a new version of Eustace Tilley appears on the cover. This year, we see Vladimir Putin as the monocled Eustace Tilley, peering at his butterfly, Donald Trump. For Mouly, “Free press is essential to our democracy. Artists can capture what is going on — with just ink and watercolor, they can capture and enter into a cultural dialogue, putting artists at the center of culture.”


Sinéad Burke shared insights into a world that many designers don’t see, challenging the idea that design is only a tool to create function and beauty. “Design can inflict vulnerability on a group whose needs aren’t considered,” she says. (Photo: Ryan Lash / TED)

What is accessible design? “Design inhibits my independence and autonomy,” says educator and fashion blogger Sinéad Burke, who was born with achondroplasia (which translates as “without cartilage formation”), the most common form of dwarfism. At 105 centimeters (or 3 feet 5 inches) tall, Burke is acutely aware of details that are practically invisible to the rest of us — like the height of the lock in a bathroom stall or the range of available shoe sizes. So-called “accessible spaces” like bathrooms for people in wheelchairs are barely any better. In a stellar talk, Burke offers us a new perspective on the physical world we live in and asks us to consider the limits and biases of accessible design.

The beat of the Book Tree. Sofi Tukker brought the audience to their feet with the hits “Hey Lion” and “Awoo,” featuring Betta Lemme. For the New York City–based duo, physical performance is a crucial element of their onstage presence, demonstrated through a unique standing instrument of their own design called the “Book Tree”: actual books attached to a sampler, so that each strike of a book triggers a beat. Their debut album, Soft Animals, was released in July 2016, and their single “Drinkee” was nominated for Best Dance Recording at the 2017 Grammys.

Finding ourselves in data. Giorgia Lupi was 13 when Silvio Berlusconi shocked many in Italy by becoming prime minister in 1994. Why, she wondered, was that election result so surprising? This was the first time that the “data” she had gave her a distorted image of reality: the information available to her was simply too limited and imprecise, too skewed, to give any real picture of what was going on. In America’s 2016 election, when we had many more data points and many felt the sample was representative enough, most data analysts still predicted the wrong outcome. Lupi, the co-founder of data firm Accurat, suggests that such events highlight larger problems with how data is represented. When we focus on creating powerful headlines and simple messages, we often lose the point completely, forgetting that data alone cannot represent reality; beneath the numbers are human stories that transform the abstract and the uncountable into something that can be seen, felt and directly reconnected to our lives and our behaviors. What we need, she says, is data humanism. “To make data [sets] faithfully representative of our human nature, and to make sure they won’t mislead us anymore, we need to start designing new ways to include empathy, imperfection and human qualities in how we collect, process, analyze and display them.”


Siamak Hariri describes his project, the Bahá’í Temple of South America in Santiago: “A prayer answered, open in all directions, capturing the blue light of dawn, the tent-like white light of day, the gold light of the afternoon, and at night, the reversal … catching the light in all kinds of mysterious ways.” (Photo: Ryan Lash / TED)

Can you design a sacred experience? Starting in 2006, architect Siamak Hariri attempted to do just that when he began his work on the Bahá’í Temple of South America in Santiago, Chile. He describes how he designed for a feeling that is at once essential and ineffable by focusing on illumination and creating a structure that captures the movement of light across the day. Hariri journeys from the quarries of Portugal, where his team found the precious stone to line the inside of the building like the silk of a jacket, to the temple’s splendid opening ceremony for an architectural experience unlike any other.

In the final talk of the night, Michael Bierut told a story of consequences, both intended and unintended. (Photo: Ryan Lash / TED)

Unintended consequences are often the best consequences. A few years ago, designer Michael Bierut was tapped by the Robin Hood Foundation to design a logo for a project to improve libraries in New York City public schools. Bierut is a legendary designer and critic — recent projects include rebranding the MIT Media Lab, reimagining the Saks Fifth Avenue logo and creating the logo for Hillary Clinton’s presidential campaign. After some iterating, he came upon a simple idea: replacing the “i” in “library” with an exclamation point: L!BRARY, or The L!BRARY Initiative. But his work on the project wasn’t over. One of the architects working on the libraries came to Bierut with a problem: the space between the library shelves, which had to be low to be accessible for kids, and the ceilings, which are often very high in older school buildings, was calling out for design attention. Bierut tapped his wife, a photographer, to fill that space with a mural of beautiful portraits of schoolchildren; other schools took notice and wanted art of their own, so Bierut brought in other illustrators, painters and artists to fill the spaces with one-of-a-kind murals and art installations. As the new libraries opened, Bierut had a chance to visit them and the librarians who work there, and he discovered the unintended consequences of his work. Far from designing only a logo, Bierut’s involvement snowballed into a quest to bring energy, learning, art and graphics into these school libraries, where librarians dedicate themselves to exciting new generations of readers and thinkers.


CryptogramAcoustic Attack Against Accelerometers

Interesting acoustic attack against the MEMS accelerometers in devices like Fitbits.

Millions of accelerometers reside inside smartphones, automobiles, medical devices, anti-theft devices, drones, IoT devices, and many other industrial and consumer applications. Our work investigates how analog acoustic injection attacks can damage the digital integrity of the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Our contributions include (1) modeling the physics of malicious acoustic interference on MEMS accelerometers, (2) discovering the circuit-level security flaws that cause the vulnerabilities by measuring acoustic injection attacks on MEMS accelerometers as well as systems that employ these sensors, and (3) two software-only defenses that mitigate many of the risks to the integrity of MEMS accelerometer outputs.

This is not that big a deal with things like Fitbits, but as IoT devices get more autonomous -- and start making decisions and then putting them into effect automatically -- these vulnerabilities will become critical.
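That phrase "blindly trust the unvalidated integrity of sensor outputs" is the heart of the problem: most firmware performs no sanity checking on sensor readings at all. As a purely illustrative sketch -- not one of the paper's two defenses -- even a crude plausibility filter in the sampling path shows what validation might look like; the type names and thresholds below are hypothetical:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical raw accelerometer sample, in milli-g. */
    typedef struct { int x, y, z; } accel_sample_t;

    /* Illustrative limits for a wrist-worn device sampling at 100 Hz:
     * readings beyond ~16 g, or jumps of more than ~8 g between
     * consecutive samples, are more likely interference than motion. */
    #define MAX_MAGNITUDE_MG 16000
    #define MAX_STEP_MG       8000

    static bool plausible(const accel_sample_t *prev, const accel_sample_t *cur)
    {
        if (abs(cur->x) > MAX_MAGNITUDE_MG ||
            abs(cur->y) > MAX_MAGNITUDE_MG ||
            abs(cur->z) > MAX_MAGNITUDE_MG)
            return false;                /* physically implausible reading */

        if (abs(cur->x - prev->x) > MAX_STEP_MG ||
            abs(cur->y - prev->y) > MAX_STEP_MG ||
            abs(cur->z - prev->z) > MAX_STEP_MG)
            return false;                /* implausible jump between samples */

        return true;
    }

A filter like this cannot stop an attacker who injects values within plausible ranges -- the paper's point is precisely that the analog layer can be manipulated -- but it illustrates the difference between trusting and validating what a sensor reports.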

Academic paper.

Worse Than FailureCoded Smorgasbord: Cerebral Flatulence

There’s plenty of bad code that makes you ask, “what were they thinking?” There’s a whole bunch of code we get, however, that doesn’t even raise that question; the programmer responsible simply wasn’t thinking. Today, let’s examine a few “programmer brain-farts”. We turn our attention first to Jim C.

While reviewing some unit tests, he found this line:

if (!succeeded) {
    // Asserts that succeeded is true -- inside a branch that only
    // runs when succeeded is false, so it can only ever fail.
    SmartAssert::IsTrue(succeeded, messageStr);
}

This, obviously, first confirms that succeeded is false, and then asserts that it should be true. Perhaps not so smart an assert. Jim ran blame to find the party responsible, but as it turned out, he had written this code himself six months earlier.

Oh, to live in a world where we are the only source of our own pain. Jason is not so lucky. It was the eve of a major release, and there was a bug. Certain date values were being miscalculated. What could possibly be the cause?

Well, after a frantic investigation, he found a fellow developer had done this:

// BUG: subtractSeconds() expects seconds, but TimeUnit.HOURS.toMillis(1)
// returns 3,600,000 milliseconds -- so this subtracts 1,000 hours, not one.
SystemTime startTime = getCurrentSystemTime().subtractSeconds(TimeUnit.HOURS.toMillis(1));

What was supposed to calculate a time one hour prior was actually reaching back 1,000 times that far: toMillis returns milliseconds, but subtractSeconds expects seconds. And while they discovered this bug right before a release, the code had been in production for months.

Meanwhile, Dave was hunting through some code, trying to understand some of the output. Eventually, he traced the problem to a function called generateGUID.

public static String generateGUID(AuditLog auditLog) {
  // Not a GUID at all: a random UUID with the audit log's context name
  // and request-start timestamp concatenated onto the end.
  String guid = UUID.randomUUID().toString();
  return (guid + auditLog.getContextName() + auditLog.getRequestStart().getTime());
}

This was simply a case of a very poorly named function. At no point in its history had it ever actually returned a GUID; it had always returned some variation on this concatenated string.
