Planet Russell


Planet Debian: Ben Hutchings: Debian LTS work, August 2016

I was assigned 14.75 hours of work by Freexian's Debian LTS initiative and carried over 0.7 from last month. I worked a total of 14 hours, carrying over 1.45 hours.

I finished preparing and finally uploaded an update for linux (3.2.81-2). This took longer than expected due to the difficulty of reproducing CVE-2016-5696 and verifying the backported fix. I also released an upstream stable update (3.2.82) which will go into the next update in wheezy LTS.

I discussed a few other security updates and issues on the debian-lts mailing list.

Worse Than Failure: Coded Smorgasbord: Properly Handled PHP

It’s tempting to pick on PHP, because PHP is a terribly designed language. At the same time, there’s an air of snobbery and elitism in our mockery of PHP. I mean, half the Web runs on PHP, so how bad can it be? These examples could easily have been written in nearly any other language, and they’d be just as bad in those languages too. Is it fair to single out PHP? Perhaps not, but each of these examples does nothing, or nearly nothing, which may very well be PHP’s greatest asset.

As a case in point, Ethan inherited some code. It needs to count how many sub-directories there are in a certain directory.

    $dir = opendir("/var/www/media/docs/");
    $nCount = 0;
    while($imagename = readdir($dir)) {
        if (strlen($imagename)>2 && is_dir("/var/www/media/docs/" . $imagename)) {
            $nCount++;
            $foldernames[] = $imagename;
        }
    }
    closedir($dir);

We could pick on the hard-coded paths and the misleading variable name ($imagename for directory entries), but for the most part, this code doesn’t seem too unreasonable. That is, until you look at the expression strlen($imagename)>2 in that if condition. What on Earth is going on there? Well, Ethan wondered the same thing. After some research, he managed to figure it out: there was a folder called “HR” that the original developer didn’t want indexed by this code. If you don’t think about it, this is much more efficient than an equality test, because this only has to look at every character in one string, not two.

Well, that code definitely does something. Is there some code that doesn’t do anything? Well, what about some code that doesn’t do anything useful? Betty learned about this error handling convention from a very expensive contractor.

    try {
        // code
    } catch (Exception $e) {
        throw new Exception($e->getMessage());
    }

At least we know every exception is being handled… except the last one.

That still does too much. Is there any PHP code that’s even safer, and by safer, I mean “more useless”? Well, what about this shopping cart system David C found?

    $pids = array();
    foreach ($scratch_products as $product_data) {
        $pids[] = $product_data['productid'];
    }
    unset($pids);

Now there’s the stuff. Create the array, populate it, and destroy it all in three lines. It would have worked great, too, if not for the fact that some code elsewhere in the program expected $pids to exist.



Planet Linux Australia: Stewart Smith: MySQL removes the FRM (7 years after Drizzle did)

The new MySQL 8.0.0 milestone release that was recently announced brings something that has been a looooong time coming: the removal of the FRM file. I was the one who implemented this in Drizzle way back in 2009 (July 28th 2009, according to Brian), and I may have had a flashback to removing the tentacles of the FRM when reading the MySQL 8.0.0 announcement.

As an indication of how long this has been on the cards, I’ll quote Brian from when we removed it in Drizzle:

We have been talking about getting rid of FRM since around 2003. I remember a drive up to northern Finland with Kaj Arnö, where we spent an hour talking about this. I, David, and MontyW have talked about this for years.

http://krow.livejournal.com/642329.html

Soo… it was a known problem for at least thirteen years. One of the issues removing it was how pervasive all of the FRM related things were. I shudder at the mention of “pack_flag” and Jay Pipes probably does too.

At the time, we tried a couple of approaches as to how things should look. Our philosophy with Drizzle was that it should get out of the way and let the storage engines be the storage engines, not try to second-guess them or keep track of things behind their backs. I still think that was the correct architectural approach: the role of Drizzle was to put SQL on top of a storage engine, not to also be one itself.

Looking at the MySQL code, there’s one giant commit 31350e8ab15179acab5197fa29d12686b1efd6ef. I do mean giant too, the diffstat is amazing:

 786 files changed, 58471 insertions(+), 25586 deletions(-)

How anyone even remotely did code review on that I have absolutely no idea. I know the only way I could get it to work in Drizzle was to do it incrementally, a series of patches that gradually chiseled out what needed to be taken out so I could put in an API and the protobuf code.

Oh, and in case you’re wondering:

- uint offset,pack_flag;
+ uint offset;

Thank goodness. Now, you may not appreciate that as much as I might, but pack_flag was not the height of design; it was… pretty much a catch-all for any data about a field that didn’t already have its own field in the FRM. So it may include information on whether the field could be null or not, whether it’s decimal, how many bytes an integer takes, that it’s a number and how many… oh, just don’t ask.

Also gone is the weird interval_id and a whole bunch of limitations because of the FRM format, including one that I either just discovered or didn’t remember: if you used all 256 characters in an enum, you couldn’t create the table as MySQL would pick either a comma or an unused character to be the separator in the FRM!?!

Also changed is how the MySQL server handles default values. For those not aware, the FRM file contains a static copy of the row containing default values. This means the default values are computed once on table creation and never again (there’s a bunch of workarounds for things like AUTO_INCREMENT and DEFAULT NOW()). The new sql/default_values.cc is where this is done now.

For now at least, table metadata is also written to a file that appears to be JSON format. It’s interesting that a SQL database server is using a schemaless file format to describe schema. It appears that these files exist only for disaster recovery or perhaps portable tablespaces. As such, I’m not entirely convinced they’re needed… it’s just a thing to get out of sync with what the storage engine thinks, and it causes extra IO on DDL (as well as forcing the issue that you can’t have MVCC in the data dictionary itself).

What will be interesting is to see the lifting of these various limitations and how MariaDB will cope with that. Basically, unless they switch, we’re going to see some interesting divergence in what you can do in either database.

There are certainly differences between how MySQL removed the FRM file and the way we did it in Drizzle. Hopefully some of the ideas we had were helpful in coming up with this different approach, as well as an extra seven years of in-production use.

At some point I’ll write something up as to the fate of Drizzle and a bit of a post-mortem; I think I may have finally worked out what I want to say… but that is a post for another day.

Planet Debian: Kees Cook: security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.

For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Planet Debian: Reproducible builds folks: Reproducible Builds: week 74 in Stretch cycle

Here is what happened in the Reproducible Builds effort between Sunday September 18 and Saturday September 24 2016:

Outreachy

We intend to participate in Outreachy Round 13 and look forward to enthusiastic new applicants who want to contribute to reproducible builds. We're offering four different areas to work on:

  • Improving test and debugging tools.
  • Improving reproducibility of Debian packages.
  • Improving Debian infrastructure.
  • Helping collaboration across distributions.

Reproducible Builds World summit #2

We are planning a similar event to our Athens 2015 summit and expect to reveal more information soon. If you haven't been contacted yet but would like to attend, please contact holger.

Toolchain development and fixes

Mattia uploaded dpkg/1.18.10.0~reproducible1 to our experimental repository, and covered the details of the upload in a mailing list post.

The most important change is the incorporation of improvements made by Guillem Jover (the dpkg maintainer) to the .buildinfo generator. This is also in the hope that it will speed up the merge upstream.

One of the other relevant changes from before is that .buildinfo files generated from binary-only builds will no longer include the hash of the .dsc file in Checksums-Sha256 as documented in the specification.

Even if it was considered important to include a checksum of the source package in .buildinfo, storing it that way breaks other assumptions (e.g. that Checksums-Sha256 contains only files that are part of a single upload, whereas the .dsc might not be part of that upload), so we look forward to another solution for storing the source checksum in .buildinfo.

Bugs filed

Reviews of unreproducible packages

250 package reviews have been added, 4 have been updated and 4 have been removed in this week, adding to our knowledge about identified issues.

4 issue types have been added:

3 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (11)
  • Santiago Vila (2)

Documentation updates

h01ger created a new Jenkins job so that every commit pushed to the master branch for the website will update reproducible-builds.org.

diffoscope development

strip-nondeterminism development

reprotest development

tests.reproducible-builds.org

  • The full rebuild of all packages in unstable (for all tested archs) with the new build path variation has been completed. This has had the result that we are down to ~75% reproducible packages in unstable now. In comparison, for testing (where we don't vary the build path) we are still at ~90%. IRC notifications for unstable have been enabled again. (Holger)
  • Make the notes job robust about bad data (see #833695 and #833738). (Holger)
  • Set up profitbricks-build7 running stretch, as testing reproducible builds of F-Droid needs a newer version of vagrant in order to support running vagrant VMs with kvm on kvm. (Holger)
  • The misbehaving 'opi2a' armhf node has been replaced with a Jetson-TK1 board kindly donated by NVidia. This machine is using an NVIDIA tegra-k1 (cortex-a15) quad-core board. (vagrant and Holger)

Misc.

This week's edition was written by Chris Lamb, Holger Levsen and Mattia Rizzolo and reviewed by a bunch of Reproducible Builds folks on IRC.

Cryptogram: Brian Krebs DDoS

Brian Krebs writes about the massive DDoS attack against his site. In fact, the site is down as I post this.

Planet Linux Australia: Colin Charles: Percona Live Europe Amsterdam PostgreSQL Day

This is my very first post on Planet PostgreSQL, so thank you for having me here! I’m not sure if you’re aware, but the PostgreSQL Events page lists the conference as something that should be of interest to PostgreSQL users and developers.

There is a PostgreSQL Day on October 4 2016 in Amsterdam, and if you’re planning on just attending a single day, use code PostgreSQLRocks and it will only cost €200+VAT.

I for one am excited to see Patroni: PostgreSQL High Availability made easy, Relational Databases at Uber: MySQL & Postgres, and Linux tuning to improve PostgreSQL performance: from hardware to postgresql.conf.

I’ll write notes here; if time permits we’ll do a database hackers lunch gathering (it’s good to mingle with everyone), and I reckon if you’re coming for PostgreSQL day, don’t forget to also sign up for the Community Dinner at Booking.com.

Cory Doctorow: Come see me in Portland, Riverside, LA, and San Francisco

I’ve got a busy couple of weeks coming up! I’m speaking tomorrow at Powell’s in Portland, OR for Banned Books Week; on Wednesday, I’m at UC Riverside speaking to a Philosophy and Science Fiction class; on Friday I’ll be at the University of Southern California in Los Angeles, speaking on Canada’s dark decade of policy denial from climate science to digital locks; and then on Oct 6, I’m coming to SFMOMA to talk about museums, technology, and free culture. I hope to see you soon!

(Image: Alex Schoenfeldt Photography, www.schoenfeldt.com, CC-BY)

Cory Doctorow: How free software stayed free

I did an interview with the Changelog podcast (MP3) about my upcoming talk at the O’Reilly Open Source conference in London, explaining how it is that the free and open web became so closed and unfree, but free and open software stayed so very free, and came to dominate the software landscape.

“Desperate” is often the opposite of “open”: it’s when we’re in trouble that we’re most likely to compromise on our principles. How, then, did open become the default for so many tools and applications? Because when you use irrevocable open/free licenses, you lock your code open, defending it from anyone who would lock it up again—including a future version of you, in a moment of weakness.

Open licenses have served us well for more than two decades, but they need help if we’re going to survive the era in which computers invade our bodies and the structures we keep those bodies in. Cory Doctorow explains that we can lock the whole future Web open, if we do it right.

#221: How We Got Here with Cory Doctorow
[The Changelog]

(Image: Tux Droid, Sunny Ripert, CC-BY-SA)

Sociological Images: Media Coverage of Domestic Violence More Likely to Excuse White vs. Black Perpetrators

Controversy erupted in 2014 when video emerged of National Football League (NFL) player Ray Rice violently punching his fiancée (now wife) and dragging her unconscious body from an elevator. Most recently, Deadspin released graphic images of the injuries NFL player Greg Hardy inflicted on his ex-girlfriend. In both instances, NFL officials insisted that if they had seen the visual evidence of the crime, they would have implemented harsher consequences from the onset.

Why are violent images so much more compelling than other evidence of men’s violence against women? A partial answer is found by looking at whose story is privileged and whose is discounted. The power of celebrity and masculinity reinforces a collective desire to disbelieve the very real violence women experience at the hands of men. Thirteen Black women collectively shared their story of being raped and sexually assaulted by a White police officer, Daniel Holtzclaw, in Oklahoma. Without the combined bravery of the victims, it is unlikely any one woman would have been able to get justice. A similar pattern unfolded with Bill Cosby. The first victims to speak out against Cosby were dismissed and treated with suspicion. The same biases that interfere with effectively responding to rape and sexual assault hold true for domestic violence interventions.

Another part of the puzzle is language. Anti-sexist male activist Jackson Katz points out that labeling alleged victims “accusers” shifts public support to alleged perpetrators. The media’s common use of a passive voice when reporting on domestic violence (e.g., “violence against women”) inaccurately emphasizes a shared responsibility of the perpetrator and victim for the abuser’s violence and generally leaves readers with an inaccurate perception that domestic violence isn’t a gendered social problem. Visual evidence of women’s injuries at the hands of men is a powerful antidote to this misrepresentation.

In my own research, published in Sociological Spectrum, I found that the race of perpetrators also matters to who is seen as accountable for their violence. I analyzed 330 news articles about 66 male celebrities in the headlines for committing domestic violence. Articles about Black celebrities included criminal imagery – mentioning the perpetrator was arrested, listing the charges, citing law enforcement and so on – 3 times more often than articles about White celebrities. White celebrities’ violence was excused and justified 2½ times more often than Black celebrities’, and more often described as mutual escalation or explained away due to mitigating circumstances, such as inebriation.

Caption: Data from an analysis of 330 articles about 66 Black and White celebrities who made headlines for perpetrating domestic violence (2009–2012).

Accordingly, visual imagery of Rice and Hardy’s violence upholds common stereotypes of Black men as violent criminals. Similarly, White celebrity abusers, such as Charlie Sheen, remain unmarked as a source of a social problem. It’s telling that the public outcry to take domestic violence seriously has been centered around the NFL, a sport in which two-thirds of the players are African American. The spotlight on Black male professional athletes’ violence against women draws on racist imagery of Black men as criminals. Notably, although domestic violence arrests account for nearly half of NFL players’ arrests for violent crimes, players have lower arrest rates for domestic violence compared to national averages for men in a similar age range.

If the NFL is going to take meaningful action toward reducing men’s violence against women, not just protect its own image, the league will have to do more than take action only in instances in which visual evidence of a crime is available. Moreover, race can’t be separated from gender in their efforts.

Joanna R. Pepin is a PhD candidate in the Department of Sociology at the University of Maryland. Her work explores the relationship between historical change in families and the gender revolution.

(View original at https://thesocietypages.org/socimages)

Worse Than Failure: Logjam

Steven worked for Integrated Machinations, a company that made huge machines to sell to manufacturers so they could actually manufacture stuff. He didn't build the machines, that would require hard physical labor. Instead, he wrote computer programs that interfaced with the machines from the comfort of the air-conditioned office. One such program was a diagnostic app used to log the performance of Integrated Machinations products. The machines didn't break down often, but when they did, logging was very important. Customers wouldn't be in a mood to hear that IM didn't know why the equipment they dropped fat stacks of cash on failed.

Steven also had a subordinate named Thomas, who was foisted upon him in an effort to expand the small development team. Steven could have easily handled everything himself, but Thomas needed something to do, so he was given the simplest part of the diagnostic app: the downloader. Steven's code handled the statistical compiling, number-crunching, and fancy chart-making aspects of the application. All Thomas had to do was make the piece that downloaded the raw files from the machines to pass back.

Thomas spent two months on something that would have taken Steven a week, tops. It worked in their test environment, but Steven wanted to code review it before it went to production. Before he could, the higher-ups informed him there was no time. The logging and downloading system was installed and began to do its thing.

Much to Steven's pleasant surprise, the downloader piece worked in the real world. Thomas had it set up to run every minute from Crontab on every machine their pilot client had. It passed back what the compiler needed in XML format and they had neatly-displayed diagnostic stats to show. This went on for a week, until it didn't.

Steven came in that Monday to find that nothing had been downloaded over the weekend. As soon as Thomas meandered in, unshaven and bleary-eyed, Steven instructed him to check on the downloader. "Sure, if I can fight off this hangover long enough. Are you sure your stuff isn't broken?" Thomas replied, half joking, half trying not to pass out.

Two hours passed, half of which Thomas spent in the bathroom. He finally came back to Steven's office to report, "Everything is back to normal! We lost all the logs from the weekend, but who works on the weekend anyway?" He quickly disappeared without further explanation.

So began a repeating cycle of the downloader crashing, Thomas coming to work hung over, then fixing it without explanation. The Thomas problem got resolved before the downloader problem. He was relieved from his employment at Integrated Machinations after his sixth "no-call, no-show". This left Steven to support the downloader the next time it went down. It was completely undocumented, so he had to dig in.

He found the problem was with the log file itself, which had bad XML for some reason. Since XML has a rigorously specified "Parse or Die!" standard, and Thomas wasn't much for writing exception handlers, the next time the downloader ran, it would read in the XML file, get a parse error, and die. It was at this point Thomas would have to delete the XML file, restart the downloader, and things would get back to normal.

Digging in further, he found every time the downloader ran, it read and parsed the entire log file, then manipulated the parse tree and added a new <download> element after each record. Finally, it wrote the whole thing back to disk.


<logfile>
 <download timestamp="2016-09-04 16:23:00">
 <file name="foo_1234.data"/>
 <file name="foo_1235.data"/>
 <file name="foo_1236.data"/>
 </download>
 <download timestamp="2016-09-04 16:24:00">
 <file name="foo_1237.data"/>
 <file name="foo_1238.data"/>
 <file name="foo_1239.data"/>
 </download>
 ...
</logfile>

There was quite a bit of code dedicated to this rather complex and intricate process but there were no obvious problems with the code itself. When it ran, it worked. But if you just left the thing running from Cron, then sooner or later, something would go wrong and the XML file on disk would get corrupted. Steven never could figure out what was causing the file corruption, and it wasn't worth investigating. He tore out all the logging code and replaced it with three lines:


 # Name the file after today's date
 # >> opens the file for append. Linux *always* gets this right.
 open FILE, ">> $log_dir/$date.log";
 # Look Ma! No XML!
 print FILE "$date $time $downloaded_filename\n";
 # Delete any files that are more than 3 days old
 unlink grep { -M > 3 } <$log_dir/*.log>;

From there on, the downloader never failed again and the scourge of Thomas had been put to rest.


Planet Debian: Rhonda D'Vine: LP

I guess you know by now that I simply love music. It is powerful: it can move you, change your mood in a lot of directions, make you wanna move your body to it, even unknowingly, and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you thus I send three of which the last one let me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share it with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!


TED: Thinking differently about the election at TEDNYC: The Election Edition

TED curators Kelly Stoetzel and Helen Walters host the very first TEDNYC event in New York, NY, on September 7, 2016. (Photo: Ryan Lash / TED)

The conversation around the upcoming US presidential election is full of frenzy, headache and noise. But elections are about more than divisiveness and disagreement — they’re civic events worthy of celebration, and, while it may seem unbelievable at the moment, they hold the promise of transforming governments for the better.

At TEDNYC: The Election Edition, six speakers who think about elections differently — whether as a design challenge, a translation project or the stimulus for creative work — spoke about why the future of our shared political sphere may be brighter than it seems, and why it’s absolutely and completely necessary for Americans to vote in November.

It was our very first salon in the new theater at TED HQ, a custom-made cavern of seats, screens, cameras and all of the technical wizardry necessary to film sessions of expertly curated, intellectually stimulating TED Talks. The theater has been the working focus of many talented and dedicated TED staffers for countless months, and tonight’s inaugural session was a landmark moment for the organization and the first step in a new adventure that we can’t wait to share with you.

First up was the author of the best-selling memoir, Hillbilly Elegy, J.D. Vance.

America’s forgotten working class. J.D. Vance grew up in a small, poor, predominantly white town in the Rust Belt of southern Ohio, where he had a front-row seat to the social ills plaguing so many working-class towns like his: a heroin epidemic, families torn apart by divorce and sometimes violence. In these forgotten parts of America, structural barriers like a lack of jobs, failing schools and brain drain often prevent poor families from joining America’s fabled upward mobility. But, Vance noted, something much more difficult to quantify was infecting the minds of kids he grew up with — a sense of hopelessness and despair, a feeling that they’d never get ahead no matter how hard they worked. With the help of a perceptive grandmother who told him not to believe the deck was stacked against him, a four-year crash course in character-building in the form of the Marine Corps and a lot of luck, Vance closed the social-capital gap and went on to law school and a career in finance. But a lot of kids from his town won’t have that good luck, and that, he says, raises important questions that everyone from community leaders to policy makers needs to ask: How do we help more kids from towns like his?

J.D. Vance speaks at TEDNYC — The Election Edition, September 7, 2016, New York, NY. (Photo: Ryan Lash / TED)

The author of Hillbilly Elegy, J.D. Vance, spoke about growing up poor in southern Ohio, where his classmates shared “a sense of hopelessness that leads to conspiratorial places, the sense that ‘No matter how hard I work, they’re not going to let me in.'” He spoke at TEDNYC in New York, NY. (Photo: Ryan Lash / TED)

“Historical eras do come and go.” Journalist Michael Tomasky gives a historical crash course on how American politics has turned into such a polarized battlefield — and shares three rays of hope for the future that may break through the current ideological maelstrom. (A few key changes are on the near horizon, he says — including, believe it or not, reform of the dreaded filibuster.) Most of all, he encourages us to take the long view. “Historical eras do change,” says Tomasky. “All is not lost.”

Why ballot design matters. You’ve almost definitely made mistakes when you’ve voted, and you probably didn’t know it, says civic designer and ballot design researcher Dana Chisnell. She explains how ballot designs have confused voters and, in turn, influenced election outcomes, breaking down the story of an electoral disaster in Sarasota County, Florida — where, in 2006, 18,000 Floridians left the polls without recording a single vote in the congressional race, the hottest race on the ballot — as well as the infamous hanging chads of the US presidential election of 2000. Chisnell shares ten simple design principles, like using mixed-case lettering and avoiding centered type, which she and her team have developed to improve election ballots and to help people vote the way they intend. Learn more about the design principles and the quest to design the perfect ballot here.

Songs of protest. “When they say we want our America back, what the f*** do they mean,” asks Jill Sobule in “America Back,” a song she played often at Bernie Sanders rallies that became an anthem of his campaign. With an upbeat tune and a hefty dose of humor, Sobule reflects on the long history of immigration in America — and the anti-immigration sentiment that has always accompanied it. She also shared with the audience her wish to reinvigorate the tradition of protest art through her initiative My Song Is My Weapon, an online hub where artists and musicians can share, create and collaborate on protest songs and protest art.

A shared language of democracy. As a consultant for the United Nations, Philippa Neave works with emerging democracies to organize their very first elections, helping with the details those of us in established democracies take for granted: how to register, how to vote, why you should vote. And so often, one of the biggest stumbling blocks is language. In many countries, the appropriate language to describe the electoral process simply does not exist — or when it does, the concepts the words represent are not well understood. To right the problem, Neave worked with colleagues to establish the Arabic Lexicon of Electoral Terminology, a reference tool in Arabic, English and French that covers eight Arab countries. Even with this technical reference tool, Neave still sees an important missing piece of the puzzle, “a work of reference for the average person,” because it is only by creating a shared language and shared understanding of democracy that we can hear the voice of the voiceless: “The silent majority is silent because they don’t have the words. Let’s give them the words.”

Elections consultant Philippa Neave holds up a ballot from the 2005 election in Afghanistan, whose citizens were so eager to run for office for the first time that the ballot ended up being the size of a newspaper.  She spoke at TEDNYC in New York, NY. (Photo: Ryan Lash / TED)

It’s our country, too. Sayu Bhojwani, president and founder of The New American Leaders Project, tells the essential American immigration story through her own 16-year journey to becoming an American citizen — and urges her fellow immigrants to find their own power in the political process, by voting, by running for office, and simply by speaking up about what they care about. “Immigrants’ votes, voices and vantage points can help make our democracy strong,” she says. “We have fought to be here.” 

The joy of voting. There was a time in America when voting was fun. That time, says civics educator Eric Liu, is called most of America’s history. Tracing the robustly, raucously participatory history of voting in America, from the Revolution through to the Civil Rights era, Liu recalls American traditions of parades, street theater, open-air debates, festivals and bonfires on election day. “Decades of television and the Internet have killed much of that joyful culture of voting,” he says. “The couch has replaced the commons, and the screen has made most citizens spectators.” How can we get people excited about voting again? In partnership with the Knight Foundation, Liu has launched The Joy of Voting project, inviting artists, activists, designers, and educators across the country to come up with creative projects — from DJ sets to plays to punk rock satire — to encourage the far-too-many Americans who don’t vote to express themselves at the ballot box. “Why bother voting? Because there is no such thing as not voting,” Liu says. “In a democracy, not voting is voting — for all that you may detest and oppose.”

“Not voting can be dressed up as an act of passive resistance,” says Eric Liu, “but it’s actively handing power over to people who will gladly take advantage of your absence.” He spoke at TEDNYC in New York, NY. (Photo: Ryan Lash / TED)


Planet DebianClint Adams: Collect the towers

Why is openbmap's North American coverage so sad? Is there a reason that RadioBeacon doesn't also submit to OpenCellID? Is there a free software Android app that submits data to OpenCellID?

Planet DebianVincent Sanders: I'll huff, and I'll puff, and I'll blow your house in

Sometimes it really helps to have a different view on a problem and after my recent writings on my Public Suffix List (PSL) library I was fortunate to receive a suggestion from my friend Enrico Zini.

I had asked for suggestions on reducing the size of the library further and Enrico simply suggested Huffman coding. This was a technique I had learned about long ago in connection with data compression and the intervening years had made all the details fuzzy which explains why it had not immediately sprung to mind.

A small subset of the Public Suffix List as stored within libnspsl

Huffman coding, named for David A. Huffman, is an algorithm that produces a very efficient representation of data. In a normal array of characters every character takes the same eight bits to represent, which is the best we can do when each of the 256 possible values is equally likely. If the data is not evenly distributed this is not the case: if the data were English text, for example, the value is roughly fifteen times more likely to be that for e than for k.

Every step of the Huffman encoding tree build for the example string table

So if we have some data with a non-uniform distribution of probabilities, we need a way to encode it with fewer bits for the common values and more bits for the rarer values. To be efficient we would need some way of having variable-length representations without storing the length separately. The term for this data representation is a prefix code, and there are several ways to generate them.

Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even if they were not created using his algorithm. One can dream of becoming immortalised like this; to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm, and proved it optimal, to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.

The algorithm itself is relatively straightforward. First a frequency analysis is performed, a fancy way of saying we count how many of each character are in the input data. Next a binary tree is created using a priority queue initialised with the nodes sorted by frequency.

The resulting huffman tree and the binary representation of the input symbols
The counts of the two least frequent items are summed together and a node is placed in the tree with the two original entries as child nodes. This step is repeated until a single node exists with a count value equal to the length of the input.

To encode data one simply walks the tree, outputting a 0 for a left branch and a 1 for a right branch, until the original value is reached. This generates a mapping of values to bit sequences; the input is then simply converted value by value to the bit output. To decode, the bits are used one by one to walk the tree and arrive back at the values.

If we perform this algorithm on the example string table *!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata we can reduce the 488 bits (61 × 8-bit characters) to 282 bits, a roughly 40% reduction. Obviously in a real application the Huffman tree would need to be stored, which would probably exceed this saving, but for larger data sets it is probable this technique would yield excellent results on this kind of data.
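
The whole process can be sketched in a few lines of Python, using the standard library's heapq as the priority queue. This is my own illustration of the technique, not the encoder used for libnspsl, so the exact bit counts it produces may differ from those above:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(data):
    """Build a prefix code for `data` as a dict of symbol -> bit string."""
    freq = Counter(data)
    tick = count()  # unique tie-breaker so equal counts never compare nodes
    heap = [(n, next(tick), sym) for sym, n in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # merge the two least frequent nodes into one internal node
        n1, _, left = heapq.heappop(heap)
        n2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # internal node: 0 goes left, 1 goes right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # degenerate single-symbol input
    walk(heap[0][2], "")
    return codes

def encode(data, codes):
    return "".join(codes[sym] for sym in data)

def decode(bits, codes):
    rev = {v: k for k, v in codes.items()}  # prefix-free, so greedy match works
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

s = "*!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata"
codes = huffman_codes(s)
bits = encode(s, codes)
assert decode(bits, codes) == s
print(len(s) * 8, "->", len(bits), "bits")
```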

Once I had proved this to myself I implemented the encoder within the existing conversion program. Although my Perl encoder is not very efficient, it can process the entire PSL string table (around six thousand labels using 40KB or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.

The resulting bits were packed into 32bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently) and resulted in 18KB of output or 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59KB and is actually smaller than the gzipped source data.

$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K Sep 25 23:58 test_nspsl
$ ls -al public_suffix_list.dat.gz
-rw-r--r-- 1 vince vince 62K Sep 1 08:52 public_suffix_list.dat.gz

To be clear the statically linked program can determine if a domain is in the PSL with no additional heap allocations and includes the entire PSL ordered tree, the domain label string table and the huffman decode table to read it.

An unexpected side effect is that because the decode loop is small it sits in the processor cache. This appears to bring the performance of the string comparison function huffcasecmp() (which is not locale dependent, because we know the data is limited to ASCII) close to that of strcasecmp(); indeed, on ARM32 systems there is a very modest improvement in performance.

I think this is as much work as I am willing to put into this library but I am pleased to have achieved a result which is on par with the best of breed (libpsl still has a data representation 20KB smaller than libnspsl but requires additional libraries for additional functionality) and I got to (re)learn an important algorithm too.

Planet DebianJulian Andres Klode: Introducing TrieHash, an order-preserving minimal perfect hash function generator for C(++)

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time in parsing various files, especially Packages files. APT currently uses a function called AlphaHash which hashes the last 8 bytes of a word in a case-insensitive manner to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: When we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: A perfect hash function matches a set of words to constant values without collisions. You can thus just use the index to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive and did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The function achieves competitive performance with other hash functions, it even usually outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.
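
The concept can be sketched in Python (my own illustration of a trie-based, order-preserving lookup; the actual generator emits C, and the field names below are merely plausible examples):

```python
def build_trie(words):
    """Map each word to its index in the input list via a character trie."""
    root = {}
    for index, word in enumerate(words):
        node = root
        for ch in word.lower():          # fold case while building
            node = node.setdefault(ch, {})
        node[""] = index  # terminal marker holds the order-preserving value
    return root

def lookup(trie, word, unknown=-1):
    """Walk the trie character by character; return the word's index or `unknown`."""
    node = trie
    for ch in word.lower():
        if ch not in node:
            return unknown
        node = node[ch]
    return node.get("", unknown)

fields = ["Package", "Version", "Depends", "Tag"]
trie = build_trie(fields)
print(lookup(trie, "depends"))  # case-insensitive, order-preserving -> 2
print(lookup(trie, "Bogus"))    # not in the dictionary -> -1
```

The generated C code performs the same walk, but with the trie unrolled into nested switch statements at compile time, so no trie data structure exists at run time.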

By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details see the documentation

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not really do a straightforward translation, there are some optimisations to make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one upper case and one lowercase (with one break in it). I did not know that lowercase and uppercase characters differed by only one bit before, thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. It turned out not to be faster on a good compiler, but it’s still more readable anyway.

Finally, we build one trie per word length, and switch on the word length first. Like the 32 trick, this gives a huge improvement in performance.
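
That one-bit difference between upper and lower case ASCII letters is easy to verify; a quick illustration:

```python
# ASCII upper- and lowercase letters differ only in bit 5 (value 32, 0x20):
for upper, lower in zip("ABCXYZ", "abcxyz"):
    assert ord(upper) | 32 == ord(lower)   # OR-ing in bit 5 folds to lowercase
    assert ord(upper) ^ ord(lower) == 32   # exactly one bit apart

# so `string[i] | 32` folds case in a single instruction before each comparison
print(chr(ord('T') | 32))  # -> t
```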

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an or with 32
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one instruction more: On some architectures an unneeded zero-extend-byte instruction is inserted – this causes a 20% performance loss.

Performance evaluation

I ran the hash against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run-times:

host      arch     Trie  TrieCase  GPerfCase  GPerf  DJB
plummer   ppc64el   540       601       1914   2000  1345
eller     mipsel   4728      5255      12018   7837  4087
asachi    arm64    1000      1603       4333   2401  1625
asachi    armhf    1230      1350       5593   5002  1784
barriere  amd64     689       950       3218   1982  1776
x230      amd64     465       504       1200    837   693

Suffice to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as another reference.

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

Benchmark is available from https://github.com/julian-klode/hashbench

Usage

See the script for POD documentation.


Filed under: General

Planet DebianSteinar H. Gunderson: Nageru @ Fyrrom

When Samfundet wanted to make their own Boiler Room spinoff (called “Fyrrom”—more or less a direct translation), it was a great opportunity to try out the new multitrack code in Nageru. After all, what can go wrong with a pretty much untested and unfinished git branch, right?

So we cobbled together a bunch of random equipment from here and there:

Video equipment

Hooked it up to Nageru:

Nageru screenshot

and together with some great work from the people actually pulling together the event, this was the result. Lots of fun.

And yes, some bugs were discovered—of course, field testing without followup patches is meaningless (that would either mean you're not actually taking your test experience into account, or that your testing gave no actionable feedback and thus was useless), so they will be fixed in due time for the 1.4.0 release.

Edit: Fixed a screenshot link.

Krebs on SecurityThe Democratization of Censorship

John Gilmore, an American entrepreneur and civil libertarian, once famously quipped that “the Internet interprets censorship as damage and routes around it.” This notion undoubtedly rings true for those who see national governments as the principal threats to free speech.

However, events of the past week have convinced me that one of the fastest-growing censorship threats on the Internet today comes not from nation-states, but from super-empowered individuals who have been quietly building extremely potent cyber weapons with transnational reach.

underwater

More than 20 years after Gilmore first coined that turn of phrase, his most notable quotable has effectively been inverted — “Censorship can in fact route around the Internet.” The Internet can’t route around censorship when the censorship is all-pervasive and armed with, for all practical purposes, near-infinite reach and capacity. I call this rather unwelcome and hostile development “The Democratization of Censorship.”

Allow me to explain how I arrived at this unsettling conclusion. As many of you know, my site was taken offline for the better part of this week. The outage came in the wake of a historically large distributed denial-of-service (DDoS) attack which hurled so much junk traffic at Krebsonsecurity.com that my DDoS protection provider Akamai chose to unmoor my site from its protective harbor.

Let me be clear: I do not fault Akamai for their decision. I was a pro bono customer from the start, and Akamai and its sister company Prolexic have stood by me through countless attacks over the past four years. It just so happened that this last siege was nearly twice the size of the next-largest attack they had ever seen before. Once it became evident that the assault was beginning to cause problems for the company’s paying customers, they explained that the choice to let my site go was a business decision, pure and simple.

Nevertheless, Akamai rather abruptly informed me I had until 6 p.m. that very same day — roughly two hours later — to make arrangements for migrating off their network. My main concern at the time was making sure my hosting provider wasn’t going to bear the brunt of the attack when the shields fell. To ensure that absolutely would not happen, I asked Akamai to redirect my site to 127.0.0.1 — effectively relegating all traffic destined for KrebsOnSecurity.com into a giant black hole.

Today, I am happy to report that the site is back up — this time under Project Shield, a free program run by Google to help protect journalists from online censorship. And make no mistake, DDoS attacks — particularly those the size of the assault that hit my site this week — are uniquely effective weapons for stomping on free speech, for reasons I’ll explore in this post.

Google’s Project Shield is now protecting KrebsOnSecurity.com

Why do I speak of DDoS attacks as a form of censorship? Quite simply because the economics of mitigating large-scale DDoS attacks do not bode well for protecting the individual user, to say nothing of independent journalists.

In an interview with The Boston Globe, Akamai executives said the attack — if sustained — likely would have cost the company millions of dollars. In the hours and days following my site going offline, I spoke with multiple DDoS mitigation firms. One offered to host KrebsOnSecurity for two weeks at no charge, but after that they said the same kind of protection I had under Akamai would cost between $150,000 and $200,000 per year.

Ask yourself how many independent journalists could possibly afford that kind of protection money? A number of other providers offered to help, but it was clear that they did not have the muscle to be able to withstand such massive attacks.

I’ve been toying with the idea of forming a 501(c)3 non-profit organization — ‘The Center for the Defense of Internet Journalism’, if you will — to assist Internet journalists with obtaining the kind of protection they may need when they become the targets of attacks like the one that hit my site.  Maybe a Kickstarter campaign, along with donations from well-known charitable organizations, could get the ball rolling.  It’s food for thought.

CALIBRATING THE CANNONS

Earlier this month, noted cryptologist and security blogger Bruce Schneier penned an unusually alarmist column titled, “Someone Is Learning How to Take Down the Internet.” Citing unnamed sources, Schneier warned that there was strong evidence indicating that nation-state actors were actively and aggressively probing the Internet for weak spots that could allow them to bring the entire Web to a virtual standstill.

“Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services,” Schneier wrote. “Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that.”

Schneier continued:

“Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cyber command trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.”

Whether Schneier’s sources were accurate in their assessment of the actors referenced in his blog post is unknown. But as my friend and mentor Roland Dobbins at Arbor Networks eloquently put it, “When it comes to DDoS attacks, nation-states are just another player.”

“Today’s reality is that DDoS attacks have become the Great Equalizer between private actors & nation-states,” Dobbins quipped.

UM…YOUR RERUNS OF ‘SEINFELD’ JUST ATTACKED ME

What exactly was it that generated the record-smashing DDoS of 620 Gbps against my site this week? Was it a space-based weapon of mass disruption built and tested by a rogue nation-state, or an arch villain like SPECTRE from the James Bond series of novels and films? If only the enemy here was that black-and-white.

No, as I reported in the last blog post before my site was unplugged, the enemy in this case was far less sexy. There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords. Most of these devices are available for sale on retail store shelves for less than $100, or — in the case of routers — are shipped by ISPs to their customers.

Some readers on Twitter have asked why the attackers would have “burned” so many compromised systems with such an overwhelming force against my little site. After all, they reasoned, the attackers showed their hand in this assault, exposing the Internet addresses of a huge number of compromised devices that might otherwise be used for actual money-making cybercriminal activities, such as hosting malware or relaying spam. Surely, network providers would take that list of hacked devices and begin blocking them from launching attacks going forward, the thinking goes.

As KrebsOnSecurity reader Rob Wright commented on Twitter, “the DDoS attack on @briankrebs feels like testing the Death Star on the Millennium Falcon instead of Alderaan.” I replied that this maybe wasn’t the most apt analogy. The reality is that there are currently millions — if not tens of millions — of insecure or poorly secured IoT devices that are ripe for being enlisted in these attacks at any given time. And we’re adding millions more each year.

I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.

A scene from the 1977 movie Star Wars, in which the Death Star tests its firepower by blowing up a planet.

SHAMING THE SPOOFERS

The problem of DDoS conscripts goes well beyond the millions of IoT devices that are shipped insecure by default: Countless hosting providers and ISPs do nothing to prevent devices on their networks from being used by miscreants to “spoof” the source of DDoS attacks.

As I noted in a November 2015 story, The Lingering Mess from Default Insecurity, one basic step that many ISPs can but are not taking to blunt these attacks involves a network security standard that was developed and released more than a dozen years ago. Known as BCP38, its use prevents insecure resources on an ISPs network (hacked servers, computers, routers, DVRs, etc.) from being leveraged in such powerful denial-of-service attacks.

Using a technique called traffic amplification and reflection, the attacker can reflect his traffic from one or more third-party machines toward the intended target. In this type of assault, the attacker sends a message to a third party, while spoofing the Internet address of the victim. When the third party replies to the message, the reply is sent to the victim — and the reply is much larger than the original message, thereby amplifying the size of the attack.
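
The power of the technique lies in the size ratio: the amplification factor is simply the response size divided by the request size. The figures below are illustrative, not measurements from these attacks:

```python
# Illustrative figures only: a small spoofed request triggers a large reply
request_bytes = 60      # e.g. a small query carrying the victim's spoofed address
response_bytes = 3000   # the much larger answer, delivered to the victim instead
amplification = response_bytes / request_bytes
print(f"{amplification:.0f}x amplification")  # -> 50x amplification
```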

BCP38 is designed to filter such spoofed traffic, so that it never even traverses the network of an ISP that’s adopted the anti-spoofing measures. However, there are non-trivial economic reasons that many ISPs fail to adopt this best practice. This blog post from the Internet Society does a good job of explaining why many ISPs ultimately decide not to implement BCP38.
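
Conceptually, the filtering BCP38 asks for is simple: drop any packet leaving your network whose source address is not one of your own prefixes. A toy sketch in Python (real implementations live in router ACLs or unicast RPF, not application code, and the prefixes here are drawn from the reserved documentation ranges):

```python
import ipaddress

# Prefixes this hypothetical ISP actually originates
OUR_PREFIXES = [ipaddress.ip_network(p)
                for p in ("198.51.100.0/24", "203.0.113.0/24")]

def egress_allowed(src_ip):
    """BCP38-style check: only pass traffic claiming one of our own source addresses."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)

print(egress_allowed("198.51.100.7"))  # legitimate customer address -> True
print(egress_allowed("192.0.2.55"))    # spoofed victim address -> False
```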

Fortunately, there are efforts afoot to gather information about which networks and ISPs have neglected to filter out spoofed traffic leaving their networks. The idea is that by “naming and shaming” the providers who aren’t doing said filtering, the Internet community might pressure some of these actors into doing the right thing (or perhaps even offer preferential treatment to those providers who do conduct this basic network hygiene).

A research experiment by the Center for Applied Internet Data Analysis (CAIDA) called the “Spoofer Project” is slowly collecting this data, but it relies on users voluntarily running CAIDA’s software client to gather that intel. Unfortunately, a huge percentage of the networks that allow spoofing are hosting providers that offer extremely low-cost, virtual private servers (VPS). And these companies will never voluntarily run CAIDA’s spoof-testing tools.

CAIDA’s Spoofer Project page.

As a result, the biggest offenders will continue to fly under the radar of public attention unless and until more pressure is applied by hardware and software makers, as well as ISPs that are doing the right thing.

How might we gain a more complete picture of which network providers aren’t blocking spoofed traffic — without relying solely on voluntary reporting? That would likely require a concerted effort by a coalition of major hardware makers, operating system manufacturers and cloud providers, including Amazon, Apple, Google, Microsoft and entities which maintain the major Web server products (Apache, Nginx, e.g.), as well as the major Linux and Unix operating systems.

The coalition could decide that they will unilaterally build such instrumentation into their products. At that point, it would become difficult for hosting providers or their myriad resellers to hide the fact that they’re allowing systems on their networks to be leveraged in large-scale DDoS attacks.

To address the threat from the mass-proliferation of hardware devices such as Internet routers, DVRs and IP cameras that ship with default-insecure settings, we probably need an industry security association, with published standards that all members adhere to and are audited against periodically.

The wholesalers and retailers of these devices might then be encouraged to shift their focus toward buying and promoting connected devices which have this industry security association seal of approval. Consumers also would need to be educated to look for that seal of approval. Something like Underwriters Laboratories (UL), but for the Internet, perhaps.

THE BLEAK VS. THE BRIGHT FUTURE

As much as I believe such efforts could help dramatically limit the firepower available to today’s attackers, I’m not holding my breath that such a coalition will materialize anytime soon. But it’s probably worth mentioning that there are several precedents for this type of cross-industry collaboration to fight global cyber threats.

In 2008, the United States Computer Emergency Readiness Team (CERT) announced that researcher Dan Kaminsky had discovered a fundamental flaw in DNS that could allow anyone to intercept and manipulate most Internet-based communications, including email and e-commerce applications. A diverse community of software and hardware makers came together to fix the vulnerability and to coordinate the disclosure and patching of the design flaw.

In 2009, Microsoft heralded the formation of an industry group to collaboratively counter Conficker, a malware threat that infected tens of millions of Windows PCs and held the threat of allowing cybercriminals to amass a stupendous army of botted systems virtually overnight. A group of software and security firms, dubbed the Conficker Cabal, hashed out and executed a plan for corralling infected systems and halting the spread of Conficker.

In 2011, a diverse group of industry players and law enforcement organizations came together to eradicate the threat from the DNS Changer Trojan, a malware strain that infected millions of Microsoft Windows systems and enslaved them in a botnet that was used for large-scale cyber fraud schemes.

These examples provide useful templates for a solution to the DDoS problem going forward. What appears to be missing is any sense of urgency to address the DDoS threat on a coordinated, global scale.

That’s probably because at least for now, the criminals at the helm of these huge DDoS crime machines are content to use them to launch petty yet costly attacks against targets that suit their interests or whims.

For example, the massive 620 Gbps attack that hit my site this week was an apparent retaliation for a story I wrote exposing two Israeli men who were arrested shortly after that story ran for allegedly operating vDOS — until recently the most popular DDoS-for-hire network. The traffic hurled at my site in that massive attack included the text string “freeapplej4ck,” a reference to the hacker nickname used by one of vDOS’s alleged co-founders.

Most of the time, ne’er-do-wells like Applej4ck and others are content to use their huge DDoS armies to attack gaming sites and services. But the crooks maintaining these large crime machines haven’t just been targeting gaming sites. OVH, a major Web hosting provider based in France, said in a post on Twitter this week that it was recently the victim of an even more massive attack than hit my site. According to a Tweet from OVH founder Octave Klaba, that attack was launched by a botnet consisting of more than 145,000 compromised IP cameras and DVRs.

I don’t know what it will take to wake the larger Internet community out of its slumber to address this growing threat to free speech and ecommerce. My guess is it will take an attack that endangers human lives, shuts down critical national infrastructure systems, or disrupts national elections.

But what we’re allowing by our inaction is for individual actors to build the instrumentality of tyranny. And to be clear, these weapons can be wielded by anyone — with any motivation — who’s willing to expend a modicum of time and effort to learn the most basic principles of its operation.

The sad truth these days is that it’s a lot easier to censor the digital media on the Internet than it is to censor printed books and newspapers in the physical world. On the Internet, anyone with an axe to grind and the willingness to learn a bit about the technology can become an instant, self-appointed global censor.

I sincerely hope we can address this problem before it’s too late. And I’m deeply grateful for the overwhelming outpouring of support and solidarity that I’ve seen and heard from so many readers over the past few days. Thank you.

Planet DebianSven Hoexter: in causa wosign

Since I kind of recommended the free WoSign CA in the past, I would like to point out the issues that have piled up. Mozilla has a writeup due to a removal discussion for NSS: https://wiki.mozilla.org/CA:WoSign_Issues (Yes I'm late with this post, about a month or two by now ...)

Since WoSign, or the person behind it, also silently bought StartCom, two of the three free CAs (StartSSL and WoSign) are now in one hand, with a questionable track record. That leaves everyone looking for a low-budget option with Let's Encrypt.

Planet DebianRuss Allbery: podlators 4.08

A new release of the distribution that provides Pod::Man and Pod::Text for Perl documentation formatting.

The impetus for this release is fixing a rendering bug in Pod::Man that spewed stray bits of half-escaped *roff into the man page for the text "TRUE (1)". This turned out to be due to two interlocking bugs in the dark magic regexes that try to fix up formatting to make man pages look a bit better: incorrect double-markup in both small caps and as a man page reference, and incorrect interpretation of the string "\s0(1)". Both are fixed in this release.

podlators 4.00 changed Pod::Man to make piping POD through pod2man on standard input without providing the --name option an error, since there was no good choice for the man page title. This turned out to be too disruptive: the old behavior of tolerating this had been around for too long, and I got several bug reports. Since I think backward compatibility is extremely important for these tools, I've backed down from this change, and now Pod::Man and pod2man just silently use the man page name "STDIN" (which still addresses the original problem, since the output remains reproducible).

It is, of course, still a good idea to provide the name option when dealing with standard input, since "STDIN" isn't a very good man page title.

This release also adds new --lquote and --rquote options to pod2man to set the quote marks independently, and removes a test that relied on a POD construct that is going to become an error in Pod::Simple.

You can get the latest release from the podlators distribution page.

,

Planet DebianDirk Eddelbuettel: tint 0.0.1: Tint Is Not Tufte

A new experimental package is now on the ghrr drat. It is named tint which stands for Tint Is Not Tufte. It provides an alternative for Tufte-style html presentation. I wrote a bit more on the package page and the README in the repo -- so go read this.

Here is just a little teaser of what it looks like:

and the full underlying document is available too.

For questions or comments use the issue tracker off the GitHub repo. The package may be short-lived as its functionality may end up inside the tufte package.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianIain R. Learmonth: Azure from Debian

Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the azure-sdk-for-python and mentioned that I wanted to look into this in #debian-python. ardumont from Software Heritage noticed me, and was planning to package azure-storage-python. We joined forces and started a packaging team for Azure-related software.

I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me that this fit my use case better than the SDK alone, as it had the high level commands I was looking for.

Between me and ardumont, in the space of just under a week, we have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.

azure-cli, as we have packaged, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it I’ve been very impressed with it.

In order to set it up initially, you have to configure a couple of defaults using az configure. After that, you need to az login, which again is an entirely painless process as long as you have a web browser handy in order to perform the login.

After those two steps, you’re only two commands away from deploying a Debian virtual machine:

az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh

This will create a resource group, and then create a VM within that resource group with a user automatically created with your current username and with your SSH public key (~/.ssh/id_rsa.pub) automatically installed. Once it returns you the IP address, you can SSH in straight away.

Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap and I’ll be exploring this in the lead up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK later in the year (still being organised).

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.70

I'm pleased to announce the release of Laptop Mode Tools, version 1.70. This release adds support for AHCI Runtime PM, introduced in Linux 4.6. It also includes many important bug fixes, mostly related to invocation and determination of power states.

Changelog:

1.70 - Sat Sep 24 16:51:02 IST 2016
    * Deal harder with broken battery states
    * On machines with 2+ batteries, determine states from all batteries
    * Limit status message logging frequency. Some machines tend to send
      ACPI events too often. Thanks Maciej S. Szmigiero
    * Try harder to determine power states. As reports have shown, the
      power_supply subsystem has had incorrect state reporting on many machines,
      for both, BAT and AC.
    * Relax conditional events where Laptop Mode Tools should be executed. This
      affected use cases of the laptop being docked and undocked.
      Thanks Daniel Koch.
    * CPU Hotplug settings extended
    * Cleanup states for improved Laptop Mode Tools invocation
      Thanks: Tomas Janousek
    * Align Intel P State default to what the actual driver (intel_pstate.c) uses
      Thanks: George Caswell and Matthew Gabeler-Lee
    * Add support for AHCI Runtime PM in module intel-sata-powermgmt
    * Many systemd and initscript fixes
    * Relax default USB device list. This avoids the long standing issues with
      USB devices (mice, keyboard) that mis-behaved during autosuspend

Source tarball, Fedora/SUSE RPM Packages available at:
https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools
    
 


Planet DebianJames McCoy: neovim-enters-stretch

Last we heard from our fearless hero, Neovim, it was just entering the NEW queue. Well, a few days later it landed in experimental, and now, eight months to the day since then, it is in Stretch.

Enjoy the fish!

LongNowLong Now’s First Ever Member Summit: October 4, 02016

The Long Now Member Summit - Oct. 4, 02016

Our first ever global gathering is less than two weeks away!
Join us in San Francisco on October 4th, 02016.

In 01996: The Long Now Foundation was established to foster long-term thinking and responsibility in the framework of the next 10,000 years.

In 02007: The Long Now Foundation’s Membership program was launched. The list of our 1,000 Charter Members is here.

On October 4th, 02016 we will host the first ever global gathering of Long Now members. Our membership has grown to nearly 8,000 people around the world. It’s time we got together.

In celebration of Long Now’s 20th anniversary our Member Summit will be a day dedicated to long-term thinking. We will have components of the 10,000 Year Clock on display–which will later be installed in West Texas.

The Clock of the Long Now: actual components of our 10,000 Year Clock will be on display at the Summit

Our staff will give updates on our projects (including the Clock). Long Now founders and Board will be on stage, but we’ll also have talks & discussions led by Long Now members, hundreds of whom will travel to San Francisco for this event.

The Interval at Long Now, our bar/cafe/museum, will be at the center of the Summit. The Interval is full of Long Now-related information & artifacts, including Clock of the Long Now prototypes, passenger pigeons, thousands of books, and the art of Brian Eno.

There’s much more–dinner from Off The Grid food trucks, drinks from The Interval menu, a festival of short films about long-term thinking co-curated by our members, and more. Tickets are still available.

Join us at the Summit and help celebrate the first 20 years of Long Now!

Celebrating 20 years (so far) of Long Now
Featuring a keynote presentation by David Eagleman

Neuroscientist and author David Eagleman speaks at The Long Now Member Summit, October 4, 02016

,

CryptogramFriday Squid Blogging: Space Kraken

A Lego model of a giant space kraken destroying a Destroyer from Star Wars.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramiPhone 7 Jailbreak

Sociological ImagesNorms, Normality, and Normativity

Flashback Friday.

Sociologists distinguish between the terms norm, normal, and normative.

  • The norm refers to what is common or frequent.  For example, celebrating Christmas is the norm in America.
  • Normal is opposed to abnormal.  Even though celebrating Christmas is the norm, it is not abnormal to celebrate Hanukkah.  To celebrate Hanukkah is perfectly normal.
  • In contrast to both of these, normative refers to a morally-endorsed ideal. Some Americans make a normative argument that Americans should celebrate Christmas because they believe (wrongly) that this is a Christian country.

A thing can be the norm but not be normative. For example, a nuclear family with a married man and woman and their biological children is normative in the U.S., but it is certainly not the norm. Likewise, something can be normal but not the norm. It’s perfectly normal, for example, to date people of the same sex (so say the scientists of our day), but it’s not the norm. And something can be both normal and the norm, but not be normative, like Americans’ low rates of physical activity.

These three terms do not always work in sync, which is why they’re interesting.

I thought of these distinctions when I looked at a submission by Andrew, who blogs at Ethnographer. Bike lanes in Philadelphia used to be designated with this figure:

Today, however, they’re designated by this one:

Do you see the difference? The new figures are wearing bike helmets. The addition is normative. It suggests that bikers should be wearing bike helmets. It may or may not be the norm, and it certainly isn’t normal or abnormal either way, but the city of Philadelphia is certainly attempting to make helmets normative.

Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Surprise!

"In life, there are good surprises and bad surprises, but Microsoft fails to differentiate," writes Rob.

 

"Now I know how people from New Foundland feel," writes Chris B.

 

Austin S. writes, "Who knew that doing squats was the secret to bypassing *nix security?"

 

Paweł wrote, "I think WolframAlpha should fix its own problems before generating new ones."

 

"£0? I'll take one. Hell, I'll take ten!" writes Will K.

 

Rick R. wrote, "What happened with Detroit? Did they run around the bases in the wrong direction?"

 

Alex F. wrote, "Sometimes I guess string casting is like throwing spaghetti and hoping that it sticks."

 


Planet DebianArturo Borrero González: Blog moved from Blogger to Jekyllrb at GithubPages


This blog has finally moved away from blogger to jekyll, also changing the hosting and the domain. No new content will be published here.

New coordinates:

This blogger blog will remain as archive, since I don't plan to migrate the content from here to the new blog.

So, see you there!


,

Planet Linux AustraliaStewart Smith: First look at MySQL 8.0.0 Milestone

So, about ten days ago the MySQL Server Team released MySQL 8.0.0 Milestone to the world. One of the most unfortunate things about MySQL development is that it’s done behind closed doors, with the only hints of what’s to come arriving maybe in a note on a bug, or in milestone releases like this one that contain a lot of code changes. How much code change? Well, according to the text up on github for the 8.0 branch, “This branch is 5714 commits ahead, 4 commits behind 5.7.”

Way back in 2013, I looked at MySQL Code Size over releases, which I can again revisit and include both MySQL 5.7 and 8.0.0.

While 5.7 was a big jump again, we seem to be somewhat leveling off, which is a good thing. Managing to add features and fix long standing problems without bloating code size is good for software maintenance. Honestly, hats off to the MySQL team for keeping it to around a 130kLOC code size increase over 5.7 (that’s around 5%).

These days I’m mostly just a user of MySQL, pointing others in the right direction when it comes to some issues around it and being the resident MySQL grey(ing)beard(well, if I don’t shave for a few days) inside IBM as a very much side project to my day job of OPAL firmware.

So, personally, I’m thrilled about no more FRM, better Unicode, SET PERSIST and performance work. With my IBM hat on, I’m thrilled about the fact that it compiled on POWER out of the box and managed to work (I haven’t managed to crash it yet). There seems to be a possible performance issue, but hey, this is a huge improvement over the 5.7 developer milestones when run on POWER.

A lot of the changes are focused around usability, making MySQL easier to manage and easier to run at a moderate scale or beyond. This is long overdue and it’s great to see even seemingly trivial things like SET PERSIST coming (I cannot tell you how many times that has tripped me up).

In a future post, I’ll talk about the FRM removal!

Planet DebianJonathan Dowland: WadC 2.1

WadC

Today I released version 2.1 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

This comes about a year after version 2.0. The most significant change is an adjustment to the line splitting algorithm to fix a long-standing issue when you try to draw a new linedef over the top of an existing one, but in the opposite direction. Now that this bug is fixed, it's much easier to overdraw vertical or horizontal lines without needing an awareness of the direction of the original lines.

The other big changes are in the GUI, which has been cleaned up a fair bit, now has undo/redo support, the initial window size is twice as large, and it now supports internationalisation, with a partial French translation included.

This version is dedicated to the memory of Professor Seymour Papert (1928-2016), co-inventor of the LOGO programming language.

For more information see the release notes and the reference.

Planet DebianJoey Hess: keysafe beta release

After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.

TEDMeet the Fall 2016 class of TED Residents

TED Residents - TED HQ, September 2016, New York, New York. Photo: Dian Lofton/TED

TED Residents Susan Bird, Torin Perez and Che Grayson from our first cohort of TED Residents. Photo: Dian Lofton/TED

On September 12, TED welcomed its latest class of the TED Residency program, an in-house incubator for breakthrough ideas. Residents spend four months in the TED office with fellow brilliant minds who are creatively taking on projects that are making significant changes in their communities, across many different fields.

The new Residents include:

  • A fashion designer who is calling out pollution in the garment industry
  • A pair of musicians who are building an online resource to match artists with grants
  • An entrepreneur who is using geolocation and mobile technology to tackle the massive global litter problem
  • A teacher who is turning the children of an entire school district into citizen scientists providing research data on the Bronx River
  • A financial tech veteran who is interested in making our smartphone the focus of conservation!

At the end of their session, Residents have the opportunity to give a TED Talk about their ideas in the theater of TED HQ.  Read more about each Resident below:

Kevin F. Adler is the founder of Miracle Messages, a social venture that uses short videos, social media, and a global network of volunteers to reunite homeless people with their long-lost loved ones. His goal is to serve 1% of the world’s homeless population by 2021.

Zubaida Bai cofounded AYZH (pronounced “eyes”) seven years ago to bring simplicity and dignity to women’s healthcare worldwide. Innovations such as her Clean Birth Kit in a Purse are saving and changing the lives of the world’s most vulnerable women and children.

Formerly a career diplomat, Miriam Bekkouche’s current work combines the latest in neuroscience and behavioral psychology with ancient traditional wisdom. She is the founder of Brain Spa, a coaching and consulting company that explores what mindfulness practice can bring to global problems.

Jordan Brown is a digital health professional who is developing a platform to promote the use of virtual reality and immersive video games in healthcare. In 2014, he founded MedPilot, which tackles the challenges of rising consumer medical costs.

Angel Chang is a womenswear designer working with traditional hand-woven textiles of ethnic minority tribes in rural China. She is taking what she’s learned about indigenous crafts and applying that knowledge to make the fashion industry more sustainable.

TED Residents Jeff Kirschner and Kunal Sood at TED HQ, New York, New York. Photo: Dian Lofton/TED

In his doctoral studies at Cornell University, Abram Coetsee studies the intersection of museums, new media and graffiti. Currently, he is curating a 3D digital reconstruction of 5Pointz, a New York City landmark until it was destroyed by real estate developers in 2014.

Sharon De La Cruz is CEO and Creative Coder of the Digital Citizens Lab, a design collective with a focus on civic technology. Using play as a fundamental tool, Sharon and her team create resources for educators that can meet the needs of historically underserved children of color. Their primary product, “El Cuco,” is an interactive digital comic built to teach children code logic.

As an oud player, Hadi Eldebek has toured with Yo-Yo Ma’s Silk Road Ensemble. As a cultural entrepreneur based in New York City, he is collaborating with his brother, Mohamad Eldebek, on two projects: GrantPA, a platform that helps artists find and apply for grants, and Circle World Arts, a global network of workshops that connects artists, audiences, and institutions across continents, languages, and traditions. Mohamad plays percussion and has a master’s degree in neuroscience.

Trained as a visual artist, Danielle Gustafson launched the New York Stock Exchange’s first website in 1996. For the next decade, while serving as a digital-strategy executive in financial services, she also (literally) moonlighted as cofounder of the NYC Bat Group. She now advocates for broader awareness and study of bats, and believes that the smartphone may be the most important conservation tool of the 21st century.

Shani Jamila is an artist and a managing director of the Urban Justice Center. She creates pieces and curates public programs that use the arts to explore justice, identity and global engagement.

Brooklyn-based filmmaker, eco-activist, and futurist Shalini Kantayya uses her work as a tool to inspire audiences to action. The mission of her production company, 7th Empire Media, is to create a sustainable planet and a culture of human rights through imaginative media.

Francesca Kennedy founded Ix Style after seeing Lake Atitlan, in which she was baptized, contaminated and overrun with algae (“Ix” is the Mayan word for water). The company sells huarache sandals and fashion accessories made by Mayan artisans in Guatemala, and then donates a portion of every sale to providing clean drinking water for local children.

TED Resident Francesca Kennedy (second from left) works with her collaborators from Ix-Style at TED HQ, September 2016, New York, New York. Photo: Dian Lofton/TED

A backpacker-turned-bartender, Jeff Kirschner is a serial entrepreneur with a love for storytelling. His latest venture is Litterati, a growing global community that’s crowdsource-cleaning the planet, one piece at a time.

Lia Oganesyan believes in the power of virtual reality to foster community. Previously, she co-published DARPA-funded research that assessed how soldiers with PTSD responded to virtual human therapists. She is now building VeeR Hub, an online marketplace for virtual reality content creation.

Marlon Peterson is president of The Precedential Group, a social justice consultancy. He is a gun violence prevention strategist, writer, and now media maker.

While maintaining her fulfilling career as a healthcare and life-sciences executive, Susan C. Robinson is exploring how the natural expertise and unique skill sets of people with “disabilities” may be reframed–and how businesses can thrive while pioneering new standards for diversity and inclusion.

Kunal Sood is the founder and CXO of X Fellows and cofounder of NOVUS, which recently hosted a summit at the United Nations that explored using exponential technologies and innovation to achieve the 17 UN Global Goals. He is writing a book about exponential happiness.

Artist Rachel Sussman created “The Oldest Living Things in the World” (see her 2010 TED Talk here). Her current work includes a massive, handwritten timeline of the history—and future—of the space-time continuum (at MASS MoCA through April 2017), a sand mandala of the Cosmic Microwave Background (Oct. 28 to March 5, 2017, at New Museum Los Gatos), and a 100-Year Calendar, which she will develop during her TED Residency.

Elizabeth Waters is a neuroscientist and educator working to enrich and expand science education. Her school-based research projects engage students in rigorous, purposeful multiyear science experiments. In her K–12 curriculum for the Bronxville, New York school system, kids will collect, analyze and interpret data on water quality in the Bronx River over the next five years.

TED Residents Elizabeth Waters, Abram Coetsee, Kevin Adler at TED HQ, September 2016, New York, New York. Photo: Dian Lofton/TED


Geek FeminismQuick Hit: Toward a !!Con Aesthetic – new essay at The Recompiler

Over at The Recompiler, I have a new essay out: “Toward A !!Con Aesthetic”. I talk about (what I consider to be) the countercultural tech conference !!Con, which focuses on “the joy, excitement, and surprise of programming”. If you’re interested in hospitality and inclusion in tech conferences — not just in event management but in talks, structure, and themes — check it out. (Christie Koehler also interviews me about this and about activist role models, my new consulting business, different learning approaches, and more in the latest Recompiler podcast.)

CryptogramAmtrak Security Awareness

I like this Amtrak security awareness campaign. Especially the use of my term "security theater."

Planet DebianGustavo Noronha Silva: WebKitGTK+ 2.14 and the Web Engines Hackfest

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors, and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meet you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown to screen. It was not ready though and there were questions as to whether that was the way to go and alternative proposals were floating around on how to best implement it.

At last year’s hackfest we had discussions about what the best path for that would be where collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit on figuring out how to implement it in a way that was both efficient and platform agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. On our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl() though, so we will look into that.

Another very interesting igalian addition to this release is support for the MemoryPressureHandler even on systems with no cgroups set up. The memory pressure handler is a WebKit feature which flushes caches and frees resources that are not being used when the operating system notifies it memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, Epiphany will benefit even without the custom cgroups setup, by having the UIProcess monitor memory usage and notify each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

Planet Linux AustraliaBinh Nguyen: Inside North Korea, Russia Vs USA Part 3, and More

At times, you have to admit, the international situation regarding North Korea is fairly comical. The core focus has basically been its nuclear weapons programs, but obviously there's a lot more to a country than just its defense force. I wanted to take a look at what's happening inside. While they clearly have difficulties, things don't seem entirely horrible? North Korea’s Nuclear

TEDTEDWomen Update: Memory Banda and a warrior’s cry against child marriage

Over the years, we’ve had so many wonderful and moving talks at the TEDWomen conference, but perhaps one of the most striking was Malawi activist Memory Banda. The amazing 18-year-old presented at last year’s event – and inspired us all with her story.

Memory began her talk by reciting a poem written by another young woman she knows, 13-year-old Eileen Piri, entitled “I’ll Marry When I Want.” Memory told the audience that the poem might seem odd written by a 13-year-old girl, but in her home country of Malawi, she called it “a warrior’s cry.”

She told the audience how there was a traditional rite of passage in her country in which young girls who have just reached puberty were sent to “initiation camps” to learn how to please men sexually. As part of their initiation, a man visits the camp and the young girls are forced to have sex with him. Many girls end up pregnant or with sexually transmitted diseases, including AIDS.

Memory chose a different path. She refused to go to the camp. She wanted to continue her education and had dreams of being a lawyer. She became an activist and, with the help of the Girls Empowerment Network (Genet), a group dedicated to ending the practice of forced child marriage in Malawi, she began talking to other young women about their experiences.

At the time, Malawi had the highest rates of child marriage in the world. A 2014 Human Rights Watch report outlined the shocking statistics: one out of two girls in the country on average will be married by her 18th birthday. “In 2010, half of women aged 20 to 24 years were married or in unions before they were 18. Some are as young as 9 or 10 when they are married.”

Memory continued with her own schooling and began teaching other young women how to read and write. With the support of Genet and Let Girls Lead, she worked on a storytelling project in which girls were encouraged to share their stories – the dreams they had for themselves, as well as the obstacles they faced – in art, poetry and storytelling.

Memory says that participating in Genet’s River of Life project was transformational for her: “Until then, I always thought I was the only one who suffered. But sharing my story gave me strength to know that I wasn’t alone.”

As she explained in her TED Talk, the girls published their stories and they became part of a campaign to outlaw child marriage in Malawi. A female chief from Memory’s community joined the fight, and the girls worked with her and other village chiefs to develop bylaws banning the initiation camps and child marriage. Eventually, their advocacy went all the way to President Mutharika, who agreed with the girls that the sanctioning of child marriage was a “national disgrace.”

Last year, Malawi officially outlawed the marriage of girls younger than 18 years old. But, as Memory explained in her TED Talk, changing the law is one thing, enforcing it is quite another. Today, she continues to work on the issue, not only for young women in rural areas who might not be aware of the new protections that exist for them, but for young women in other countries where laws still need to be enacted.

Since Memory appeared at TEDWomen in 2015, response to her TED Talk in Malawi and around the world has been phenomenal – it has been viewed over 1.1 million times! This visibility has helped raise Memory’s profile as a global advocate and Rise Up girl leader. Memory is a globally renowned champion for girls’ rights and an advisor to global leaders on the importance of investing in girls. She is currently beginning her sophomore year of college in Malawi, achieving her dream of completing her education.


The TEDWomen conference is sold out now but we have decided to offer discounted registrations that include all conference activities except for guaranteed seats in the theater. These registrations provide comfortable viewing in the Simulcast Lounge where everyone gathers during breaks between sessions.  Find out more at the TEDWomen website.


Planet DebianZlatan Todorić: Open Source Motion Comic Almost Fully Funded - Pledge now!

The Pepper and Carrot motion comic is almost funded. The pledge from Ethic Cinema put it on a good road (it had seemed it would fail). Ethic Cinema is a non-profit organization that wants to make open source art (or, as they call it, Libre Art). Purism's creative director, François Téchené, is a member and co-founder of Ethic Cinema. Let's push the final bits so we can get this free-as-in-freedom artwork.

Notice that Pepper and Carrot is a webcomic (also available as a book) and free-as-in-freedom artwork by David Revoy, who also supports this campaign. The campaign is also supported by the Krita community on their landing page.

Let's do this!

Planet Linux AustraliaStewart Smith: Lesson 124 in why scales on a graph matter…

The original article presented two graphs: one of MariaDB searches (which are increasing) and the other showing MySQL searches (decreasing or leveling out). It turns out that the y axis REALLY matters.

I honestly expected better….

Worse Than FailureCodeSOD: As The World Ternaries

Ah, the ternary operator. At their worst they’re a way to obfuscate your code. At their best, they’re a lovely short-hand.

For example, you might use the ternary operator to validate the inputs of a function or handle a flag.
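Both of those legitimate uses can be sketched in a few lines of Ruby (the method and parameter names here are illustrative, not from the article):

```ruby
# Validate/clamp an input: a short, readable ternary.
def page_size(requested, max = 100)
  requested > max ? max : requested
end

# Branch on a boolean flag without a four-line if/else.
def greeting(formal)
  formal ? "Good day" : "Hi"
end

puts page_size(250)   # 100
puts greeting(false)  # Hi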

Adam Spofford found this creative use of the ternary operator in a game he’s developing for:

    this.worldUuid = builder.worldId == null ? null : builder.worldId;
    this.position = builder.position == null ? null : builder.position;
    this.rotation = builder.rotation == null ? null : builder.rotation;
    this.scale = builder.scale == null ? null : builder.scale;

    this.worldUuid = builder.worldId;
    this.position = builder.position;
    this.rotation = builder.rotation;
    this.scale = builder.scale;

Curious about how this specific block came to be, Adam poked through the Git history. For starters, the previous version of the code was the last four lines: the sane ones. According to git blame, the project lead added the four ternary lines, with a commit message that simply read: “Constructing world details”. That explains everything.

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet DebianJunichi Uekawa: Tried creating a GCE control panel for myself.

Tried creating a GCE control panel for myself. The GCP GCE control panel takes about 20 seconds to load for me; the CPU is busy loading the page. It does so many things and it's very complex. I noticed that the API isn't that slow, so I used OAuth to let me do what I usually want: list the hosts, start/stop instances, and list the IPs. It takes 500ms instead of 20 seconds. I've put the service on App Engine. The hardest part was figuring out how this OAuth2 dance was supposed to work; all the Python documentation I have seen was somewhat outdated, and I had to rewrite it to a workable state. The documents were outdated but the sample code was fixed. I had to read up on vendoring and pip and other stuff in order to get all the dependencies installed. I guess my Python App Engine skills are too rusty now.

Planet Linux AustraliaStewart Smith: Compiling your own firmware for Barreleye (OpenCompute OpenPOWER system)

Aaron Sullivan announced on the Rackspace Blog that you can now get your own Barreleye system! What’s great is that the code for the Barreleye platform is upstream in the op-build project, which means you can build your own firmware for them (just like garrison, the “IBM S822LC for HPC” system I blogged about a few days ago).

Remarkably, to build an image for the host firmware, it’s eerily similar to any other platform:

git clone --recursive https://github.com/open-power/op-build.git
cd op-build
. op-build-env
op-build barreleye_defconfig
op-build

…and then you wait. You can cross compile on x86.

You’ve been able to build firmware for these machines with upstream code since Feb/March (I wouldn’t recommend running with builds from then though, try the latest release instead).

Hopefully, someone involved in OpenBMC can write on how to build the BMC firmware.

Krebs on SecurityKrebsOnSecurity Hit With Record DDoS

On Tuesday evening, KrebsOnSecurity.com was the target of an extremely large and unusual distributed denial-of-service (DDoS) attack designed to knock the site offline. The attack did not succeed thanks to the hard work of the engineers at Akamai, the company that protects my site from such digital sieges. But according to Akamai, it was nearly double the size of the largest attack they’d seen previously, and was among the biggest assaults the Internet has ever witnessed.

The attack began around 8 p.m. ET on Sept. 20, and initial reports put it at approximately 665 Gigabits of traffic per second. Additional analysis on the attack traffic suggests the assault was closer to 620 Gbps in size, but in any case this is many orders of magnitude more traffic than is typically needed to knock most sites offline.

Martin McKeay, Akamai’s senior security advocate, said the largest attack the company had seen previously clocked in earlier this year at 363 Gbps. But he said there was a major difference between last night’s DDoS and the previous record holder: The 363 Gbps attack is thought to have been generated by a botnet of compromised systems using well-known techniques allowing them to “amplify” a relatively small attack into a much larger one.

In contrast, the huge assault this week on my site appears to have been launched almost exclusively by a very large botnet of hacked devices.

The largest DDoS attacks on record tend to be the result of a tried-and-true method known as a DNS reflection attack. In such assaults, the perpetrators are able to leverage unmanaged DNS servers on the Web to create huge traffic floods.

Ideally, DNS servers only provide services to machines within a trusted domain. But DNS reflection attacks rely on consumer and business routers and other devices equipped with DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these so-called “open recursive” DNS servers, forging the request so that it appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (target) address.

The bad guys also can amplify a reflective attack by crafting DNS queries so that the responses are much bigger than the requests. They do this by taking advantage of an extension to the DNS protocol that enables large DNS messages. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This “amplification” effect is especially pronounced if the perpetrators query dozens of DNS servers with these spoofed requests simultaneously.

But according to Akamai, none of the attack methods employed in Tuesday night’s assault on KrebsOnSecurity relied on amplification or reflection. Rather, many were garbage Web attack methods that require a legitimate connection between the attacking host and the target, including SYN, GET and POST floods.

That is, with the exception of one attack method: Preliminary analysis of the attack traffic suggests that perhaps the biggest chunk of the attack came in the form of traffic designed to look like it was generic routing encapsulation (GRE) data packets, a communication protocol used to establish a direct, point-to-point connection between network nodes. GRE lets two peers share data they wouldn’t be able to share over the public network itself.

“Seeing that much attack coming from GRE is really unusual,” Akamai’s McKeay said. “We’ve only started seeing that recently, but seeing it at this volume is very new.”

McKeay explained that the source of GRE traffic can’t be spoofed or faked the same way DDoS attackers can spoof DNS traffic. Nor can junk Web-based DDoS attacks like those mentioned above. That suggests the attackers behind this record assault launched it from quite a large collection of hacked systems — possibly hundreds of thousands of systems.

“Someone has a botnet with capabilities we haven’t seen before,” McKeay said. “We looked at the traffic coming from the attacking systems, and they weren’t just from one region of the world or from a small subset of networks — they were everywhere.”

There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

As noted in a recent report from Flashpoint and Level 3 Threat Research Labs, the threat from IoT-based botnets is powered by malware that goes by many names, including “Lizkebab,” “BASHLITE,” “Torlus” and “gafgyt.” According to that report, the source code for this malware was leaked in early 2015 and has been spun off into more than a dozen variants.

“Each botnet spreads to new hosts by scanning for vulnerable devices in order to install the malware,” the report notes. “Two primary models for scanning exist. The first instructs bots to port scan for telnet servers and attempts to brute force the username and password to gain access to the device.”

Their analysis continues:

“The other model, which is becoming increasingly common, uses external scanners to find and harvest new bots, in some cases scanning from the [botnet control] servers themselves. The latter model adds a wide variety of infection methods, including brute forcing login credentials on SSH servers and exploiting known security weaknesses in other services.”

I’ll address some of the challenges of minimizing the threat from large-scale DDoS attacks in a future post. But for now it seems likely that we can expect such monster attacks to soon become the new norm.

Many readers have been asking whether this attack was in retaliation for my recent series on the takedown of the DDoS-for-hire service vDOS, which coincided with the arrests of two young men named in my original report as founders of the service.

I can’t say for sure, but it seems likely related: Some of the POST request attacks that came in last night as part of this 620 Gbps attack included the string “freeapplej4ck,” a reference to the nickname used by one of the vDOS co-owners.

Update Sept. 22, 8:33 a.m. ET: Corrected the maximum previous DDoS seen by Akamai. It was 363, not 336 as stated earlier.

,

Planet DebianC.J. Adams-Collier: virt manager cannot find suitable emulator for x86 64

Looks like I was missing qemu-kvm.

$ sudo apt-get install qemu-kvm qemu-system

Google AdsenseGoogle Certified Publishing Partner Spotlight: learn about how Ezoic and SalesFrontier boosted publisher’s monetization

Whether you’re just starting out with ads, fine-tuning your existing ad setup or looking for brand new revenue sources, Certified Publishing Partners are ready to help you achieve your goals. Learn about how two of our partners helped websites like yours earn more revenue through innovative approaches.

Ezoic is an automated website testing company that helps publishers to evaluate and optimize ad placements and website layouts. According to an internal study commissioned by Ezoic, SimplyPsychology.org, a popular education site, increased revenue by more than 400% and saw an 84% rise in time spent on site by optimizing their website layout. John Cole, Chief Customer Officer of Ezoic says “What Ezoic does is use analytics to make sure that the user experience is superior. We’re tackling the enormously complex task of balancing user experience, content, and monetization. And we’re doing it through data that spans all platforms.” Read more in Ezoic’s partner spotlight.



SalesFrontier, a strong partner for mobile optimization, helps nearly 200 publishers optimize their digital strategies and increase revenue. James Lan, New Media Business Department Vice President of Sanlih E-Television, says “SalesFrontier is a highly recommended Google AdSense partner that dramatically increased our market share in digital advertising. We trust their professional technical support and outstanding consultant services.” Read how SalesFrontier helped Sanlih E-Television Co., Ltd., a large media publisher, grow its mobile revenue business. Read more in SalesFrontier’s partner spotlight.



Since its launch in 2015, the goal of the Certified Publishing Partners program has been to find the best partners to give publishers like you the extra support to grow your website. To learn more about the program or get started, check out the Certified Publishing Partners website.

Posted by Danielle Landress, from the AdSense team

Planet DebianMatthew Garrett: Microsoft aren't forcing Lenovo to block free operating systems

There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.


Sociological ImagesOn Burkini Bans and Institutional Racism

Originally posted at Racism Review.

The photos capture a woman lying serenely on a pebble beach. She is unaware of the four men as they approach. They wear guns and bulletproof vests, and demand the woman remove her shirt. They watch as she complies. This scene was reported in recent weeks by news outlets across the globe. More than twenty coastal towns and cities in France imposed bans on the burkini, the full body swimsuit favored by religious Muslim women.

Flickr photo by Bruno Sanchez-Andrade Nuño

French politicians have falsely linked the burkini with religious fundamentalism. They have employed both blatant and subtly racist language to express indignation at the sight of a non-white, non-Western female body in a public space designated as “white.” Like many, I have been transfixed by the images of brazen discrimination and shaming. Although the woman in the photographs, identified only as Siam, was not wearing a burkini, her body was targeted by a racist institution, the State.

Olivier Majewicz, the Socialist mayor of Oye-Plage, a town on the northern coast of France, described a Muslim woman on the beach as appearing “a bit wild, close to nature.” Her attire, he said, was not “what one normally expects from a beachgoer… we are in a small town and the beach is a small, family friendly place.” France’s Socialist Prime Minister, Manuel Valls, utilized more direct language, stating that the burkini enslaved women and that the “nation must defend itself.” Similarly blunt, Thierry Migoule, an official with the municipal services in Cannes, said the burkini “conveys an allegiance to the terrorist movements that are waging war against us.”

These quotes reflect the pernicious limitations of the white gaze. When I look at the photos of Siam, I see a woman, a mother, being forced to undress before a crowd of strangers. I can hear her children, terrified, crying nearby. Siam’s encounter was a scene of trauma, and as Henri Rossi, the vice president of the League of Human Rights in Cannes, said “this trauma has not been cured; the convalescence has not yet begun.”

Some sixty years ago, Frantz Fanon in Black Skin, White Masks, explored the relationships between the white gaze and the black body, specifically in France and its colonies. In the age of the burkini ban, Fanon’s observations ring poignant and true. He writes: “…we were given the occasion to confront the white gaze. An unusual weight descended on us. The real world robbed us of our share. In the white world, the man of color encounters difficulties in elaborating his body schema. The image of one’s body is solely negating. It’s an image in the third person. All around the body reigns an atmosphere of certain uncertainty.” Fanon’s words could serve as the soundtrack to Siam’s encounter with the police. She was robbed of her share, her body negated and deemed a public threat by the white gaze.

In the wake of recent terrorist attacks in France, politicians have capitalized on the politics of fear in order to renegotiate the boundaries of institutional racism as expressed in the public sphere. In Living with Racism, Joe Feagin and Melvin Sikes quote Arthur Brittan and Mary Maynard (Sexism, Racism and Oppression) about the ever-changing “terms of oppression.” Brittan and Maynard write:

the terms of oppression are not only dictated by history, culture, and the sexual and social division of labor. They are also profoundly shaped at the site of the oppression, and by the way in which oppressors and oppressed continuously have to renegotiate, reconstruct, and re-establish their relative positions in respect to benefits and power.

As the burkini affords Muslim women the benefit to participate in different arenas of public space, the state recalibrates its boundaries to create new or revive previous sites of oppression. In the case of the burkini, the sites of oppression are both public beaches and women’s bodies – common sites of attempted domination, not only in France, but also the US.

Fanon, Feagin and Sikes all point to institutional racism as an engine that fuels white supremacy and its policies of discrimination. As Feagin and Sikes observe, these:

recurring encounters with white racism can be viewed as a series of “life crises,” often similar to other serious life crises, such as the death of a loved one, that disturb an individual’s life trajectory.

The photos of Siam capture the unfolding of life crisis and illustrate the power of institutional racism to inflict both individual and collective traumas.

Julia Lipkins is an archivist and MA candidate in American Studies at The Graduate Center, CUNY. 

(View original at https://thesocietypages.org/socimages)

CryptogramTesla Model S Hack

Impressive remote hack of the Tesla Model S.

Details. Video.

The vulnerability has been fixed.

Remember, a modern car isn't an automobile with a computer in it. It's a computer with four wheels and an engine. Actually, it's a distributed 20-400-computer system with four wheels and an engine.

Worse Than FailureCache Congestion


Recently, we featured the story of Alex, who worked in a little beach town trying to get seasonal work. But Alex isn't the only one with a job that depended entirely on the time of year.

For most seasonal work in IT, it's the server load that varies. Poor developers can get away with inefficient processes for three quarters of a year, only to have it bite them with a vengeance once the right season rolls around. Patrick, a Ruby developer, joined an educational technology company at the height of revision season. Their product, which consisted of two C#/Xamarin cross-platform mobile apps and one Ruby/Rails back-end server, was receiving its highest possible traffic rates. On his first day at the office, the entire tech team was called into a meeting with the CEO, Gregory, to address the problem.

Last year, the dev team had been at a similar meeting, facing similar slowness. Their verdict: there was nothing for it but to rewrite the app. The company had, surprisingly, gone in for it, giving them 6 months with no task but to refactor the app so they'd never face this kind of slowdown again. Now that the busy season had returned, Gregory was furious, and rightly so. The app was no faster than it had been last year.

"I don't want to yell at anyone," boomed Gregory, "but we spent 6 months rewriting, not adding any new features—and now, if anything, the app is slower than it was before! I'm not going to tell you how to do your jobs, because I don't know. But I need you to figure out how to get things faster, and I need you to figure it out in the next 2 weeks."

After he left, the devs sat around brainstorming the source of the problem.

"It's Xamarin," said Diego, the junior iOS Dev. "It's hopelessly unperformant. We need to rewrite the apps in Swift."

"And lose our Android customer base?" responded Juan, the senior Mobile Dev. "The problem isn't Xamarin, it's the architecture of the local database leading to locking problems. All we have to do is rewrite that from scratch. It'll only take a month or so."

"But exam season will be over in a month. We only have two weeks!" cried Rick, the increasingly fraught tech lead.

Patrick piped up, hoping against hope that he could cut through the tangled knot of bull and blame. "Could it be a problem with the back end?"

"Nah, the back end's solid," came the unanimous reply.

When they were kicked out of the meeting room, lacking a plan of action and more panicked than ever, Patrick sidled up to Rick. "What would you like me to work on? I'm a back end dev, but it sounds like it's the front end that needs all the work."

"Just spend a couple of weeks getting to grips with the codebase," Rick replied. "Once exam season is over we'll be doing some big rewrites, so the more you know the code the better."

So Patrick went back to his desk, put his head down, and started combing through the code.

This is a waste of time, he told himself. They said it was solid. Well, maybe I'll find something, like some inefficient sort.

At first, he was irritated by the lack of consistent indentation. It was an unholy mess, mixing tabs, two spaces, and four spaces liberally. This seriously needs a linter, he thought to himself.

He tried to focus on the functionality, but even that was suspect. Whoever had written the backend clearly hadn't known much about the Rails framework. They'd built in lots of their own "smart" solutions for problems that Rails already solved. There was a test suite, but it had patchy coverage at best. With no CI in place, lots of the tests were failing, and had clearly been failing for over a year.

At least I found something to do, Patrick told himself, rolling up his sleeves.

While the mobile devs worked on rebuilding the apps, Patrick started fixing the tests. They were already using Github, so it was easy to hook up Travis CI so that code couldn't be merged until the tests passed. He adding Rubocop to detect and correct style inconsistencies, and set about tidying the codebase. He found that the tests took a surprisingly long time to run, but he didn't think much of it until Rick called him over.

"Do you know anything about Elastic Beanstalk auto-scaling? Every time we make a deployment to production, it goes a bit haywire. I've been looking at the instance health, and they're all pushing 100% CPU. I think something's failing out, but I'm not sure what."

"That's odd," Patrick said. "How many instances are there in production?"

"About 15."

Very odd. 15 beefy VMs, all running at > 90% CPU? On closer inspection, they were all working furiously, even during the middle of the night when no one was using the app.

After half a day of doing nothing but tracing the flow, Patrick found an undocumented admin webpage tacked onto the API that provided a ton of statistics about something called Delayed Job. Further research revealed it to be a daemon-based async job runner that had a couple of instances running on every web server VM. The stats page showed how many jobs there were in the backlog—in this case, about half a million of them, and increasing by the second.

How can that work? thought Patrick. At peak times, the only thing this app does is create a few jobs per second to denormalise data. Those should take a fraction of a second to run. There's no way the queue should ever grow this big!

He reported back to Rick, frowning. "I think I've found the source of the CPU issue," he said, pointing at the Delayed Job queue. "All server resources are being chewed up by this massive queue. Are you sure this has nothing to do with the apps being slow? If it weren't for these background jobs, the server would be much more performant."

"No way," replied Rick. "That might be a contributing factor, but the problem is definitely with the apps. We're nearly finished rewriting the local database layer, you'll see real speedups then. See if you can find out why these jobs are running so slowly in the meantime, though. It's not like it'll hurt."

Skeptical, Patrick returned to his desk and went hunting for the cause of the problem. It didn't take long. Near the top of most of the models was a line like this: include CachedModel. This is Ruby's module mixin syntax; CachedModel was mixed into just about every model, forming a sort of core backbone for the data layer. The module looked like this:


module CachedModel
  extend ActiveSupport::Concern

  included do
    after_save :delete_cache
    after_destroy :delete_cache
  end

  # snip

  def delete_cache
    Rails.cache.delete_matched("#{self.class}/#{cache_id}/*")
    Rails.cache.delete_matched("#{self.class}/index/*")
    # snip
  end
end

Every time a model was saved or destroyed, the delete_cache method was called. This method performed a wildcard string search on every key in the cache (ElastiCache in staging and production, flat files in dev and test), deleting the keys that matched. And of course, a model was saved after every INSERT or UPDATE statement, and destroyed on every DELETE. That added up to a lot of delete_cache calls.
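The cost is easy to reproduce outside Rails. Below is a toy, plain-Ruby sketch (ToyCache, its key scheme, and the counters are all invented for illustration; delete_matched here only mimics the shape of the Rails API): a glob purge has to inspect every key in the store, while a keyed delete would touch exactly one entry.

```ruby
# A toy cache demonstrating why wildcard purges are expensive: every
# delete_matched call scans the entire keyspace. Names are illustrative.
class ToyCache
  attr_reader :keys_scanned

  def initialize
    @store = {}
    @keys_scanned = 0
  end

  def write(key, value)
    @store[key] = value
  end

  # Mimics Rails.cache.delete_matched: a full scan of the keyspace per call.
  def delete_matched(glob)
    @store.keys.each do |key|
      @keys_scanned += 1
      @store.delete(key) if File.fnmatch(glob, key)
    end
  end

  def size
    @store.size
  end
end

cache = ToyCache.new
10_000.times { |i| cache.write("User/#{i}/profile", "cached-#{i}") }

# Saving a single record triggers a purge that walks all 10,000 keys
# just to remove one of them.
cache.delete_matched("User/42/*")
```

With a callback like this firing on every save and destroy, each write to the database multiplies into a scan of the whole cache, which is exactly the workload the Delayed Job instances were drowning in.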

As an experiment, Patrick cleared out the delete_cache method and ran the test suite. He did a double-take. Did I screw it up? he wondered, and ran the tests again. The result stood: what had once taken 2 minutes on the CI server now completed in 11 seconds.

Why the hell were they using such a monumentally non-performant cache clearing method?! he wondered. Morbidly curious, he looked for where the cache was written to and read using this pattern of key strings and found ... that it wasn't. The caching mechanism had been changed 6 months previously, during the big rewrite. This post-save callback trawled painfully slowly through every key in the cache and never found anything.

Patrick quietly added a pull request to delete the CachedModel module and every reference to it. Once deployed to production, the 15 servers breezed through the backlog processing jobs over the weekend, and then auto-scaled down to a mere 3 instances: 2 comfortably handling the traffic, with another to avoid lag in scaling. There was a noticeable impact on performance of the apps now that more resources were available, as the server endpoints were significantly more responsive. Or at least, the impact was noticeable to Patrick. The rest of the tech team were too busy trying to work out why their ground-up rewrite of the app database layer was benchmarking slower than the original. Before they figured it out, exam season was over for another year, and performance stopped being a priority.


,

Google AdsenseJoin #AskAdSense on Google+ and Twitter

We’ve expanded AdSense support to our English AdSense Twitter and Google+ pages. Join our weekly #AskAdSense office hours and speak directly with our support specialists on topics like: ad placements, mobile implementation, account activation, account suspension, ad formats, and much more.


#AskAdSense office hours will be held every Thursday morning at 9:30am Pacific Daylight Time, beginning September 29th, 2016. Participating is easy:
  1. Follow AdSense on Twitter and Google+ 
  2. Tweet, post, comment, or reply to AdSense on Twitter or Google+ asking your question during the office hours. 
  3. Please do not provide personally identifiable information in your tweets or comments.
  4. If you can’t attend during our office hour times, be sure to use #AskAdSense in your tweet, post, comment or reply to AdSense and we’ll do our best to respond during our weekly office hours.


On October 27th, John Brown, Head of Publisher Policy Communications for Google, will be joining our office hours to provide transparency into our program policies. John is actively involved with the AdSense community helping to ensure that we continue to make a great web and advertising experience. You can also follow John on the SearchEngineJournal.com column "Ask the AdSense Guy" to learn more about Google ad network policies, processes, and best practices.

AdSense strives to provide many ways to help you when you need it, and we're happy to extend this to our Twitter and Google+ profiles. Be sure to follow us; we look forward to speaking with you there.


Posted by: Jay Castro from the AdSense Team

Planet DebianVincent Sanders: If I see an ending, I can work backward.

Now while I am sure Arthur Miller was referring to writing a play when he said those words they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology, it was a solution to an immediate problem which it has never since been possible to improve.

Chocolate chip cookies are much tastier than HTTP cookies

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between page retrievals. Due to shortcomings in the design of cookies, and implementation details in browsers, this has led to a selection of unwanted side effects. The specific issue I am talking about here is the supercookie, where the super prefix has similar connotations to when it is applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.

As an example, the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid, so the next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed, it will also send it along for any page on another.example.foo.invalid.

A supercookie is simply one where, instead of being limited to one sub-domain (example.foo.invalid), the cookie is set for a top level domain (foo.invalid), so on visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) your web browser gives out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on, and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil, but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level such as .co.uk).

NetSurf cookie manager showing a supercookie

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax, and at the time of writing is around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.
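The PSL format itself is worth a glance: one public suffix per line, `//` comments, a `*` wildcard meaning "every label at this level", and a `!` prefix marking an inversion (an exception to a wildcard). An abridged, illustrative fragment:

```
// Comments introduce each registrar's section.
com
uk
co.uk
// A wildcard rule: every direct sub-domain of ck is a public suffix...
*.ck
// ...except for this inversion, which is itself registrable.
!www.ck
```

A browser consulting this list refuses cookies set for any matching suffix, so example.co.uk may receive cookies but co.uk itself may not.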

A few years ago, with ICANN allowing the great expansion of top level domains, the existing NetSurf supercookie handling was found wanting and I decided to implement a solution using the PSL. At that point in time the database was only 100Kb of source, or 40Kb compressed.

I started by looking at the limited set of existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf's heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices, I came up with my own implementation, which used a Perl script to construct a tree of domains from the PSL in a static array, with the label strings in a separate table. At the time my implementation added 70Kb of read-only data, which I thought reasonable and which allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code, but Perl is much more readily available, is a language already used by our tooling, and we could always simply ship the generated file. As long as the generated file was updated at release time, as we already do for our fallback SSL certificate root set, this would be acceptable.

Wireshark session showing NetSurf sending a co.uk supercookie to bbc.co.uk

I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.

It quickly became evident the PSL generation was broken, and had been for a long time; even worse, somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries, which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase, and mainly serve to make APIs more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system, meaning the embarrassing failure I had found could never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using it, given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations, and uses punycode encoded domain names internally, only translating to unicode representations for display, so duplicating this functionality using other libraries would cost a great deal of resource above the raw data representation.

I put the library together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written and now the trivial test program of the library was weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within each tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me, while building the domain label string table, that if a label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.

The second cause was a little harder to address. The structure I started with to represent nodes in the tree looked reasonable at first glance.

struct pnode {
    uint16_t label_index;      /* index into string table of label */
    uint16_t label_length;     /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, requiring only a modest change to the tree traversal code.

The only issue with this would be the need for a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts: one for the length, and one for a flag indicating that a child data node follows.

union pnode {
    struct {
        uint16_t index;       /* index into string table of label */
        uint8_t length;       /* length of label */
        uint8_t has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index;  /* index of first child node */
        uint16_t node_count;  /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try to pack the label length and has_children flag into a single byte, but such packing will not reduce the size of a node below 32 bits because it is unioned with the child structure.

A possibility of using the spare uint8_t derived by bitfield packing to store an additional label node in three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.

With the changes incorporated, the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefits of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.

This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago, and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

Krebs on SecurityDDoS Mitigation Firm Has History of Hijacks

Last week, KrebsOnSecurity detailed how BackConnect Inc. — a company that defends victims against large-scale distributed denial-of-service (DDoS) attacks — admitted to hijacking hundreds of Internet addresses from a European Internet service provider in order to glean information about attackers who were targeting BackConnect. According to an exhaustive analysis of historic Internet records, BackConnect appears to have a history of such “hacking back” activity.

On Sept. 8, 2016, KrebsOnSecurity exposed the inner workings of vDOS, a DDoS-for-hire or “booter” service whose tens of thousands of paying customers used the service to launch attacks against hundreds of thousands of targets over the service’s four-year history in business.

vDOS as it existed on Sept. 8, 2016.


Within hours of that story running, the two alleged owners — 18-year-old Israeli men identified in the original report — were arrested in Israel in connection with an FBI investigation into the shady business, which earned well north of $600,000 for the two men.

In my follow-up report on their arrests, I noted that vDOS itself had gone offline, and that automated Twitter feeds which report on large-scale changes to the global Internet routing tables observed that vDOS’s provider — a Bulgarian host named Verdina[dot]net — had been briefly relieved of control over 255 Internet addresses (including those assigned to vDOS) as the direct result of an unusual counterattack by BackConnect.

Asked about the reason for the counterattack, BackConnect CEO Bryant Townsend confirmed to this author that it had executed what’s known as a “BGP hijack.” In short, the company had fraudulently “announced” to the rest of the world’s Internet service providers (ISPs) that it was the rightful owner of the range of those 255 Internet addresses at Verdina occupied by vDOS.

In a post on NANOG Sept. 13, BackConnect’s Townsend said his company took the extreme measure after coming under a sustained DDoS attack thought to have been launched by a botnet controlled by vDOS. Townsend explained that the hijack allowed his firm to “collect intelligence on the actors behind the botnet as well as identify the attack servers used by the booter service.”

Short for Border Gateway Protocol, BGP is a mechanism by which ISPs of the world share information about which providers are responsible for routing Internet traffic to specific addresses. However, like most components built into the modern Internet, BGP was never designed with security in mind, which leaves it vulnerable to exploitation by rogue actors.

BackConnect’s BGP hijack of Verdina caused quite an uproar among many Internet technologists who discuss such matters at the mailing list of the North American Network Operators Group (NANOG).

BGP hijacks are hardly unprecedented, but when they are non-consensual they are either done accidentally or are the work of cyber criminals such as spammers looking to hijack address space for use in blasting out junk email. If BackConnect’s hijacking of Verdina was an example of a DDoS mitigation firm “hacking back,” what would discourage others from doing the same, they wondered?

“Once we let providers cross the line from legal to illegal actions, we’re no better than the crooks, and the Internet will descend into lawless chaos,” wrote Mel Beckman, owner of Beckman Software Engineering and a computer networking consultant in the Los Angeles area. “BackConnect’s illicit action undoubtedly injured innocent parties, so it’s not self defense, any more than shooting wildly into a crowd to stop an attacker would be self defense.”

A HISTORY OF HIJACKS

Townsend’s explanation seemed to produce more questions than answers among the NANOG crowd (read the entire “Defensive BGP Hijacking” thread here if you dare). I grew more curious to learn whether this was a pattern for BackConnect when I started looking deeper into the history of two young men who co-founded BackConnect (more on them in a bit).

To get a better picture of BackConnect’s history, I turned to BGP hijacking expert Doug Madory, director of Internet analysis at Dyn, a cloud-based Internet performance management company. Madory pulled historic BGP records for BackConnect, and sure enough a strange pattern began to emerge.

Madory was careful to caution up front that not all BGP hijacks are malicious. Indeed, my DDoS protection provider — a company called Prolexic Communications (now owned by Akamai Technologies) — practically invented the use of BGP hijacks as a DDoS mitigation method, he said.

In such a scenario, an organization under heavy DDoS attack might approach Prolexic and ask for assistance. With the customer’s permission, Prolexic would use BGP to announce to the rest of the world’s ISPs that it was now the rightful owner of the Internet addresses under attack. This would allow Prolexic to “scrub” the customer’s incoming Web traffic to drop data packets designed to knock the customer offline — and forward the legitimate traffic on to the customer’s site.

Given that BackConnect is also a DDoS mitigation company, I asked Madory how one could reasonably tell the difference between a BGP hijack that BackConnect had launched to protect a client versus one that might have been launched for other purposes — such as surreptitiously collecting intelligence on DDoS-based botnets and their owners?

Madory explained that in evaluating whether a BGP hijack is malicious or consensual, he looks at four qualities: The duration of the hijack; whether it was announced globally or just to the target ISP’s local peers; whether the hijacker took steps to obfuscate which ISP was doing the hijacking; and whether the hijacker and hijacked agreed upon the action.


For starters, malicious BGP attacks designed to gather information about an attacking host are likely to be very brief — often lasting just a few minutes. The brevity of such hijacks makes them somewhat ineffective at mitigating large-scale DDoS attacks, which often last for hours at a time. For example, the BGP hijack that BackConnect launched against Verdina lasted a fraction of an hour, and according to the company’s CEO was launched only after the DDoS attack subsided.

Second, if the party conducting the hijack is doing so for information gathering purposes, that party may attempt to limit the number of ISPs that receive the new routing instructions. This might help an uninvited BGP hijacker achieve the end result of intercepting traffic to and from the target network without informing all of the world’s ISPs simultaneously.

“If a sizable portion of the Internet’s routers do not carry a route to a DDoS mitigation provider, then they won’t be sending DDoS traffic destined for the corresponding address space to the provider’s traffic scrubbing centers, thus limiting the efficacy of any mitigation,” Madory wrote in his own blog post about our joint investigation.

Thirdly, a BGP hijacker who is trying not to draw attention to himself can “forge” the BGP records so that it appears that the hijack was performed by another party. Madory said this forgery process often fools less experienced investigators, but that ultimately it is impossible to hide the true origin of forged BGP records.

Finally, in BGP hijacks that are consensual for DDoS mitigation purposes, the host under attack stops “announcing” to the world’s ISPs that it is the rightful owner of an address block under siege at about the same time the DDoS mitigation provider begins claiming it. When we see BGP hijacks in which both parties are claiming in the BGP records to be authoritative for a given swath of Internet addresses, Madory said, it’s less likely that the BGP hijack is consensual.

Madory and KrebsOnSecurity spent several days reviewing historic records of BGP hijacks attributed to BackConnect over the past year, and at least three besides the admitted hijack against Verdina strongly suggest that the company has engaged in this type of intel-gathering activity previously. The strongest indicator of a malicious and non-consensual BGP hijack, Madory said, were the ones that included forged BGP records.

Working together, Madory and KrebsOnSecurity identified at least 17 incidents during that time frame that were possible BGP hijacks conducted by BackConnect. Of those, five included forged BGP records. One was an hours-long hijack against Ghostnet[dot]de, a hosting provider in Germany.

Two other BGP hijacks from BackConnect that included spoofed records were against Staminus Communications, a competing DDoS mitigation provider and a firm that employed BackConnect CEO Townsend for three years as senior vice president of business development until his departure from Staminus in December 2015.

“This hijack wasn’t conducted by Staminus. It was BackConnect posing as Staminus,” Dyn’s Madory concluded.

Two weeks after BackConnect hijacked the Staminus routes, Staminus was massively hacked. Unknown attackers, operating under the banner “Fuck ‘Em All,” reset all of the configurations on the company’s Internet routers, and then posted online Staminus’s customer credentials, support tickets, credit card numbers and other sensitive data. The intruders also posted to Pastebin a taunting note ridiculing the company’s security practices.

BackConnect's apparent hijack of address space owned by Staminus Communications on Feb. 20, 2016. Image: Dyn.


POINTING FINGERS

I asked Townsend to comment on the BGP hijacks identified by KrebsOnSecurity and Dyn as having spoofed source information. Townsend replied that he could not provide any insight as to why these incidents occurred, noting that he and the company’s chief technology officer — 24-year-old Marshal Webb — only had access and visibility into the network after the company BackConnect Inc. was created on April 27, 2016.

According to Townsend, the current BackConnect Inc. is wholly separate from BackConnect Security LLC, which is a company started in 2014 by two young men: Webb and a 19-year-old security professional named Tucker Preston. In April 2016, Preston was voted out of the company by Webb and Townsend and forced to sell his share of the company, which was subsequently renamed BackConnect Inc.

“Before that, the original owner of BackConnect Security LLC was the only one that had the ability to access servers and perform any type of networking commands,” he explained. “We had never noticed these occurred until this last Saturday and the previous owner never communicated anything regarding these hijacks. Wish I could provide more insight, but Marshal and I do not know the reasons behind the previous owners decision to hijack those ranges or what he was trying to accomplish.”

In a phone interview, Preston told KrebsOnSecurity that Townsend had little to no understanding about the technical side of the business, and was merely “a sales guy” for BackConnect. He claims that Webb absolutely had and still has the ability to manipulate BackConnect’s BGP records and announcements.

Townsend countered that Preston was the only network engineer at the company.

“We had to self-learn how to do anything network related once the new company was founded and Tucker removed,” he said. “Marshal and myself didn’t even know how to use BGP until we were forced to learn it in order to bring on new clients. To clarify further, Marshal did not have a networking background and had only been working on our web panel and DDoS mitigation rules.”

L33T, LULZ, W00W00 AND CHIPPY

Preston said he first met Webb in 2013, after the latter admitted to launching DDoS attacks against one of Preston’s customers at the time. Webb had acquired a somewhat sketchy reputation by then, having been fingered as a low-skilled hacker who went by the nicknames “m_nerva” and “Chippy1337.”

Webb, whose Facebook alias is “lulznet,” was publicly accused in 2011 by the hacker group LulzSec of snitching on the activities of the group to the FBI, claiming that information he shared with law enforcement led to the arrest of a teen hacker in England associated with LulzSec. Webb has publicly denied being an informant for the FBI, but did not respond to requests for comment on this story.

LulzSec members claimed that Webb was behind the hacking of the Web site for the video game “Deus Ex.” As KrebsOnSecurity noted in a story about the Deus Ex hack, the intruder defaced the gaming site with the message “Owned by Chippy1337.”

The defacement message left on deusex.com.


I was introduced to Webb at the Defcon hacking convention in Las Vegas in 2014. Since then, I have come to know him a bit more as a participant of w00w00, an invite-only Slack chat channel populated mainly by information security professionals who work in the DDoS mitigation business. Webb chose the handle Chippy1337 for his account in that Slack channel.

At the time, Webb was trying to convince me to take another look at Voxility, a hosting provider that I’ve previously noted has a rather checkered history and one that BackConnect appears to rely upon exclusively for its own hosting.

In our examination of BGP hijacks attributed to BackConnect, Dyn and KrebsOnSecurity identified an unusual incident in late July 2016 in which BackConnect could be seen hijacking an address range previously announced by Datawagon, a hosting provider with a rather dodgy reputation for hosting spammers and DDoS-for-hire sites.

That address range previously announced by Datawagon included the Internet address 1.3.3.7, which is hacker “leet speak” for the word “leet,” or “elite.” Interestingly, on the w00w00 DDoS discussion Slack channel I observed Webb (Chippy1337) offering other participants in the channel vanity addresses and virtual private connections (VPNs) ending in 1.3.3.7. In the screen shot below, Webb can be seen posting a screen shot demonstrating his access to the 1.3.3.7 address while logged into it on his mobile phone.

Webb, logged into the w00w00 DDoS discussion channel using his nickname "chippy1337," demonstrating that his mobile phone connection was being routed through the Internet address 1.3.3.7, which BackConnect BGP hijacked in July 2016.


THE MONEY TEAM

The Web address 1.3.3.7 currently does not respond to browser requests, but it previously routed to a page listing the core members of a hacker group calling itself the Money Team. Other sites also previously tied to that Internet address include numerous DDoS-for-hire services, such as nazistresser[dot]biz, exostress[dot]in, scriptkiddie[dot]eu, packeting[dot]eu, leet[dot]hu, booter[dot]in, vivostresser[dot]com, shockingbooter[dot]com and xboot[dot]info, among others.

The Money Team comprised a group of online gaming enthusiasts of the massively popular game Counterstrike, and the group’s members specialized in selling cheats and hacks for the game, as well as various booter services that could be used to knock rival gamers offline.

Datawagon’s founder is an 18-year-old American named CJ Sculti, whose 15 minutes of fame came last year in a cybersquatting dispute after he registered the domain dominos.pizza. A cached version of the Money Team’s home page saved by Archive.org lists CJ at the top of the member list, with “chippy1337” as the third member from the top.

The MoneyTeam's roster as of November 2015. Image: Archive.org.


Asked why he chose to start a DDoS mitigation company with a kid who was into DDoS attacks, Preston said he got to know Webb over several years before teaming up with him to form BackConnect LLC.

“We were friends long before we ever started the company together,” Preston said. “I thought Marshal had turned over a new leaf and had moved away from all that black hat stuff. He seem to stay true to that until we split and he started getting involved with the Datawagon guys. I guess his lulz mentality came back in a really stupid way.”

Townsend said Webb was never an FBI informant, and was never arrested for involvement with LulzSec.

“Only a search warrant was executed at his residence,” Townsend said. “Chippy is not a unique handle to Marshal and it has been used by many people. Just because he uses that handle today doesn’t mean any past chippy actions are his doing. Marshal did not even go by Chippy when LulzSec was in the news. These claims are completely fabricated.”

As for the apparent Datawagon hijack, Townsend said Datawagon gave BackConnect permission to announce the company’s Internet address space but later decided not to become a customer.

“They were going to be a client and they gave us permission to announce that IP range via an LOA [letter of authorization]. They did not become a client and we removed the announcement. Also note that the date of the screen shot you present of Marshal talking about the 1.3.3.7. is not even the same as when we announced Datawagons IPs.”

SOMETHING SMELLS BAD

When vDOS was hacked, its entire user database was leaked to this author. Among the more active users of vDOS in 2016 was a user who went by the username “pp412” and who registered in February 2016 using the email address mn@gnu.so.

The information about who originally registered the gnu.so domain has long been hidden behind WHOIS privacy records. But for several months in 2015 and 2016 the registration records show it was registered to a Tucker Preston LLC. Preston denies that he ever registered the gnu.so domain, and claims that he never conducted any booter attacks via vDOS. However, Preston also was on the w00w00 Slack channel along with Webb, and registered there using the email address tucker@gnu.so.

But whoever owned that pp412 account at vDOS was active in attacking a large number of targets, including multiple assaults on networks belonging to the Free Software Foundation (FSF).

Logs from the hacked vDOS attack database show the user pp412 attacked the Free Software Foundation in May 2016.

Lisa Marie Maginnis, until very recently a senior system administrator at the FSF, said the foundation began evaluating DDoS mitigation providers in the months leading up to its LibrePlanet 2016 conference in the third week of March. The organization had never suffered any real DDoS attacks to speak of previously, but NSA whistleblower Edward Snowden was slated to speak at the conference, and the FSF was concerned that someone might launch a DDoS attack to disrupt the streaming of Snowden’s keynote.

“We were worried this might bring us some extra unwanted attention,” she said.

Maginnis said the FSF had looked at BackConnect and other providers, but that it ultimately decided it didn’t have time to do the testing and evaluation required to properly vet a provider prior to the conference. So the organization tabled that decision. As it happened, the Snowden keynote was a success, and the FSF’s fears of a massive DDoS never materialized.

But all that changed in the weeks following the conference.

“The first attack we got started off kind of small, and it came around 3:30 on a Friday morning,” Maginnis recalled. “The next Friday at about the same time we were hit again, and then the next and the next.”

The DDoS attacks grew bigger with each passing week, she said, peaking at more than 200 Gbps — more than enough to knock large hosting providers offline, let alone individual sites like the FSF’s. When the FSF’s Internet provider succeeded in blacklisting the addresses doing the attacking, the attackers switched targets and began going after larger-scale ISPs further upstream.

“That’s when our ISP told us we had to do something because the attacks were really starting to impact the ISP’s other customers,” Maginnis said. “Routing all of our traffic through another company wasn’t exactly an ideal situation for the FSF, but the other choice was we would just be disconnected and there would be no more FSF online.”

In August, the FSF announced that it had signed up with BackConnect for DDoS protection, in part because the foundation uses only free software to perform its work and BackConnect advertises “open source DDoS protection and security.” BackConnect also agreed to provide the service without charge.

The FSF declined to comment for this story. Maginnis said she can’t be sure whether the foundation will continue to work with BackConnect. But she said the timing of the attacks is suspicious.

“The whole thing just smells bad,” she said. “It does feel like there could be a connection between the DDoS and BackConnect’s timing to approach clients. On the other hand, I don’t think we received a single attack until Tucker [Preston] left BackConnect.”

DDoS attacks are rapidly growing in size, sophistication and disruptive impact, presenting a clear and present threat to online commerce and free speech alike. Since reporting about the hack of vDOS and the arrest of its proprietors nearly two weeks ago, KrebsOnSecurity.com has been under near-constant DDoS attack. One assault this past Sunday morning maxed out at more than 210 Gbps — the largest assault on this site to date.

Addressing the root causes that contribute to these attacks is a complex challenge that requires cooperation, courage and ingenuity from a broad array of constituencies — including ISPs, hosting providers, policy and hardware makers, and even end users.

In the meantime, some worry that as the disruption and chaos caused by DDoS attacks continues to worsen, network owners and providers may be increasingly tempted to take matters into their own hands and strike back at their assailants.

But this is almost never a good idea, said Rich Kulawiec, an anti-spam activist who is active on the NANOG mailing list.

“It’s tempting (and even trendy these days in portions of the security world which advocate striking back at putative attackers, never mind that attack attribution is almost entirely an unsolved problem in computing),” Kulawiec wrote. “It’s emotionally satisfying. It’s sometimes momentarily effective. But all it really does [is] open up still more attack vectors and accelerate the spiral to the bottom.”

KrebsOnSecurity would like to thank Dyn and Doug Madory for their assistance in researching the technical side of this story. For a deep dive into the BGP activity attributed to BackConnect, check out Madory’s post, BackConnect’s Suspicious Hijacks.

Planet DebianGunnar Wolf: Proposing a GR to repeal the 2005 vote for declassification of the debian-private mailing list

For the non-Debian people among my readers: The following post presents bits of the decision-taking process in the Debian project. You might find it interesting, or terribly dull and boring :-) Proceed at your own risk.

My reason for posting this entry is to get more people to read the accompanying options for my proposed General Resolution (GR), and have as full a ballot as possible.

Almost three weeks ago, I sent a mail to the debian-vote mailing list. I'm quoting it here in full:

Some weeks ago, Nicolas Dandrimont proposed a GR for declassifying
debian-private[1]. In the course of the following discussion, he
accepted[2] Don Armstrong's amendment[3], which intended to clarify the
meaning and implementation regarding the work of our delegates and the
powers of the DPL, and recognizing the historical value that could lie
within said list.

[1] https://www.debian.org/vote/2016/vote_002
[2] https://lists.debian.org/debian-vote/2016/07/msg00108.html
[3] https://lists.debian.org/debian-vote/2016/07/msg00078.html

In the process of the discussion, several people objected to the
amended wording, particularly to the fact that "sufficient time and
opportunity" might not be sufficiently bound and defined.

I am, as some of its initial seconders, a strong believer in Nicolas'
original proposal; repealing a GR that was never implemented in the
slightest way basically means the Debian project should stop lying,
both to itself and to the whole free software community within which
it exists, about something that would be nice but is effectively not
implementable.

While Don's proposal is a good contribution, given that in the
aforementioned GR "Further Discussion" won 134 votes against 118, I
hereby propose the following General Resolution:

=== BEGIN GR TEXT ===

Title: Acknowledge that the debian-private list will remain private.

1. The 2005 General Resolution titled "Declassification of debian-private
   list archives" is repealed.
2. In keeping with paragraph 3 of the Debian Social Contract, Debian
   Developers are strongly encouraged to use the debian-private mailing
   list only for discussions that should not be disclosed.

=== END GR TEXT ===

Thanks for your consideration,
--
Gunnar Wolf
(with thanks to Nicolas for writing the entirety of the GR text ;-) )

Yesterday, I spoke with the Debian project secretary, who confirmed my proposal has reached enough Seconds (that is, we have reached five people wanting the vote to happen), so I could now formally do a call for votes. Thing is, there are two other proposals I feel are interesting, and should be part of the same ballot, and both address part of the reasons why the GR initially proposed by Nicolas didn't succeed:

So, once more (and finally!), why am I posting this?

  • To invite Iain to formally propose his text as an option to mine
  • To invite more DDs to second the available options
  • To publicize the ongoing discussion

I plan to do the formal call for votes by Friday 23.
[update] Kurt informed me that the discussion period started yesterday, when I received the 5th second. The minimum discussion period is two weeks, so I will be doing a call for votes at or after 2016-10-03.

Planet DebianMichal Čihař: wlc 0.6

wlc 0.6, a command line utility for Weblate, has just been released. There have been some minor fixes, but the most important news is that Windows and OS X are now supported platforms as well.

Full list of changes:

  • Fixed error when invoked without command.
  • Tested on Windows and OS X (in addition to Linux).

wlc is built on the API introduced in Weblate 2.6, which is still under development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version is 2.7 (it is now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.


Planet DebianReproducible builds folks: Reproducible Builds: week 73 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 11 and Saturday September 17 2016:

Toolchain developments

Ximin Luo started a new series of tools called (for now) debrepatch, to make it easier to automate checks that our old patches to Debian packages still apply to newer versions of those packages, and still make these reproducible.

Ximin Luo updated one of our few remaining patches for dpkg in #787980 to make it cleaner and more minimal.

The following tools were fixed to produce reproducible output:

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

The following 3 packages were not changed, but have become reproducible due to changes in their build-dependencies: jaxrs-api python-lua zope-mysqlda.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

462 package reviews have been added, 524 have been updated and 166 have been removed in this week, adding to our knowledge about identified issues.

25 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (10)
  • Filip Pytloun (1)
  • Santiago Vila (1)

diffoscope development

A new version of diffoscope 60 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Various packaging and testing improvements.
  • HW42:
    • minor wording fixes
  • Reiner Herrmann:
    • minor wording fixes

It also included changes from previous weeks; see either the changes or commits linked above, or previous blog posts 72 71 70.

strip-nondeterminism development

New versions of strip-nondeterminism 0.027-1 and 0.028-1 were uploaded to unstable by Chris Lamb. It included contributions from:

  • Chris Lamb:
    • Testing improvements, including better handling of timezones.

disorderfs development

A new version of disorderfs 0.5.1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Andrew Ayer and Chris Lamb:
    • Support relative paths for ROOTDIR; it no longer needs to be an absolute path.
  • Chris Lamb:
    • Print the behaviour (shuffle/reverse/sort) on startup to stdout.

It also included changes from previous weeks; see either the changes or commits linked above, or previous blog posts 70.

Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

CryptogramMore on the Equities Debate

This is an interesting back-and-forth: initial post by Dave Aitel and Matt Tait, a reply by Mailyn Filder, a short reply by Aitel, and a reply to the reply by Filder.

Worse Than FailureCodeSOD: Exceptional Condition

“This is part of a home-grown transpiler…”, Adam wrote. I could stop there, but this particular transpiler has a… unique way of deciding if it should handle module imports.

Given a file, this Groovy code will check each line of the file to see if it includes an import line, and then return true or false, as appropriate.

private static boolean shouldConvert(String headerPath) {
    File headerFile = new File(headerPath)
    try {
        headerFile.eachLine {
            if (it.contains("MODULE_INCLUDE core.") ||
                    it.contains("import core.")) {
                throw new Exception("true, we found a good import")
            }
        }
        // Never found a valid MODULE_INCLUDE or import; don't convert it
        throw new Exception("false, no good import found")
    } catch (Exception e) {
        return e.getMessage().contains("true")
    }
}

Now, I’m no Groovy expert, but I suspect that there’s probably an easier way to return a boolean value without throwing exceptions and checking the content of the exception message.
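For comparison, here is a minimal sketch of the same check using Groovy's built-in `any` method (the method name `shouldConvert` is kept from the original; this is one idiomatic alternative, not necessarily how the transpiler should be structured):

```groovy
// Same check, without exceptions as control flow: 'any' stops at the
// first matching line and returns the boolean directly.
boolean shouldConvert(String headerPath) {
    new File(headerPath).readLines().any { line ->
        line.contains("MODULE_INCLUDE core.") || line.contains("import core.")
    }
}
```

No thrown exceptions, no message parsing, and the early exit the original was faking with `throw` comes for free.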

Adam adds: “I especially envisioned changing the second exception to something like ‘false, it is true that we didn’t find a good import.’”

[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

,

TED“What if?” The talks of TED@UPS

Wanis Kabbaj speaks at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if traffic flowed through our streets as smoothly and powerfully as blood flowed through our veins? Wanis Kabbaj speaks at TED@UPS, September 15, 2016, in Atlanta. Photo: Jason Hales / TED

At the foundation of every significant transformation is a question: “What if?”

These two words unlock the imagination and invite us to explore possibilities. A sentiment of hope, of new ways of thinking, of dreaming and discovery, “What if?” unearths answers waiting to be found.

At the second installment of TED@UPS — part of the TED Institute, held on September 15, 2016, at SCADShow in Atlanta, Georgia — 14 speakers and performers dared to ask: What if we used our collective talents, knowledge and insights to provide the spark to an idea or movement that could make a positive impact on the world?

After opening remarks from Teresa Finley, UPS’s chief marketing and global business services officer, the talks of Session 1 began.

“The blood in our veins, the cars on our streets. Biology has all the attributes of a transportation genius,” says UPS’s director of global strategy in healthcare logistics (and transportation geek) Wanis Kabbaj. Take our cardiovascular system, for example, in which blood vessels extend from our heart to our outermost extremities, forming a transportation system that is three-dimensional and remarkably effective. If you compare this to our highways and the stop-and-go traffic of rush hour, Kabbaj says, you’ll see how much better biology is at moving things around. He asks us to consider how we might look within ourselves to design the transportation systems of the future, and he previews exciting concepts like suspended magnetic pods, modular buses and flying urban taxis that promise to change how we travel from one point to another.

The most dangerous animal in the world. Each year, mosquitoes kill more than one million people by spreading diseases like malaria, dengue fever, West Nile and Zika. While vaccines are the best weapon against this epidemic, 50 percent of vaccines go to waste due to improper handling and challenging logistics. Logistician Katie Francfort came to the TED@UPS stage with an inventive idea to use the problem to fight the problem: why not use bioengineering to build mosquitoes that carry life-saving vaccines?

In defense of emojis. Marketing analyst and avid emoji-defender Jenna Schilstra knows firsthand how ambiguous digital communication can be, even with loved ones. A simple emoji can help clarify and amplify subtext so that we can better understand each other, but their benefits extend far beyond clarifying the dreaded “K.” She shows how emojis have been used in new ways, like helping abused children describe complex emotions to helpline service workers, or like making expression more accessible to people on the autism spectrum. Our attachment to emojis makes sense, says Schilstra, when you remember that they’re part of a long lineage of visual communication that began 40,000 years ago with the first cave art. However they continue to evolve, she’s confident that emojis “will not only provide the opportunity to leverage an age-old system of communication, but will profoundly deepen our emotional connections.”

Jenna Schilstra speaks at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if we recognized that the key to global communication is … emojis? Jenna Schilstra speaks at TED@UPS, September 15, 2016 in Atlanta. Photo: Jason Hales / TED

Rediscovering heritage through dance. Coming to America at a young age from Indonesia, marketing manager Amelia Laytham decided to shed her language, accent, traditions and culture in order to fit into her new role as an “ordinary American teenager.” It wasn’t until she had her own children that she realized that there was enough room in her single identity for both her Indonesian and American selves. She performs a traditional Balinese Birds of Paradise dance to showcase her heritage and prove that it’s possible to live in duality and still be a whole.

Doing more with less in healthcare. We’ve made great progress through innovation in healthcare, but we’ve also made keeping each other healthy very complicated — and expensive. Over the last 20 years, healthcare spending doubled in the US, while our lifespans increased by only three years. Soon, “we won’t be able to afford the healthcare system as we know it today,” says UPS’s director of healthcare marketing and strategy Jan Denecker. “We’ll have to find new ways to keep healthcare affordable.” For inspiration, Denecker looks to the developing world, where constraints on resources have caused the healthcare industry to adopt a mindset of doing more with less. He provides three lessons for healthcare innovation, inspired by these places: look for alternatives, like replacing a $30,000 surgical drill with a $450 (sterilized and protected) power drill; keep it simple, like creating a stripped-down baby incubator that costs 70 percent less than a traditional one; and search for the answers that are right under our noses, like using barcodes to identify patients and their needs.

Why a Mexico-United States wall would backfire. The United States and Mexico are important trading partners, which means that $1.4 billion in goods crosses the border in both directions each day, sometimes multiple times as part of border-crossing production processes. So … what if we did build a 2,000-mile wall between the two countries to prevent illegal immigration, as some have suggested? In step-by-step detail, supply chain expert Augie Picado explains how the impact would ripple across the production process, raising the price of thousands of consumer goods, costing millions of jobs on both sides of the border, and ultimately aggravating the problem the wall was meant to solve. In Mexico, 20 percent of the workforce depends on jobs tied to US exports. And when those jobs disappear, Picado asks, “where do you think those out-of-work people will go?”

Augie Picado speaks at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if we looked beyond the heated rhetoric and started counting the true cost of building a Mexico-US border wall? Augie Picado speaks at TED@UPS, September 15, 2016, in Atlanta. Photo: Jason Hales / TED

A global village in your pocket. A smartphone reflects more than a swift leap in technology — hidden within each phone is the story of modern commerce. The animated short “A Global Village in Your Pocket” follows the globetrotting logistics behind an average smartphone, reminding viewers of the complex global networks behind the products we use every day. Watch it here.

Finding a new frequency. Accompanied by drums, bass and a keyboard, UPS package car driver and musician John Bidden closed out the first session with a soulful and energetic performance of his original song, “New Frequency.”

In Session 2, speakers …

Jazzing things up. Accompanied by her three-piece band, jazz vocalist, Atlanta resident and wife of a UPS veteran Karla Harris opened Session 2, treating audiences to a soulful rendition of the Beatles’ “Blackbird.”

Karla Harris performs at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if, all your life, you were only waiting for this moment to arise? Karla Harris covers “Blackbird” by the Beatles at TED@UPS, September 15, 2016, in Atlanta. Photo: Jason Hales / TED

The benefits of choice. As the mother of a 3-year-old and the curator of mammals at Zoo Atlanta, Stephanie Braccini Slade can attest that animals, just like humans, need to make choices to feel in control of their lives. Using her work with a troubled (and quite needy) chimpanzee named Holly as an example, she explains how creating more opportunities for choice in an animal’s environment can create positive behavioral outcomes and improve their quality of life. “I’m not an expert on humans,” Slade says in closing, “but I think we can learn a lot from the animal world.”

Claim property, claim other, claim yourself. Believe it or not, it’s often difficult to get people to claim abandoned funds, whether from a forgotten savings account, an uncashed check or a long-ago refund. Why is this? Unclaimed funds manager Monica Johnson explains that as a society we have developed a throwaway culture to such a degree that we’d rather toss aside even pieces of ourselves than deal with them. How did she come to know this? She shares her own heart-wrenching story of growing up unwanted, to the point where she began to abandon herself. On the TED@UPS stage, Johnson passionately asks that we abandon abandonment and recognize the impact we can make by embracing who we are and what we can do for others.

“When goods do not cross borders, armies do.” International trade expert Romaine Seguin came to TED@UPS with a question: Would the girls from Chibok, Nigeria, who were abducted by Boko Haram in 2014 still be in school today if the conditions that gave rise to the terrorist group had been different? The president of the Americas region at UPS International, Seguin believes that when communities are isolated from the global economy in the way that places like Chibok have been, they risk becoming breeding grounds for terrorist groups. The solution to that isolation: trade, which Seguin says is our most effective weapon against poverty and injustice. To illustrate her point, she tells the story of Deux Mains, a for-profit spinoff of the nonprofit REBUILD globally, which began employing people in Haiti to make sandals out of old tires and eventually caught the attention of Kenneth Cole. Deux Mains has produced more than 2,400 pairs of Kenneth Cole sandals to date, employing one Haitian for every 250 pairs of sandals sold. “When people have jobs, money and security, they don’t feel a need to take other people’s stuff,” Seguin explains. “Trade is a weapon against terrorism. Trade offers hope.”

Romaine Seguin speaks at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if we could help solve global crisis by building trade networks? Romaine Seguin speaks at TED@UPS, September 15, 2016, in Atlanta. Photo: Jason Hales / TED

Building a better address. “Our current address system makes people do the legwork. Why don’t we instead let people put themselves on the map?” asks Mario Paluzzi, a logistics and technology specialist at UPS. He suggests a way forward for precision shipping and delivery, taking inspiration from the developing world. Many people in the developing world lack a traditional street address like 123 Main Street, but more than 90 percent of the population has mobile phones that could give the exact geographical data of a person’s location — which could be used to deliver packages to them. What if we could disrupt an industry “so rooted in its infrastructure that we’re stuck with what we have instead of implementing the best solution?”

The age of exploration. Explorers and mapmakers of the past organized the wild. Today, the pathfinders carrying on the tradition face a new challenge: how to map the community knowledge that will help us find our way in a constantly changing world. The animated short “The Age of Exploration” suggests that we are still not finished discovering the secrets of the planet where we live. Watch it here.

The life lessons of … soap operas. The larger-than-life stories and characters of soap operas may be melodramatic, but to Kate Adams, managing editor of UPS.com, they reflect the intensity and drama of our own lives. “We cycle through tragedy and joy like these characters,” she says. “We cross thresholds, fight demons and find salvation unexpectedly.” Adams spent eight years as assistant casting director for As the World Turns, and she’s distilled four lessons for life and business from these dramas. First, surrender is not an option. Let All My Children’s Erica Kane inspire you as she faces down a grizzly bear. Next, sacrifice your ego … in the same way that Stephanie Forrester of The Bold and the Beautiful dropped her superiority complex and befriended her Valley Girl archenemy. Third, evolution is real. Just as soap opera characters are continually recast, we can evolve, too. Finally, resurrection is possible. Soap opera characters like Stefano DiMera from Days of Our Lives die and come back to life over and over, and we, too, can revive ourselves. “As long as there is breath in your body, it’s never too late to change your story,” Adams says.

Kate Adams speaks at TED@UPS - September 15, 2016 at SCADshow, Atlanta, Georgia. Photo: Jason Hales / TED

What if soap operas could teach us about life? Kate Adams speaks at TED@UPS, September 15, 2016, in Atlanta. Photo: Jason Hales / TED

The future of corporate social responsibility: Give us your data. Data engineer Mallory Soldner laid out three simple, resourceful ways that companies can make real contributions to humanitarian aid: by donating their data, their decision scientists and their technology to gather new data. Because a corporate data set — say, information on the flow of a new product to local markets — could help a nonprofit organization better understand the flow of vaccines and food aid to those same markets. “We can revolutionize the world of humanitarian aid, by bringing the right data to the right decisions,” she says. 

Signed, sealed, and promptly delivered. Karla Harris and John Bidden returned to the TED@UPS stage, wrapping up the show with a lively duet and sing-along version of Stevie Wonder’s “Signed, Sealed, Delivered, I’m Yours.”


Sociological ImagesThe 2016 Presidential Race and the Failed Art of Balance

Modern journalism is reliant on the idea of objectivity. Even when truth is elusive, if journalists write a balanced story, they can be said to have done a good job.

But what if a story doesn’t have two sides? Sometimes journalists continue to write as if they do, as they did in regard to human-caused climate change for a decade. Other times they do so wholly disingenuously, counterposing authoritative voices against ones they know carry no weight with their audience, as they did and still do with coverage of female genital cutting. At still other times, they abandon objectivity altogether, counting on a national consensus so strong that no one could possibly accuse them of being biased, as many did after 9/11.

I think this is the source of some of the discomfort with the media coverage of this election.

What does a journalist do when the editorial board of the Washington Post calls one candidate a “unique threat to American democracy”; the New York Times calls him a “poisonous messenger” appealing to “people’s worst instincts”; the Houston Chronicle calls him “dangerous to the nation and the world,” a man who should “make every American shudder”; and the far-right National Review calls him a “menace”? What does a journalist do when conservative newspapers like the Dallas Morning News call him “horrify[ing]” and endorse a Democrat for president for the first time in almost 100 years? Is this still the right time to be objective? Is this a 9/11 moment?

I suspect that journalists themselves do not know what to do, and so we are seeing all of the strategies playing out. Some are trying hard to hew to the traditional version of balance, but covering asymmetrical candidates symmetrically makes for some odd outcomes, hence accusations of false equivalence and misinforming the public. Some are counting on a consensus, at least on some issues, assuming that things like constitutional rights and anti-bigotry are widespread enough values that they can criticize Trump on these issues without seeming partisan, but it doesn’t always work. Still others are aiming down the middle, offering an imbalanced balance, as when journalists reference the support of David Duke and other white supremacists as their own kind of dog-whistle politics.

Meanwhile, readers each have our own ideas about whether this election deserves “balanced” coverage and what that might look like. And so do, of course, the thousands of pundits, none of whom are accountable to journalistic norms, and the millions of us on social media, sharing our own points of view.

It’s no wonder the election is giving us vertigo. It is itself out of balance, making it impossible for the country to agree on what objectivity looks like. Even the journalists, who are better at it than anyone, are failing. The election has revealed what is always true: that objectivity is a precarious performance, more an art than a science, and one that gains validity only in relation to the socially constructed realities in which we live.

It’s just that our socially constructed reality is suddenly in shambles. Post-truth politics doesn’t give us a leg to stand on, none of us can get a foothold anymore. Internet-era economic realities have replaced the news anchor with free-floating infotainment. Political polarization has ripped the country apart and the edifices we’ve clung to for stability—like the Republican Party—are suddenly themselves on shaky ground. The rise of Trump has made all of this dizzyingly clear.

We’re hanging on for dear life. I fear that journalists can do little to help us now.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureAnnouncements: Sponsor Announcement: Atalasoft

Let’s take a moment to talk about documents. I once worked on an application that needed to generate some documents for Sarbanes-Oxley compliance, and without confessing to too much of a WTF, let’s just say it involved SQL Server Reporting Services, SharePoint, and some rather cryptic web service calls that I’m almost certain have stopped working in the years since I built it. The solution belongs here.

I bring this up, because I’m happy to announce a new sponsor here at TDWTF: Atalasoft, which would have kept me from writing that awkward solution.

Atalasoft makes libraries for working with documents from your .NET applications. There are SDKs for manipulating images, working with PDFs, and mobile SDKs for doing document capture on iOS or Android devices, and WingScan provides interaction with TWAIN scanners right from inside of a web browser. Their products provide zero-footprint document viewing, easy interfaces for constructing and capturing documents, and come with top-tier support for helping you get your application built.

This sponsorship coincides with their latest release, which partners with Abbyy’s FineReader to add OCR support, the ability to interact with Office documents without Office installed, new PDF compression options, and a variety of improvements to their already excellent controls and SDKs.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

CryptogramPeriscope ATM Skimmers

"Periscope skimmers" are the most sophisticated kind of ATM skimmers. They are entirely inside the ATM, meaning they're impossible to notice.

They've been found in the US.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main October 2016 Meeting: Sending Linux to Antarctica, 2012-2017 / Annual General Meeting

Oct 4 2016 18:30
Oct 4 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

• Scott Penrose, Sending Linux to Antarctica: 2012-2017
• Annual General Meeting and lightning talks

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Worse Than FailureAs Time Goes By…

In my formative years, I had experienced such things as Star Trek, and the advent of new games like Pong, Space Invaders and Asteroids. As I ventured out of college and into the Orwellian future of 1984, I began a 3+ decade long sojourn into the world of technology. I mused about the wondrous changes that these new-fangled gadgets would bring to all of our lives. Telescreens that connected us both visually and orally in real time. Big Brother. History could be rewritten. Technology would boldly take us where no one had gone before...

Hollerith cards were replaced with Teletypes, then CRTs and finally flat panel displays. You can still fold, spindle and mutilate a flat panel display; it just takes more effort.

Pneumatic tubes were replaced with email and finally text messages. Try as you might, there's simply no way to text someone a live critter.

Interactive Voice Response systems. Talking to a helpful customer service representative is no longer necessary now that we can listen to a recording droning on. After all, don't you just love doing a depth-first search through 17 sub-menus to get what you want?

ARPANET d/evolved into the internet. Google has eliminated the need to have bookshelves of manuals, or remember anything you've ever posted - because it's all there in perpetuity. Granted, a lot of it is filled with pron, but you don't actually have to look at it!

Programming languages. We went from assembly to FORTRAN to C to C++ to Java/.NET/... to scripting languages. While it's true that auto-GC'd languages make it easier to concentrate on what the program must do instead of interfacing with the machine, VB/PHP/Excel/etc. brought programming within reach of those who should not have it. COBOL lives on (as it turns out, the Enterprise does have a mainframe).

Communication. Snail-mail was slow. Email sped things along, but we got impatient so they invented texting. Apple leapfrogged a great idea, but only for the truly nimble-fingered.

They still haven't gotten dictation-transcription to work properly; we're nowhere near the point of saying: Computer, build me a subroutine to... because the replicator would spit out a submarine.

Security: Challenge-response questions aren't a bad idea, but too often all the allowed questions can have multiple answers, which forces you to write the Q/A down and keep them nearby (I don't have an older cousin, neither of my parents has a middle name, my first pet was twins and the place I met my wife was in a village in a township in a district in a county).

Security: Requirements that vary wildly for the password-make-up, and change-frequency from system to system and company to company (requisite link). Hmm, 4-8/6-12 characters? Numbers/upper/lower case? Subsets of: ~!@#$%^&*()_+-={}[]:;"',.?/) Change it every 4/6/8/12 weeks? Maybe I'll just go with the fail safe PostIt. FWIW: I haven't had to change the password on my bank ATM account in 35 years because I. Don't. Tell. It. To. Anyone.

Now that the government has shown that any device, no matter how secure, can be cracked, we must all realize that encryption, no matter how sophisticated, ain't cutting it...

Security: We could just write everything in Perl; it would be completely secure after 24 hours (even without encryption) as nobody (including the author) would be able to decipher it (missed opportunity).

Editors: edlin, notepad, vi: when they were all you had, they were a blessing. Notepad++, vim, IDEs, etc: big improvements. But with convenience comes dependency. I once had to edit a config file for a co-worker because they couldn't figure out how to edit it on a *nix production system where Emacs wasn't installed!

Smart phones allow you to concentrate on that all-important email/text/call instead of driving. You can play games (like Pokemon-GO) while behind the wheel, so you can crash into a police car.

Of course, how many times have you texted someone about something only to end up sending an auto-corrected variant (Sweetheart, I'm going to duck you tonight).

Smart cars allow your navigation system to blue screen at highway speeds. This happened to my CR-V, and the dealer told me to disconnect the main battery for 30 seconds in order to reboot the car.

The computer can also modify your input on the gas pedal to make the car more efficient. This sounds like a good thing. Unless you stomp the accelerator through the floor (clearly demanding all the power the engine can give) and the computer decides otherwise, which leads to some very WTF looks from the truck driver that almost pancaked you.

Smart appliances: we no longer need to pester our spouses because, while at the supermarket, we can now contact our appliances directly to see if we need this or that. This will inevitably lead to weekly security-updates for our cars and appliances (you know the day is coming when your fridge and coffee maker start to automatically download and install a Windows-10 update).

Games: from Conway's Life to Sim*, Tetris to Angry Birds, the assorted 80's video and arcade games, Wolfenstein/Doom/Quake/etc., and everything that followed. Games have drastically improved over time and provide tremendous entertainment value. They have yet to build a computer that can count the number of hours of sleep lost to these games.

Miniaturization: they spent zillions creating monstrously large flat panel TVs and then zillions more to get us to watch movies on our phones. After they spent zillions making stuff smaller, they flooded those smaller devices with ads for stuff to enlarge things.

These topics were chosen randomly while thinking back on my career and wandering around my house, and of course, there are many more, but rather than having made drastic improvements in our lives, the changes seem oddly even...

On the other hand, I don't recall Scotty ever having to download a Windows update, and Lt. Uhura never got a robo-call from someone in the Federation (Enterprise: if you would like a scan of the 3rd planet of the system in sector 4, press 3), so maybe the future will be brighter after all.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianMike Gabriel: Rocrail changed License to some dodgy non-free non-License

The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.

Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously has been changed to some non-free license (I could not figure out what license, though).

This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a no-option for me back in 2015.

I should have moved on from here on...

Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) if they see any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point was that something strange must have happened in the past: the developers chose the GPL, later stepped back from that decision, and have been hiding the source code from the web entirely ever since.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I had been granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.

The replacement happened with these Git commits:

commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200

    update copyrights

commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200

    update licence

commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200

    update

Getting in touch again, still being really interested and wanting to help...

As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:

Hi Rob,

I just stumbled over this post [3] [link reference adapted for this
blog post], which probably is the one you have referred to above.

It seems that Rocrail contains features that require a key or such
for permanent activation.  Basically, this is allowed and possible
even with the GPL-3+ (although Free Software activists will  not
appreciate that). As the GPL states that people can share the source
code, programmers can  easily deactivate license key checks (and
such) in the code and re-distribute that patchset as they  like.

Furthermore, the current COPYING file is really non-protective at
all. It does not really protect   you as copyright holder of the
code. Meaning, if people crash their trains with your software, you  
could actually be legally prosecuted for that. In theory. Or in the
U.S. ( ;-) ). Main reason for  having a long long license text is to
protect you as the author in case your software causes trouble to
other people. You do not have any warranty disclaimer in your COPYING
file or elsewhere. Really not a good idea.

In that referenced post above, someone also writes about the nuisance
of license discussions in  this forum. I have seen various cases
where people produced software and did not really care for 
licensing. Some ended with a letter from a lawyer, some with some BIG
company using their code  under their copyright holdership and their
own commercial licensing scheme. This is not paranoia,  this is what
happens in the Free Software world from time to time.

A model that might be much more appropriate (and more protective to
you as the author), maybe, is a  dual release scheme for the code. A
possible approach could be to split Rocrail into two editions:  
Community Edition and Professional/Commercial Edition. The Community
Edition must be licensed in a  way that it allows re-using the code
in a closed-source, non-free version of Rocrail (e.g.   MIT/Expat
License or Apache2.0 License). Thus, the code base belonging to the
community edition  would be licensed, say..., as Apache-2.0 and for
the extra features in the Commercial Edition, you  may use any
non-free license you want (but please not that COPYING file you have
now, it really  does not protect your copyright holdership).

The reason for releasing (a reduced set of features of a) software as
Free Software is to extend  the user base. The honey jar effect, as
practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.).
If people could install Rocrail from the Debian / Ubuntu archives
directly, I am  sure that the user base of Rocrail will increase.
There may also be developers popping up showing  an interest in
Rocrail (e.g. like me). However, I know many FLOSS developers (e.g.
like me) that  won't waste their free time on working for a non-free
piece of software (without being paid).

If you follow (or want to follow) a business model with Rocrail, then
keep some interesting  features in the Commercial Edition and don't
ship that source code. People with deep interest may  opt for that.

Furthermore, another option could be dual licensing the code. As the
copyright holder of Rocrail  you are free to juggle with licenses and
apply any license to a release you want. For example, this  can be
interesting for a free-again Rocrail being shipped via Apple's iStore.

Last but not least, as you ship the complete source code with all
previous changes as a Git project  to those who request GitBlit
access, it is possible to obtain all earlier versions of Rocrail. In 
the mail I received with my GitBlit credentials, there was some text
that  prohibits publishing the  code. Fine. But: (in theory) it is
not forbidden to share the code with a friend, for local usage.  This
friend finds the COPYING file, frowns and rewinds back to 2015 where
the license was still  GPL-3+. GPL-3+ code can be shared with anyone
and also published, so this friend could upload the  2015-version of
Rocrail to Github or such and start to work on a free fork. You also
may not want  this.

Thanks for working on this piece of software! It is highly
interesting, and I am still sad that it does not come with a free
license anymore. I won't continue this discussion and move on, unless
you  are interested in any of the above information and ask for more
expertise. Ping me here or directly  via mail, if needed. If the
expertise leads to parts of Rocrail becoming Free Software again, the 
expertise is offered free of charge ;-).

light+love
Mike

Wow, the first time I got moderated somewhere... What an experience!

This experience was really new. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered I had been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...

The reason for wiping my post without any other communication was given as below and quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):

Mike,

I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.

(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post and not replying otherwise with some apology to suppress my words, really is a no-go. And the reason for wiping the rest of the text... Any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.

Now the political part of this blog post...

Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Esp. if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I have come to this morning.

[Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another–this time highly idiotic–wind of change, where free speech may end up as a precious good.]

To other (Debian) Package Maintainers and Railroad Enthusiasts...

With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

Now really moving on...

Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.

I have finally removed the Rocrail source code from my computer again without building and testing the application. I neither have shared the source code with anyone. Neither have I shared the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in this completely...

References

light+love,
Mike

,

Planet DebianGregor Herrmann: RC bugs 2016/37

we're not running out of (perl-related) RC bugs. here's my list for this week:

  • #811672 – qt4-perl: "FTBFS with GCC 6: cannot convert x to y"
    add patch from upstream bug tracker, upload to DELAYED/5
  • #815433 – libdata-messagepack-stream-perl: "libdata-messagepack-stream-perl: FTBFS with new msgpack-c library"
    upload new upstream version (pkg-perl)
  • #834249 – src:openbabel: "openbabel: FTBFS in testing"
    propose a patch (build with -std=gnu++98), later upload to DELAYED/2
  • #834960 – src:libdaemon-generic-perl: "libdaemon-generic-perl: FTBFS too much often (failing tests)"
    add patch from ntyni (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    upload with patch from dkg (pkg-perl)
  • #835412 – src:libzmq-ffi-perl: "libzmq-ffi-perl: FTBFS too much often, makes sbuild to hang"
    add patch from upstream git (pkg-perl)
  • #835731 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    cherry-pick patch from upstream git (pkg-perl)
  • #837055 – src:fftw: "fftw: FTBFS due to bfnnconv.pl failing to execute m-ascii.pl (. removed from @INC in perl)"
    add patch to call require with "./", upload to DELAYED/2, rescheduled to 0-day on maintainer's request
  • #837221 – src:metacity-themes: "metacity-themes: FTBFS: Can't locate debian/themedata.pm in @INC"
    call helper scripts with "perl -I." in debian/rules, QA upload
  • #837242 – src:jwchat: "jwchat: FTBFS: Can't locate scripts/JWCI18N.pm in @INC"
    add patch to call require with "./", upload to DELAYED/2
  • #837264 – src:libsys-info-base-perl: "libsys-info-base-perl: FTBFS: Couldn't do SPEC: No such file or directory at builder/lib/Build.pm line 42."
    upload with patch from ntyni (pkg-perl)
  • #837284 – src:libimage-info-perl: "libimage-info-perl: FTBFS: Can't locate inc/Module/Install.pm in @INC"
    call perl with -I. in debian/rules, upload to DELAYED/2

Planet DebianPaul Tagliamonte: DNSync

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was that I’d have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY events, and sync the records to Route 53.

I pushed it up to my GitHub as DNSync.

PRs welcome!

Rondam RamblingsThe greatest traitors in American history

Who is the biggest traitor in U.S. history?  The usual suspects are John Walker Jr., the Rosenbergs, and of course the venerable favorite whose name has become almost synonymous with treachery, Benedict Arnold.  But today the bar has been raised.  I would like to nominate a new candidate for this ignominious title, the editorial board of the Washington Post, for publishing this editorial

Planet DebianEriberto Mota: Statistics to Choose a Debian Package to Help

In the last week I played a bit with UDD (Ultimate Debian Database). After some experiments I wrote a script to generate a daily report about source packages in Debian. This report is useful for choosing a package that needs help.

The daily report has six sections:

  • Sources in Debian Sid (including orphan)
  • Sources in Debian Sid (only Orphan, RFA or RFH)
  • Top 200 sources in Debian Sid with outdated Standards-Version
  • Top 200 sources in Debian Sid with NMUs
  • Top 200 sources in Debian Sid with BUGs
  • Top 200 sources in Debian Sid with RC BUGs

The first section has several important data points about all source packages in Debian, ordered by last upload to Sid. It is very useful for spotting packages without revisions for a long time. Other interesting data about each package are Standards-Version, packaging format, number of NMUs, among others. Believe it or not, there are packages that were last uploaded to Sid in 2003! (seven packages)

With the report, you can choose an ideal package to do QA uploads, NMUs or to adopt.

Well, if you like to review packages, this report is for you: https://people.debian.org/~eriberto/eriberto_stats.html. Enjoy!

 

Planet DebianNorbert Preining: Fixing packages for broken Gtk3

As mentioned on sunweaver’s blog Debian’s GTK-3+ v3.21 breaks Debian MATE 1.14, Gtk3 is breaking apps all around. But not only Mate, probably many other apps are broken, too, in particular Nemo (the file manager of Cinnamon desktop) has redraw issues (bug 836908), and regular crashes (bug 835043).

gtk-breakage

I have prepared packages for mate-terminal and nemo built from the most recent git sources. The new mate-terminal now does not crash anymore on profile changes (bug 835188), and the nemo redraw issues are gone. Unfortunately, the other crashes of nemo are still there. The apt-gettable repository with sources and amd64 binaries is here:

deb http://www.preining.info/debian/ gtk3fixes main
deb-src http://www.preining.info/debian/ gtk3fixes main

and are signed with my usual GPG key.

Last but not least, I quote from sunweaver’s blog:

Questions

  1. Isn’t GTK-3+ a shared library? This one was rhetorical… Yes, it is.
  2. One that breaks other application with every point release? Well, unfortunately, as experience over the past years has shown: Yes, this has happened several times, so far — and it happened again.
  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

(end of quote)

<rant>
My personal answer to this is: Gtk is strongly related to Gnome, Gnome is strongly related to SystemD, all this is pushed onto Debian users in the usual way of “we don’t care for breaking non-XXX apps” (for XXX in Gnome, SystemD). It is very sad to see this recklessness taking more and more space all over Debian.
</rant>

I finish with another quote from sunweaver’s blog:

already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series

,

Planet Linux AustraliaDave Hall: The Road to DrupalCon Dublin

DrupalCon Dublin is just around the corner. Earlier today I started my journey to Dublin. This week I'll be in Mumbai for some work meetings before heading to Dublin.

On Tuesday 27 September at 1pm I will be presenting my session Let the Machines do the Work. This lighthearted presentation provides some practical examples of how teams can start to introduce automation into their Drupal workflows. All of the code used in the examples will be available after my session. You'll need to attend my talk to get the link.

As part of my preparation for Dublin I've been road testing my session. Over the last few weeks I delivered early versions of the talk to the Drupal Sydney and Drupal Melbourne meetups. Last weekend I presented the talk at Global Training Days Chennai, DrupalCamp Ghent and DrupalCamp St Louis. It was exhausting presenting three times in less than 8 hours, but it was definitely worth the effort. The 3 sessions were presented using hangouts, so they were recorded. I gained valuable feedback from attendees and became aware that some bits of my talk needed attention.

Just as I encourage teams to iterate on their automation, I've been iterating on my presentation. Over the next week or so I will be recutting my demos and polishing the presentation. If you have a spare 40 minutes I would really appreciate it if you watch one of the session recording below and leave a comment here with any feedback.

Global Training Days Chennai

Thumbnail frame from DrupalCamp Ghent presentation video

DrupalCamp Ghent

Thumbnail frame from DrupalCamp Ghent presentation video

Note: I recorded the audience not my slides.

DrupalCamp St Louis

Thumbnail frame from DrupalCamp St Louis presentation video

Note: There was an issue with the mic in St Louis, so there is no audio from their side.

Planet DebianJonas Meurer: apache rewritemap querystring

Apache2: Rewrite REQUEST_URI based on a bulk list of GET parameters in QUERY_STRING

Recently I searched for a solution to rewrite a REQUEST_URI based on GET parameters in QUERY_STRING. To make it even more complicated, I had a list of ~2000 parameters that had to be rewritten like the following:

if %{QUERY_STRING} starts with one of <parameters>:
    rewrite %{REQUEST_URI} from /new/ to /old/

Honestly, it took me several hours to find a solution that was satisfying and scales well. Hopefully this post will save time for others who need a similar solution.

Research and first attempt: RewriteCond %{QUERY_STRING} ...

After reading through some documentation, particularly Manipulating the Query String, the following ideas came to my mind at first:

RewriteCond %{REQUEST_URI} ^/new/
RewriteCond %{QUERY_STRING} ^(param1)(.*)$ [OR]
RewriteCond %{QUERY_STRING} ^(param2)(.*)$ [OR]
...
RewriteCond %{QUERY_STRING} ^(paramN)(.*)$
RewriteRule /new/ /old/?%1%2 [R,L]

or instead of an own RewriteCond for each parameter:

RewriteCond %{QUERY_STRING} ^(param1|param2|...|paramN)(.*)$

There has to be something smarter ...

But with ~2000 parameters to look up, neither of the solutions seemed particularly smart. Both scale really badly, and checking ~2000 conditions for every ^/new/ request is probably rather heavy work for Apache.

Instead I was searching for a solution to lookup a string from a compiled list of strings. RewriteMap seemed like it might be what I was searching for. I read the Apache2 RewriteMap documentation here and here and finally found a solution that worked as expected, with one limitation. But read on ...

The solution: RewriteMap and RewriteCond ${mapfile:%{QUERY_STRING}} ...

Finally, the solution was to use a RewriteMap with all parameters that shall be rewritten and check given parameters in the requests against this map within a RewriteCond. If the parameter matches, the simple RewriteRule applies.

For the impatient, here's the rewrite magic from my VirtualHost configuration:

RewriteEngine On
RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"
RewriteCond %{REQUEST_URI} ^/new/
RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND
RewriteRule ^/new/ /old/ [R,L]

A more detailed description of the solution

First, I created a RewriteMap at /tmp/rewrite-params.txt with all parameters to be rewritten. A RewriteMap requires two fields per line, one with the origin and the other with the replacement part. Since I use the RewriteMap merely for checking the condition, not for real string replacement, the second field doesn't matter to me. I ended up putting my parameters in both fields, but you could choose any random value for the second field:

/tmp/rewrite-params.txt:

param1 param1
param2 param2
...
paramN paramN

Then I created a DBM hash map file from that plain text map file, as DBM maps are indexed, while TXT maps are not. In other words: with big maps, DBM is a huge performance boost:

httxt2dbm -i /tmp/rewrite-params.txt -o /tmp/rewrite-params.map

Now, let's go through the VirtualHost configuration rewrite magic from above line by line. First line should be clear: it enables the Apache Rewrite Engine:

RewriteEngine On

Second line defines the RewriteMap that I created above. It contains the list of parameters to be rewritten:

RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"

The third line limits the rewrites to REQUEST_URIs that start with /new/. This is particularly required to prevent rewrite loops. Without that condition, queries that have been rewritten to /old/ would go through the rewrite again, resulting in an endless rewrite loop:

RewriteCond %{REQUEST_URI} ^/new/

The fourth line is the core condition: it checks whether QUERY_STRING (the GET parameters) is listed in the RewriteMap. A fallback value 'NOT_FOUND' is defined for the case that the lookup doesn't match. The condition is only true if the lookup was successful and the QUERY_STRING was found within the map:

RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND

The last line is a simple RewriteRule from /new/ to /old/. It is executed only if all previous conditions are met. The flags are R for redirect (issuing an HTTP redirect to the browser) and L for last (causing mod_rewrite to stop processing immediately after that rule):

RewriteRule ^/new/ /old/ [R,L]

Known issues

A big limitation of this solution (compared to the ones above) is that it looks up the whole QUERY_STRING in the RewriteMap. Therefore, it works only if param is the only GET parameter. In case of additional GET parameters, the second rewrite condition fails and nothing is rewritten, even if the first GET parameter is listed in the RewriteMap.
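One possible direction (an untested sketch, not part of the original setup): add a preceding RewriteCond that captures only the first parameter name from the query string, then look up that capture via %1 instead of the whole QUERY_STRING:

```apache
# Capture everything before the first '=' or '&' (the first parameter name)...
RewriteCond %{QUERY_STRING} ^([^=&]+)
# ...and look up only that capture in the map; %1 refers to the group
# matched by the previous RewriteCond.
RewriteCond ${RewriteParams:%1|NOT_FOUND} !=NOT_FOUND
```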

If anyone comes up with a solution to this limitation, I would be glad to learn about it :)

Planet DebianNorbert Preining: Android 7.0 Nougat – Root – PokemonGo

Since my switch to Android my Nexus 6p is rooted and I have happily fixed the Android (<7) font errors with Japanese fonts in English environment (see this post). The recently released Android 7 Nougat finally fixes this problem, so it was high time to update.

In addition, a recent update to Pokemon Go excluded rooted devices, so I was searching for a solution that allows me to: update to Nougat, keep root, and run PokemonGo (as well as some bank security apps etc).


After some playing around here are the steps I took:

Installation of necessary components

Warning: The following is for the Nexus 6p device; you need different image files and TWRP recovery for other devices.

Flash Nougat firmware images

Get it from the Google Android Nexus images web site. Unpacking the zip, and then the zip included within it, yields a lot of img files:

unzip angler-nrd90u-factory-7c9b6a2b.zip
cd angler-nrd90u/
unzip image-angler-nrd90u.zip

As I don’t want my user partition to get flashed, I did not use the included flash script, but did it manually:

fastboot flash bootloader bootloader-angler-angler-03.58.img
fastboot reboot-bootloader
sleep 5
fastboot flash radio radio-angler-angler-03.72.img
fastboot reboot-bootloader
sleep 5
fastboot erase system
fastboot flash system system.img
fastboot erase boot
fastboot flash boot boot.img
fastboot erase cache
fastboot flash cache cache.img
fastboot erase vendor
fastboot flash vendor vendor.img
fastboot erase recovery
fastboot flash recovery recovery.img
fastboot reboot

After that boot into the normal system and let it do all the necessary upgrades. Once this is done, let us prepare for systemless root and possible hiding of it.

Get the necessary files

Get Magisk, SuperSU-magisk, as well as the Magisk-Manager.apk from this forum thread (direct links as of 2016/9: Magisk-v6.zip, SuperSU-v2.76-magisk.zip, Magisk-Manager.apk).

Transfer these files to your device – I am using an external USB stick that can be plugged into the device, or copy them via your computer or via a cloud service.

Also we need to get a custom recovery image; I am using TWRP. I used the version 3.0.2-0 of TWRP I had already available, but that version didn’t manage to decrypt the file system and hung. One needs to get at least version 3.0.2-2 from the TWRP web site.

Install latest TWRP recovery

Reboot into boot-loader, then use fastboot to flash twrp:

fastboot erase recovery
fastboot flash recovery twrp-3.0.2-2-angler.img
fastboot reboot-bootloader

After that, select Recovery with the up/down buttons and start TWRP. You will be asked for your PIN if you have one set.

Install Magisk-v6.zip

Select “Install” in TWRP, select the Magisk-v6.zip file, and see your device being prepared for systemless root.

Install SuperSU, Magisk version

Again, boot into TWRP and use the install tool to install SuperSU-v2.76-magisk.zip. After reboot you should have a SuperSU binary running.

Install the Magisk Manager

From your device browse to the .apk and install it.

How to run SafetyNet-protected programs

Those programs that perform SafetyNet checks (Pokemon Go, Android Pay, several bank apps) need root disabled. Open the Magisk Manager and switch the root switch to the left (off). After this, starting the program should bring you past the SafetyNet check.

,

Planet DebianSteinar H. Gunderson: BBR opensourced

This is pretty big stuff for anyone who cares about TCP. Huge congrats to the team at Google.

TEDA 3D printed dress for the Paralympics, biodiversity in the heart of the city, and a camera that can read a closed book

As usual, the TED community has lots of news to share this week. Below, some highlights.

Man vs. machine? It took Danit Peleg just 100 hours to print the dress worn by fellow TEDster Amy Purdy in the opening ceremony of the Paralympics in Rio (if that sounds slow, consider that it took her 300 hours to print a dress a year ago). Peleg had never met Purdy before the first fitting, so she used Nettelo, an app that allows users to create a 3D scan of their body, to make sure the dress fit Purdy perfectly. Since Peleg used a soft material called Filaflex to print the dress, it moved beautifully as Purdy, a Paralympic medal-winner who lost both legs to bacterial meningitis at age 19, mesmerized audiences with a bionic samba routine. (Highlighting the fact that Purdy was also a finalist on Dancing With the Stars.) The dress was perfectly in line with Purdy’s dance, a reflection on the human relationship to technology and its ability to allow Paralympic athletes to reach their full potential — at one point, Purdy even danced with a robotic arm. (Watch Danit’s TED Talk and Amy’s TED Talk)

For the problems that affect us all, start small. Our national and international political institutions are hopelessly ill equipped to solve the complex, interdependent problems of the 21st century, says Benjamin Barber, but a solution is close at hand — cities, and the mayors who govern them. Barber has long dreamed of building on the urban networks that already exist in specific policy domains to form a global parliament of mayors, and with the inaugural convening of the Global Parliament of Mayors in The Hague, September 9-11, that dream is now a reality. More than 60 mayors agreed on The Hague Global Mayors Call to Action and discussed future governance of the GPM. They also discussed action-oriented plans for such issues as climate change, migration and refugees. (Watch Benjamin’s TED Talk)

Taking the measure of fragile cities. Robert Muggah’s Igarapé Institute is behind a data visualization platform on fragile cities, which launched at Barber’s Global Parliament of Mayors and includes information on more than 2,100 cities with populations of 250,000 or greater. Developed along with United Nations University, the World Economic Forum, and 100 Resilient Cities, the platform grades cities on 11 variables, including city population growth, unemployment, inequality, pollution, climate risk, homicide, and exposure to terrorism. Surprisingly, the analysis revealed that fragility is more widely distributed than previously thought. (Watch Robert’s TED Talk and read this Ideas piece co-written by Barber and Muggah)


Image permission granted by Robert Muggah.

Biodiversity in the City of Light. Shubhendu Sharma’s project to promote biodiversity in Paris has been selected as one of 37 projects to improve the city that will be put to a public vote. The vote is part of the city’s Participatory Budget Initiative, where residents submit proposals on concrete ways to improve their district or the city at large. The proposals are narrowed down before being voted on by residents (projects for the city at large and projects for specific districts are voted on separately). All residents, not just those who submitted the project, can help bring winning projects to life. Between 2014 and 2020, Paris has dedicated 5% of its capital budget to fund these projects, and in 2016 that commitment totals €100 million. By 2020, the investment will total close to half a billion Euros. (Watch Shubhendu’s TED Talk)

The poetry of dissonance. “I don’t remember the last time police / sirens didn’t feel like gasping for air,” writes Clint Smith in his debut poetry collection, Counting Descent, released on September 15. Weaving between personal and political histories, Smith masterfully tells a coming-of-age story exploring the cognitive dissonance that occurs when the community you belong to and the world you live in send you two very different messages. Specifically, he renders the dissonance stemming from straddling a world that frequently depicts blackness as a caricature of fear and communities that ardently celebrate black humanity. (Watch Clint’s TED Talk and read his Ideas post)

Listen up, language lovers. Many of us lament the shifts that occur in language over time, maintaining that language is steadily deteriorating as it succumbs to a steady onslaught of acronyms from our text messaging habits or a misuse of words that grows to be common and accepted over time, like the use of “literally” to mean “figuratively.” But linguist John McWhorter thinks you should think twice before complaining. His new book, Words on the Move, published September 6, explains why the evolution of language is not only natural, but good. (Watch John’s TED Talk and watch for a new talk from John this fall.)

Art that mixes oil and water. Fabian Oefner is on a quest to unite art and science. As he told audiences at TEDGlobal 2013, “On one hand, science is a very rational approach to its surroundings, whereas art on the other hand is usually an emotional approach to its surroundings. I’m trying to bring those two views into one so that my images both speak to the viewer’s heart but also to the viewer’s brain.” His latest work, Oil Spill, is no exception. The photographs show the captivating result of mixing oil and water. The bright colors result from the refraction and reflection of light as it travels through the lens of the camera. (Watch Fabian’s TED Talk)

Do judge a book by its cover. How do you read a closed book? It sounds like a trick question, but Ramesh Raskar and colleagues have developed a camera that can do just that. In order to test the prototype, the researchers used a stack of papers, each sheet with one letter printed on it, and the camera was able to correctly identify the letters on the first 9 sheets. The camera uses a type of electromagnetic radiation called terahertz radiation and could eventually allow academics and researchers to access ancient books and documents too fragile to open. The system could also be applied for analysis of other materials that occur in thin layers, such as the coatings on machine parts or pharmaceuticals. On September 13, Raskar was awarded the Lemelson-MIT prize for his co-invention of many breakthrough imaging solutions including this camera, a camera that can see around corners, and low-cost eye care solutions. (Watch Ramesh’s TED Talk, and read more about the technology behind the camera in this Ideas piece.)

The real meaning of conspiracies. In The New York Times, Zeynep Tufekci explains how the prevalence of conspiracy theories in America’s current election cycle — think Hillary Clinton’s body double or the head of her Secret Service who’s really her hypnotist — is not an anomaly, but a symptom of problems that run much deeper. Conspiracy theories are nothing new, she says, but the growth of technology and declining trust in public institutions means that their number is only growing. “Conspiracy theories are like mosquitoes that thrive in swamps of low-trust societies, weak institutions, secretive elites and technology that allows theories unanchored from truth to spread rapidly. Swatting them one at a time is mostly futile: The real answer is draining the swamps.” (Watch Zeynep’s TED Talk and watch for a new talk from her this fall.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.


CryptogramFriday Squid Blogging: Giant Squid on Japanese Television

I got this video from PZ Myers's blog. I know absolutely nothing about it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramHacking Bridge-Hand Generation Software

Interesting:

Roughly three weeks later, there is an operational program available to crack ACBL hand records.

  • Given three consecutive boards, all the remaining boards for that session can be determined.
  • The program can be easily parallelized. This analysis can be finished while sessions are still running.

This would permit the following type of attack:

  • A confederate watches boards 1-3 of the USBF team trials on vugraph
  • The confederate uses Amazon web services to crack all the rest of the boards for that session
  • The confederate texts the hands to a player's smart phone
  • The player hits the head, whips out his smart phone, and ...

CryptogramHacking Wireless Tire-Pressure Monitoring System

Research paper: "Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study," by Ishtiaq Rouf, Rob Miller, Hossen Mustafa, Travis Taylor, Sangho Oh, Wenyuan Xu, Marco Gruteser, Wade Trapper, Ivan Seskar:

Abstract: Wireless networks are being integrated into the modern automobile. The security and privacy implications of such in-car networks, however, are not well understood as their transmissions propagate beyond the confines of a car's body. To understand the risks associated with these wireless systems, this paper presents a privacy and security evaluation of wireless Tire Pressure Monitoring Systems using both laboratory experiments with isolated tire pressure sensor modules and experiments with a complete vehicle system. We show that eavesdropping is easily possible at a distance of roughly 40m from a passing vehicle. Further, reverse-engineering of the underlying protocols revealed static 32 bit identifiers and that messages can be easily triggered remotely, which raises privacy concerns as vehicles can be tracked through these identifiers. Further, current protocols do not employ authentication and vehicle implementations do not perform basic input validation, thereby allowing for remote spoofing of sensor messages. We validated this experimentally by triggering tire pressure warning messages in a moving vehicle from a customized software radio attack platform located in a nearby vehicle. Finally, the paper concludes with a set of recommendations for improving the privacy and security of tire pressure monitoring systems and other forthcoming in-car wireless sensor networks.

Worse Than FailureError'd: Something Seems to be Wrong with the Internet

"Just perfect. This is not a good day for the entire Internet to break. Thanks for the heads up, Moodle," writes Đuro M.

 

"You know, Amazon, I don't really think I'll be saving much by buying a two pack," writes Rohit D.

 

Eric wrote, "With your help, you can make their '150k dream' come true."

 

Pascal wrote, "I was so worried about how much time it would take to install these 50 place cards, but Amazon has me covered."

 

"Oh man. I hear you there Yahoo Sports! Every football season, I too get nostalgic for those classic commercials," writes David K.

 

"VPS from another universe?" Adam K. wrote, "Who makes that happen? A Quantum Sysadmin?"

 

Peter S. writes, "Carfax has an interesting definition of 'unlimited'."

 

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

CryptogramOrganizational Doxing and Disinformation

In the past few years, the devastating effects of hackers breaking into an organization's network, stealing confidential data, and publishing everything have been made clear. It happened to the Democratic National Committee, to Sony, to the National Security Agency, to the cyber-arms weapons manufacturer Hacking Team, to the online adultery site Ashley Madison, and to the Panamanian tax-evasion law firm Mossack Fonseca.

This style of attack is known as organizational doxing. The hackers, in some cases individuals and in others nation-states, are out to make political points by revealing proprietary, secret, and sometimes incriminating information. And the documents they leak do that, airing the organizations' embarrassments for everyone to see.

In all of these instances, the documents were real: the email conversations, still-secret product details, strategy documents, salary information, and everything else. But what if hackers were to alter documents before releasing them? This is the next step in organizational doxing -- and the effects can be much worse.

It's one thing to have all of your dirty laundry aired in public for everyone to see. It's another thing entirely for someone to throw in a few choice items that aren't real.

Recently, Russia has started using forged documents as part of broader disinformation campaigns, particularly in relation to Sweden's entry into a military partnership with NATO, and Russia's invasion of Ukraine.

Forging thousands -- or more -- documents is difficult to pull off, but slipping a single forgery in an actual cache is much easier. The attack could be something subtle. Maybe a country that anonymously publishes another country's diplomatic cables wants to influence yet a third country, so adds some particularly egregious conversations about that third country. Or the next hacker who steals and publishes email from climate change researchers invents a bunch of over-the-top messages to make his political point even stronger. Or it could be personal: someone dumping email from thousands of users could make changes in the messages sent by a friend, relative, or lover.

Imagine trying to explain to the press, eager to publish the worst of the details in the documents, that everything is accurate except this particular email. Or that particular memo. That the salary document is correct except that one entry. Or that the secret customer list posted up on WikiLeaks is correct except that there's one inaccurate addition. It would be impossible. Who would believe you? No one. And you couldn't prove it.

It has long been easy to forge documents on the Internet. It's easy to create new ones, and modify old ones. It's easy to change things like a document's creation date, or a photograph's location information. With a little more work, pdf files and images can be altered. These changes will be undetectable. In many ways, it's surprising that this kind of manipulation hasn't been seen before. My guess is that hackers who leak documents don't have the secondary motives to make the data dumps worse than they already are, and nation-states have just gotten into the document leaking business.

Major newspapers do their best to verify the authenticity of leaked documents they receive from sources. They only publish the ones they know are authentic. The newspapers consult experts, and pay attention to forensics. They have tense conversations with governments, trying to get them to verify secret documents they're not actually allowed to admit even exist. This is only possible because the news outlets have ongoing relationships with the governments, and they care that they get it right. There are lots of instances where neither of these two things are true, and lots of ways to leak documents without any independent verification at all.

No one is talking about this, but everyone needs to be alert to the possibility. Sooner or later, the hackers who steal an organization's data are going to make changes in them before they release them. If these forgeries aren't questioned, the situations of those being hacked could be made worse, or erroneous conclusions could be drawn from the documents. When someone says that a document they have been accused of writing is forged, their arguments at least should be heard.

This essay previously appeared on TheAtlantic.com.

Planet DebianDirk Eddelbuettel: anytime 0.0.2: Added functionality

anytime arrived on CRAN via release 0.0.1 a good two days ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

This new release 0.0.2 adds two new functions to gather conversion formats -- and set new ones. It also fixed a minor build bug, and robustifies a conversion which was seen to be not quite right under some time zones.

The NEWS file summarises the release:

Changes in anytime version 0.0.2 (2016-09-15)

  • Refactored to use a simple class wrapped around two vectors with (string) formats and locales; this allows for adding formats; also adds accessors for formats (#4, closes #1 and #3).

  • New function addFormats() and getFormats().

  • Relaxed one test which showed problems on some platforms.

  • Added as.POSIXlt() step to anydate() ensuring all POSIXlt components are set (#6 fixing #5).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Krebs on SecurityRansomware Getting More Targeted, Expensive

I shared a meal not long ago with a source who works at a financial services company. The subject of ransomware came up and he told me that a server in his company had recently been infected with a particularly nasty strain that spread to several systems before the outbreak was quarantined. He said the folks in finance didn’t bat an eyelash when asked to authorize several payments of $600 to satisfy the Bitcoin ransom demanded by the intruders: After all, my source confessed, the data on one of the infected systems was worth millions — possibly tens of millions — of dollars, but for whatever reason the company didn’t have backups of it.

This anecdote has haunted me because it speaks volumes about what we can likely expect in the very near future from ransomware — malicious software that scrambles all files on an infected computer with strong encryption, and then requires payment from the victim to recover them.

Image: Kaspersky Lab

What we can expect is not only more targeted and destructive attacks, but also ransom demands that vary based on the attacker’s estimation of the value of the data being held hostage and/or the ability of the victim to pay some approximation of what it might be worth.

In an alert published today, the U.S. Federal Bureau of Investigation (FBI) warned that recent ransomware variants have targeted and compromised vulnerable business servers (rather than individual users) to identify and target hosts, thereby multiplying the number of potential infected servers and devices on a network.

“Actors engaging in this targeting strategy are also charging ransoms based on the number of hosts (or servers) infected,” the FBI warned. “Additionally, recent victims who have been infected with these types of ransomware variants have not been provided the decryption keys for all their files after paying the ransom, and some have been extorted for even more money after payment.”

According to the FBI, this recent technique of targeting host servers and systems “could translate into victims paying more to get their decryption keys, a prolonged recovery time, and the possibility that victims will not obtain full decryption of their files.”

fbipsi-ransom

Today there are dozens of ransomware strains, most of which are sold on underground forums as crimeware packages — with new families emerging regularly. These kits typically include a point-and-click software interface for selecting various options that the ransom installer may employ, as well as instructions that tell the malware where to direct the victim to pay the ransom. Some kits even bundle the HTML code needed to set up the Web site that users will need to visit to pay and recover their files.

To some degree, a variance in ransom demands based on the victim’s perceived relative wealth is already at work. Lawrence Abrams, owner of the tech-help site BleepingComputer, said his analysis of multiple ransomware kits and control channels that were compromised by security professionals indicates that these kits usually include default suggested ransom amounts that vary depending on the geographic location of the victim.

“People behind these scams seem to be setting different rates for different countries,” Abrams said. “Victims in the U.S. generally pay more than people in, say, Spain. There was one [kit] we looked at recently that showed while victims in the U.S. were charged $200 in Bitcoin, victims in Italy were asked for just $20 worth of Bitcoin by default.”

In early 2016, a new ransomware variant dubbed “Samsam” (PDF) was observed targeting businesses running outdated versions of Red Hat‘s JBoss enterprise products. When companies were hacked and infected with Samsam, Abrams said, they received custom ransom notes with varying ransom demands.

“When these companies were hacked, they each got custom notes with very different ransom demands that were much higher than the usual amount,” Abrams said. “These were very targeted.”

Which brings up the other coming shift with ransomware: More targeted ransom attacks. For the time being, most ransomware incursions are instead the result of opportunistic malware infections. The first common distribution method is spamming the ransomware installer out to millions of email addresses, disguising it as a legitimate file such as an invoice.

More well-heeled attackers may instead or also choose to spread ransomware using “exploit kits,” a separate crimeware-as-a-service product that is stitched into hacked or malicious Web sites and lies in wait for someone to visit with a browser that is not up to date with the latest security patches (either for the browser itself or for a myriad of browser plugins like Adobe Flash or Adobe Reader).

But Abrams said that’s bound to change, and that the more targeted attacks are likely to come from individual hackers who can’t afford to spend thousands of dollars a month renting exploit kits.

“If you throw your malware into a good exploit kit, you can achieve a fairly wide distribution of it in a short amount of time,” Abrams said. “The only problem is the good kits are very expensive and can cost upwards of $4,000 per month. Right now, most of these guys are just throwing the ransomware up in the air and wherever it lands is who they’re targeting. But that’s going to change, and these guys are going to start more aggressively targeting really data intensive organizations like medical practices and law and architectural firms.”

Earlier this year, experts began noticing that ransomware purveyors appeared to be targeting hospitals — organizations that are extremely data-intensive and heavily reliant on instant access to patient records. Indeed, the above-mentioned Samsam ransomware family is thought to be targeting healthcare firms.

According to a new report by Intel Security, the healthcare sector is experiencing over 20 data loss incidents per day related to ransomware attacks. The company said it identified almost $100,000 in payments from hospital ransomware victims to specific bitcoin accounts so far in 2016.

RUSSIAN ROULETTE

An equally disturbing trend in ransomware is the incidence of new strains which include the ability to randomly delete an encrypted file from the victim’s machine at some predefined interval, and to continue doing so unless and until the ransom demand is paid or there are no more files to destroy.

Abrams said a ransomware variant known as “Jigsaw” debuted this capability in April 2016. Jigsaw also penalized victims who tried to reboot their computer in an effort to rid the machine of the infection, by randomly deleting 1,000 encrypted files for each reboot.

“Basically, what it would do is show a two hour countdown clock, and when that clock got to zero it would delete a random encrypted file,” Abrams said. “And then every hour after that it would double the number of files it deleted unless you paid.”
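The doubling schedule Abrams describes escalates quickly; a tiny shell loop (purely illustrative, not Jigsaw's actual code) shows how many files would be gone after a few hourly ticks:

```shell
# Illustrative only: one file deleted at the first deadline, with the
# count doubling every hour thereafter.
n=1
total=0
for hour in 1 2 3 4 5 6; do
    total=$((total + n))
    echo "hour $hour: delete $n file(s), $total gone in total"
    n=$((n * 2))
done
```

After just six ticks, 63 files are gone, which is the pressure the countdown is designed to create.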

Part of the ransom note left behind by Jigsaw. Image: Bleepingcomputer.com


Abrams said this same Russian Roulette feature recently has shown up in other ransomware strains, including one called “Stampado” and another dubbed “Philadelphia.”

“Philadelphia has a similar feature where [one] can specify how many files it deletes and how often,” he said.

Most ransomware variants have used some version of the countdown clock, with victims most often being told they have 72 hours to pay the ransom or else kiss their files goodbye forever. In practice, however, the people behind these schemes are usually happy to extend that deadline, but the ransom demands almost invariably increase significantly at that point.

The introduction of a destructive element tied to a countdown clock is especially worrisome given how difficult it can be for the unlearned to obtain the virtual Bitcoin currency needed to pay the ransom, Abrams said.

“I had an architectural firm reach out to me, and they’d decided to pay the ransom,” he said. “So I helped my contact there figure out how to create an account at Coinbase.com and get funds into there, but the whole process took almost a week.”

Hoping to get access to his files more immediately, Abrams’ contact at the architectural firm inquired about more speedy payment options. Abrams told him about localbitcoins.com, which helps people meet in person to exchange bitcoins for cash. In the end, however, the contact wasn’t comfortable with this option.

“It’s not hard to see why,” he said. “Some of the exchangers on there have crazy demands, like ‘Meet me at the local Starbucks, and absolutely no phones!’ It really sort of feels like a drug deal.”


The ransom demand left by Stampado. Image: Bleepingcomputer.com

HOW TO PREVENT ATTACKS & WHAT TO DO IF YOU’RE A VICTIM

In its alert published today, the FBI urged victims of ransomware incidents to report the crimes to federal law enforcement to help the government “gain a more comprehensive view of the current threat and its impact on U.S. victims.”

Specifically, the FBI is asking victims to report the date of infection; the ransomware variant; how the infection occurred; the requested ransom amount; the actor's Bitcoin wallet address; the ransom amount paid (if any); the overall losses associated with the ransomware infection; and a victim impact statement.

Previous media reports have quoted an FBI agent saying that the agency condones paying such ransom demands. But today’s plea from the feds to ransomware victims is unequivocal on this point:

“The FBI does not support paying a ransom to the adversary,” the agency advised. “Paying a ransom does not guarantee the victim will regain access to their data; in fact, some individuals or organizations are never provided with decryption keys after paying a ransom.”

What can businesses do to lessen the chances of becoming the next ransomware victim? The FBI has the following tips:

  • Regularly back up data and verify the integrity of those backups. Backups are critical in ransomware incidents; if you are infected, backups may be the best way to recover your critical data.
  • Secure your backups. Ensure backups are not connected to the computers and networks they are backing up. Examples might include securing backups in the cloud or physically storing them offline. It should be noted, some instances of ransomware have the capability to lock cloud-based backups when systems continuously back up in real-time, also known as persistent synchronization.
  • Scrutinize links contained in e-mails and do not open attachments included in unsolicited e-mails.
  • Only download software – especially free software – from sites you know and trust. When possible, verify the integrity of the software through a digital signature prior to execution.
  • Ensure application patches for the operating system, software, and firmware are up to date, including Adobe Flash, Java, Web browsers, etc.
  • Ensure anti-virus and anti-malware solutions are set to automatically update and regular scans are conducted.
  • Disable macro scripts from files transmitted via e-mail. Consider using Office Viewer software to open Microsoft Office files transmitted via e-mail instead of full Office Suite applications.
  • Implement software restrictions or other controls to prevent the execution of programs in common ransomware locations, such as temporary folders supporting popular Internet browsers, or compression/decompression programs, including those located in the AppData/LocalAppData folder.
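The "verify the integrity of those backups" advice in the first bullet is easy to automate: a checksum manifest written at backup time will catch silent corruption long before you need to restore. A minimal POSIX-sh sketch (not part of the FBI alert; the paths in the usage comment are hypothetical):

```shell
#!/bin/sh
# Sketch of backup-integrity verification: record a SHA-256 manifest
# when the backup is written, then re-check it before restoring.

# write_manifest DIR -- hash every file in DIR into MANIFEST.sha256
write_manifest() (
  cd "$1" && find . -type f ! -name MANIFEST.sha256 \
    -exec sha256sum {} + > MANIFEST.sha256
)

# verify_backup DIR -- re-hash everything; a non-zero exit means at
# least one file no longer matches its recorded checksum
verify_backup() (
  cd "$1" && sha256sum -c --quiet MANIFEST.sha256
)

# Typical use (paths hypothetical):
#   write_manifest /mnt/backup/nightly    # at backup time
#   verify_backup  /mnt/backup/nightly && echo "backup intact"
```

Run the verify step on a schedule, not just before a restore; a backup you have never checked is a backup you merely hope you have.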

Additional considerations for businesses include the following:

  • Focus on awareness and training. Because end users are often targeted, employees should be made aware of the threat of ransomware, how it is delivered, and trained on information security principles and techniques.
  • Patch all endpoint device operating systems, software, and firmware as vulnerabilities are discovered. This precaution can be made easier through a centralized patch management system.
  • Manage the use of privileged accounts by implementing the principle of least privilege. No users should be assigned administrative access unless absolutely needed. Those with a need for administrator accounts should only use them when necessary; they should operate with standard user accounts at all other times.
  • Configure access controls with least privilege in mind. If a user only needs to read specific files, he or she should not have write access to those files, directories, or shares.
  • Use virtualized environments to execute operating system environments or specific programs.
  • Categorize data based on organizational value, and implement physical/logical separation of networks and data for different organizational units. For example, sensitive research or business data should not reside on the same server and/or network segment as an organization’s e-mail environment.
  • Require user interaction for end user applications communicating with Web sites uncategorized by the network proxy or firewall. Examples include requiring users to type in information or enter a password when the system communicates with an uncategorized Web site.
  • Implement application whitelisting. Only allow systems to execute programs known and permitted by security policy.

Planet DebianCraig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It’s Alive!

The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.

Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.

I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run a wheezy container on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages inside the container, including any -dev packages (such as libc6-dev) that depend on the specific version of libc6.

In my case, I was using docker but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.

To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK…and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.

I started with the base wheezy image we’re using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

APT::Default-Release "wheezy";

Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.

I then installed the following:

apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev

To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie anyway).
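Putting those pieces together, the whole frankenwheezy rebuild fits in a few Dockerfile lines. This is a sketch only: the base image name and mirror URLs are placeholders for whatever your site uses:

```dockerfile
# Base image name is a placeholder for your local wheezy image
FROM local/wheezy

# Add jessie package sources (mirror URL is hypothetical)
RUN echo "deb http://mirror.example.com/debian jessie main" \
      >> /etc/apt/sources.list && \
    echo "deb http://mirror.example.com/debian jessie-updates main" \
      >> /etc/apt/sources.list

# Keep wheezy as the default release so later apt-get installs in
# dependent Dockerfiles don't silently pull from jessie
RUN echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf

# Pull in just enough of jessie: libc6 plus the -dev chain we need
RUN apt-get update && \
    apt-get -y -t jessie install libc6 locales libc6-dev krb5-multidev \
        comerr-dev zlib1g-dev libssl-dev libpq-dev
```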

Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (coincidentally, the same machine with the docker VM installed), imported it into docker there, and tested it to make sure it didn’t have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.

That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.

When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. It’s on the TODO list at work, but everyone else is too busy – a perfect job for an unpaid volunteer. Wheezy’s getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.

Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24 is a post from: Errata

Sociological ImagesWhat to Do with All the Wild Horses?

Rumors are circulating that the Bureau of Land Management (BLM) has plans to euthanize 44,000 wild horses. The rumor is partly true. An advisory board has authorized the BLM to do so; they have yet to make a decision as to whether they will. Even the possibility of such a widespread cull, though, has understandably sparked outrage. Yet the reality of the American mustang is not as simple as the love and admiration for these animals suggests.

Mustangs are powerful symbols of the American West. The modern mustang is the descendant of various breeds of horses worked by everyone from Spanish conquistadors to pioneers in wagon trains into the Western US. Some inevitably escaped over time and formed herds of feral horses. Wild herds in the east were generally either driven west or recaptured over time as the frontier moved ever westward (the wild ponies of Assateague Island off the coast of Virginia being a famous exception). Over time, they became inextricably entwined with perceptions of the West as still wild and free, not yet fully domesticated. The image of a herd of beautiful horses against a gorgeous but austere Western landscape is a striking one, perhaps something like this:


So how do we get from that to these mustangs penned up in a pasture running after a feed truck in Oklahoma (a screenshot from the video below):


It’s a complicated story involving conflicts surrounding federal land management, public attitudes toward mustangs, and unintended consequences of public policies.

Wild horses fall under the purview of the BLM because most live on public range (particularly in Nevada, California, and Idaho, as well as Washington, Wyoming, and other Western states). Mustangs have no natural predators in the West; mountain lions, bears, and wolves kill some horses each year, but their numbers simply aren’t large enough to be a systematic form of population control for wild horse herds, especially given that horses aren’t necessarily their first choice for a meal. So wild horse herds can grow fairly rapidly. Currently the BLM estimates there are about 67,000 wild horses and burros on public land in the West, 40,000 more than the BLM thinks the land can reasonably sustain.

Of course, managing wild horses is one small part of the BLM’s mission. The agency is tasked with balancing various uses of federal lands, including everything from resource extraction (such as mining and logging), recreational uses for the public, grazing range for cattle ranchers, wildlife habitat conservation, preservation of archaeological and historical sites, providing water for irrigation as well as residential use, and many, many more. And many of these uses conflict to some degree. Setting priorities among various potential uses of BLM land has, over time, become a very contentious process, as different groups battle, often through the courts, to have their preferred use of BLM land prioritized over others.

The important point here is that managing wild horse numbers is part, but only a small part, of the BLM’s job. They decide on the carrying capacity of rangeland — that is, how many wild horses it can sustainably handle — by taking into account competing uses, like how many cattle will be allowed on the same land, its use as wildlife habitat, possible logging or mining activities, and so on. And much of the time the BLM concludes that, given their balance of intended uses, there are too many horses.

So what does the BLM do when they’ve decided there are too many horses?

For many years, the BLM simply allowed them to be killed; private citizens had a more or less free pass to kill them. There wasn’t a lot of oversight regarding how many could be killed or the treatment of the horses during the process. Starting in the late 1950s, the BLM began to get negative press, and a movement to protect wild horses emerged. It culminated in the Wild Free-Roaming Horses and Burros Act, passed in 1971. The law didn’t ban killing wild horses, but it provided some protection for them and required the BLM to ensure humane treatment, guarantee the presence of wild horses on public lands, and encourage other methods of disposing of excess horses.

One such method is making such horses (and burros) available to the general public for adoption. The BLM holds periodic adoption events. However, currently the demand for these animals isn’t nearly large enough to absorb the supply. For instance, in 2010, 9,715 wild horses were removed from public lands, while 2,742 were adopted.

So, there aren’t enough people to adopt them and killing them has become increasingly unpopular. Controlling herd populations through some form of birth control hasn’t been widely implemented and has led to lawsuits. What to do?

One solution was for the federal government to pay private citizens to care for mustangs removed from public lands. Today there are 46,000 wild horses penned up on private lands, fed by feed trucks, something for which the American taxpayer pays $49 million a year. Holding wild horses has become a business. Here’s a news segment about one of these wild horse operations:

The ranch in the video is owned by the Drummond family, a name that might ring a bell if you’re familiar with the incredibly popular website The Pioneer Woman, by Ree Drummond. They are just one of several ranching families in north central Oklahoma that have received contracts to care for wild horses.

In addition to the sheer cost involved, paying private citizens to hold wild horses brings a whole new set of controversies, as well as unintended consequences for the region. Federal payments for the wild horse and burro maintenance program are public information. A quick look at the federal contracts database shows that in just the first three financial quarters of 2009, for example, the Drummonds (a large, multi-generational ranching family) received over $1.6 million. Overall, two-thirds of the BLM budget for managing wild horses goes to paying for holding animals that have been removed from public lands, either in short-term situations before adoptions or in long-term contracts like the ones in Oklahoma.

This is very lucrative. Because prices are guaranteed in advance, holding wild horses isn’t as risky as raising cattle. And, if a horse dies, the BLM just gives the rancher a new one. But this income-generating opportunity isn’t available to everyone; generally only the very largest landowners get a chance. From the BLM’s perspective, it’s more efficient to contract with one operation to take 2,000 horses than to contract with 20 separate people to take 100 each. So almost all small and mid-size operations are shut out of the contracts. This has led to an inflow of federal money to operations that were already quite prosperous by local standards. These landowners then have a significant advantage when it comes to trying to buy or lease pastures that become available in the area; other ranchers have almost no chance of competing with the price they can pay. The result is more concentration of land ownership as small and medium-sized ranchers, or those hoping to start up a ranch from scratch, are priced out of the market. In other words, the wild horse holding program contributes to the wealth of the 1%, while everyone else’s economic opportunities are harmed.

This is why the BLM is considering a cull. Not because they love the idea of killing off mustangs, but because they’re caught between a dozen rocks and hard places, trying to figure out how to best manage a very complicated problem, with no resolution in sight.

Revised and updated; originally posted in 2011. Cross-posted at Scientopia and expanded for Contexts.

Gwen Sharp, PhD is a professor of sociology and the Associate Dean of liberal arts and sciences at Nevada State College. 

(View original at https://thesocietypages.org/socimages)

CryptogramRecovering an iPhone 5c Passcode

Remember the San Bernardino killer's iPhone, and how the FBI maintained that they couldn't get the encryption key without Apple providing them with a universal backdoor? Many of us computer-security experts said that they were wrong, and there were several possible techniques they could use. One of them was manually removing the flash chip from the phone, extracting the memory, and then running a brute-force attack without worrying about the phone deleting the key.

The FBI said it was impossible. We all said they were wrong. Now, Sergei Skorobogatov has proved them wrong. Here's his paper:

Abstract: This paper is a short summary of a real world mirroring attack on the Apple iPhone 5c passcode retry counter under iOS 9. This was achieved by desoldering the NAND Flash chip of a sample phone in order to physically access its connection to the SoC and partially reverse engineering its proprietary bus protocol. The process does not require any expensive and sophisticated equipment. All needed parts are low cost and were obtained from local electronics distributors. By using the described and successful hardware mirroring process it was possible to bypass the limit on passcode retry attempts. This is the first public demonstration of the working prototype and the real hardware mirroring process for iPhone 5c. Although the process can be improved, it is still a successful proof-of-concept project. Knowledge of the possibility of mirroring will definitely help in designing systems with better protection. Also some reliability issues related to the NAND memory allocation in iPhone 5c are revealed. Some future research directions are outlined in this paper and several possible countermeasures are suggested. We show that claims that iPhone 5c NAND mirroring was infeasible were ill-advised.

Susan Landau explains why this is important:

The moral of the story? It's not, as the FBI has been requesting, a bill to make it easier to access encrypted communications, as in the proposed revised Burr-Feinstein bill. Such "solutions" would make us less secure, not more so. Instead we need to increase law enforcement's capabilities to handle encrypted communications and devices. This will also take more funding as well as redirection of efforts. Increased security of our devices and simultaneous increased capabilities of law enforcement are the only sensible approach to a world where securing the bits, whether of health data, financial information, or private emails, has become of paramount importance.

Or: The FBI needs computer-security expertise, not backdoors.

Patrick Ball writes about the dangers of backdoors.

TEDTEDWomen 2016 speaker lineup announced!

Curator and host Pat Mitchell introduces a session of TEDWomen. This year’s speaker lineup features 40+ women and men from many fields, all speaking to the theme: It’s About Time. Photo: Marla Aufmuth/TED

Many people ask, “How are speakers selected for TEDWomen?” The answer is that speakers, like ideas, come from many different sources. TED has an open recommendation process on TED.com, and we review those as well as suggestions that come in from everywhere. Sometimes people self-nominate but mostly, fans of TEDTalks submit names of women and men whose ideas, work and stories they have discovered and that they feel would make strong TEDTalks.

This year our initial list was more than 150 names, each one a potential TEDTalk, making our final choices very challenging. In part, we review the speakers for the relevance of their ideas to the conference theme, which this year is “It’s about time.” We also take into account the important fact that TEDWomen is a global conference with multiple TEDxWomen conferences convening simultaneously on every continent, taking a live stream of TEDWomen, so global perspective and a diversity of backgrounds are significant factors in our selections, too.

As the editorial director and curator, I work with the amazing TED team of curators and my awesome colleague, Betsy Scolnik, to make selections of speakers, and we’re thrilled to present this year’s speaker program: 40* extraordinary speakers, women and men, in six sessions.

Browse the entire 2016 lineup »

Partial list of TEDWomen 2016 speakers. Click to see the entire lineup.

I believe this year’s program further affirms that TEDWomen is not a conference about women so much as it is a conference where women and their ideas — on everything from race to nuclear weapons to philanthropy and time management — are the reason that nearly 1,000 women and men will gather in San Francisco in October.

This is not a conference about well-known people, though you may recognize a few names and faces. It is a conference whose speakers are working hard to make the presentation of their stories and ideas memorable and important for you to hear. The TEDWomen team and I can’t wait to share them with you.

Warm regards,

Pat Mitchell, Betsy Scolnik and the TED Team

*And we have a few surprises that we are not announcing today … so watch for other names to come!


The theater is sold out, but we have decided to offer discounted registrations that include all conference activities except for guaranteed seats in the theater. These registrations provide comfortable viewing in our Simulcast Lounge, where everyone gathers during breaks between sessions. Find out more at the TEDWomen website.


Planet Linux AustraliaPia Waugh: Moving to …

Last October data.gov.au was moved from the Department of Finance to the Department of Prime Minister and Cabinet (PM&C) and I moved with the team before going on maternity leave in January. In July of this year, whilst still on maternity leave, I announced that I was leaving PM&C but didn’t say what the next gig was. In choosing my work I’ve always tried to choose new areas, new parts of the broader system to better understand the big picture. It’s part of my sysadmin background – I like to understand the whole system and where the config files are so I can start tweaking and making improvements. These days I see everything as a system, and anything as a “config file”, so there is a lot to learn and tinker with!

Over the past 3 months, my little family (including new baby) has been living in New Zealand on a bit of a sabbatical, partly to spend time with the new bub during that lovely 6-8 month period, but partly for us to have the time and space to consider next steps, personally and professionally. Whilst in New Zealand I was invited to spend a month working with the data.govt.nz team, which was awesome, and to share some of my thoughts on digital government and what systemic “digital transformation” could mean. It was fun and I had incredible feedback from my work there, which was wonderful and humbling. Although tempting to stay, I wanted to return to Australia for a fascinating new opportunity to expand my professional horizons.

Thus far I’ve worked in the private sector, non-profits and voluntary projects, the political sphere (as an advisor), and in the Federal and State/Territory public sectors. I took some time whilst on maternity leave to think about what I wanted to experience next, and where I could do some good whilst building on my experience and skills to date. I had some interesting offers but having done further tertiary study recently into public policy, governance, global organisations and the highly complex world of international relations, I wanted to better understand both the regulatory sphere and how international systems work. I also wanted to work somewhere where I could have some flexibility for balancing my new family life.

I’m pleased to say that my next gig ticks all the boxes! I’ll be starting next week at AUSTRAC, the Australian financial intelligence agency and regulator where I’ll be focusing on international data projects. I’m particularly excited to be working for the brilliant Dr Maria Milosavljevic (Chief Innovation Officer for AUSTRAC) who has a great track record of work at a number of agencies, including as CIO of the Australian Crime Commission. I am also looking forward to working with the CEO, Paul Jevtovic APM, who is a strong and visionary leader for the organisation, and I believe a real change agent for the broader public sector.

It should be an exciting time and I look forward to sharing more about my work over the coming months! Wish me luck :)

Worse Than FailureCodeSOD: It Takes One Function

Longleat maze

This anonymous submission is the result of our submitter decompiling a Flash application to see how it worked. It’s not often that one thinks to himself, “Wow, some case statements would’ve been a lot better,” but here we are.


      private function onMessage(param1:MessageEvent) : void
      {
         e = param1;
         var decoded:Object = JSON.parse(e.data.body);
         if(!(decoded.tags && decoded.tags[0] == "trnupdate"))
         {
            f_bwt("[XMPPConnectionManager] - Message received:\n",JSON.stringify(e.data.body));
         }
         if(decoded.tags)
         {
            if(decoded.tags[1] == "ok")
            {
               var _loc3_:* = decoded.tags[0];
               if("getcashgames" !== _loc3_)
               {
                  if("lobby_subscribe" !== _loc3_)
                  {
                     if("getcgtickets" !== _loc3_)
                     {
                        if("gettourtickets" !== _loc3_)
                        {
                           if("getflopsseen" !== _loc3_)
                           {
                              if("getbonuses" !== _loc3_)
                              {
                                 if("getplayerstatus" !== _loc3_)
                                 {
                                    if("getdeeplink" !== _loc3_)
                                    {
                                       if("getbuyins" !== _loc3_)
                                       {
                                          if("getmoneyinplay" !== _loc3_)
                                          {
                                             if("createidentity" !== _loc3_)
                                             {
                                                if("setidentity" !== _loc3_)
                                                {
                                                   if("setachbadge" !== _loc3_)
                                                   {
                                                      if("gethands" !== _loc3_)
                                                      {
                                                         if("gethanddetails" !== _loc3_)
                                                         {
                                                            if("gettournames" !== _loc3_)
                                                            {
                                                               if("getssoticket" !== _loc3_)
                                                               {
                                                                  if("getregistrations" !== _loc3_)
                                                                  {
                                                                     if("addPlayer" !== _loc3_)
                                                                     {
                                                                        if("removePlayer" !== _loc3_)
                                                                        {
                                                                           if("checkFilter" !== _loc3_)
                                                                           {
                                                                              if("leave" !== _loc3_)
                                                                              {
                                                                                 if("getnotiscount" !== _loc3_)
                                                                                 {
                                                                                    if("getnotis" !== _loc3_)
                                                                                    {
                                                                                       if("acknotis" !== _loc3_)
                                                                                       {
                                                                                          if("delnotis" !== _loc3_)
                                                                                          {
                                                                                             if("play" !== _loc3_)
                                                                                             {
                                                                                                if("getflopsneeded" !== _loc3_)
                                                                                                {
                                                                                                   if("getRail" === _loc3_)
                                                                                                   {
                                                                                                      dispatchEventWith("xmppConnectionManager.getRail",false,decoded.payLoad);
                                                                                                   }
                                                                                                }
                                                                                                else
                                                                                                {
                                                                                                   f_bwt(decoded.payLoad);
                                                                                                }
                                                                                             }
                                                                                             else
                                                                                             {
                                                                                                dispatchEventWith("xmppConnectionManager.slotPlaySuccess",false,decoded.payLoad);
                                                                                             }
                                                                                          }
                                                                                          else
                                                                                          {
                                                                                             dispatchEventWith("xmppConnectionManager.deleteNotification",false,decoded.payLoad);
                                                                                          }
                                                                                       }
                                                                                       else
                                                                                       {
                                                                                          dispatchEventWith("xmppConnectionManager.archiveNotification",false,decoded.payLoad);
                                                                                       }
                                                                                    }
                                                                                    else
                                                                                    {
                                                                                       dispatchEventWith("xmppConnectionManager.getNotifications",false,decoded.payLoad);
                                                                                    }
                                                                                 }
                                                                                 else
                                                                                 {
                                                                                    dispatchEventWith("xmppConnectionManager.getNotificationCount",false,decoded.payLoad);
                                                                                 }
                                                                              }
                                                                              else
                                                                              {
                                                                                 i_gbi.instance.s_xks(decoded);
                                                                              }
                                                                           }
                                                                           else
                                                                           {
                                                                              dispatchEventWith("xmppConnectionManager.tournamentValidityCheckSuccess",false,decoded.payLoad);
                                                                           }
                                                                        }
                                                                        else
                                                                        {
                                                                           dispatchEventWith("xmppConnectionManager.tournamentUnregistrationSuccess",false,decoded.payLoad);
                                                                           GameAndTournamentData.instance.y_gvy(decoded.payLoad.trid).unregister();
                                                                           dispatchEventWith("xmppConnectionManager.tournamentUpdateReceived",false,decoded.payLoad);
                                                                           x_shy();
                                                                        }
                                                                     }
                                                                     else
                                                                     {
                                                                        dispatchEventWith("xmppConnectionManager.tournamentRegistrationSuccess",false,decoded.payLoad);
                                                                        GameAndTournamentData.instance.y_gvy(decoded.payLoad.trid).register();
                                                                        dispatchEventWith("xmppConnectionManager.tournamentUpdateReceived",false,decoded.payLoad);
                                                                        x_shy();
                                                                     }
                                                                  }
                                                                  else
                                                                  {
                                                                     d_vin.instance.tournamentRegistrations = decoded.payLoad;
                                                                     dispatchEventWith("xmppConnectionManager.tournamentRegistrationsReceived",false,decoded.payLoad);
                                                                     if(!_subscribedToTournamentFeeds)
                                                                     {
                                                                        t_ecg("tournament",2);
                                                                        t_ecg("sitngo",1);
                                                                        _subscribedToTournamentFeeds = true;
                                                                        var i:int = 0;
                                                                        while(i < _subscribedSpecificTournaments.length)
                                                                        {
                                                                           sendMessage({
                                                                              "action":"details_subscribe",
                                                                              "params":{"ids":[_subscribedSpecificTournaments[i]]}
                                                                           },_JIDs["target_jid_tournament_monitor"],_monConnectionAvail);
                                                                           i = Number(i) + 1;
                                                                        }
                                                                     }
                                                                     x_dyr.write("getregistrations-ok");
                                                                  }
                                                               }
                                                               else
                                                               {
                                                                  dispatchEventWith("xmppConnectionManager.ssoTicketReceived",false,decoded.payLoad);
                                                               }
                                                            }
                                                            else
                                                            {
                                                               dispatchEventWith("xmppConnectionManager.tournamentNamesReceived",false,decoded.payLoad);
                                                            }
                                                         }
                                                         else
                                                         {
                                                            dispatchEventWith("xmppConnectionManager.handDetailsReceived",false,decoded.payLoad);
                                                         }
                                                      }
                                                      else
                                                      {
                                                         dispatchEventWith("xmppConnectionManager.handsReceived",false,decoded.payLoad);
                                                      }
                                                   }
                                                   else
                                                   {
                                                      d_vin.instance.j_kuf = decoded.payLoad.achbadge;
                                                      dispatchEventWith("xmppConnectionManager.achievementBadgeUpdated",false,decoded.payLoad);
                                                   }
                                                }
                                                else
                                                {
                                                   d_vin.instance.l_ksj(decoded.payLoad.alias);
                                                   dispatchEventWith("xmppConnectionManager.setIdentitySuccess",false,decoded.payLoad);
                                                }
                                             }
                                             else
                                             {
                                                dispatchEventWith("xmppConnectionManager.createIdentitySuccess",false,decoded.payLoad);
                                                q_scp();
                                             }
                                          }
                                          else if(decoded.payLoad.moneyinplay != null)
                                          {
                                             d_vin.instance.w_gwk = decoded.payLoad.moneyinplay;
                                             dispatchEventWith("xmppConnectionManager.balanceChanged");
                                          }
                                       }
                                       else
                                       {
                                          dispatchEventWith("xmppConnectionManager.buyInsReceived",false,decoded.payLoad);
                                          if(_monConnectionAvail && !_monConnection.active && !Constants.y_xxs)
                                          {
                                             _monConnection.connect();
                                          }
                                          else
                                          {
                                             if(Constants.y_xxs)
                                             {
                                                h_dya();
                                                _monConnection.connect();
                                             }
                                             u_isa();
                                          }
                                       }
                                    }
                                    else
                                    {
                                       dispatchEventWith("xmppConnectionManager.deepLinkReceived",false,decoded.payLoad);
                                    }
                                 }
                                 else
                                 {
                                    d_vin.instance.t_ets(decoded.payLoad);
                                    dispatchEventWith("xmppConnectionManager.playerStatusReceived",false,decoded.payLoad);
                                 }
                              }
                              else
                              {
                                 d_vin.instance.g_lbb = decoded.payLoad;
                                 dispatchEventWith("xmppConnectionManager.bonusesReceived",false,decoded.payLoad);
                              }
                           }
                           else
                           {
                              d_vin.instance.o_iik = decoded.payLoad.flopsseen;
                              dispatchEventWith("xmppConnectionManager.flopsSeenReceived",false,decoded.payLoad);
                           }
                        }
                        else
                        {
                           d_vin.instance.m_xtn = decoded.payLoad.tourtickets;
                           GameAndTournamentData.instance.updateTournamentTickets();
                           dispatchEventWith("xmppConnectionManager.tournamentTicketsReceived",false,decoded.payLoad);
                        }
                     }
                     else
                     {
                        d_vin.instance.r_qmn = decoded.payLoad.cgtickets;
                        dispatchEventWith("xmppConnectionManager.cashTicketsReceived",false,decoded.payLoad);
                     }
                  }
                  else if(decoded.payLoad != null)
                  {
                     dispatchEventWith("xmppConnectionManager.tournamentSubscriptionSuccess",false);
                     x_dyr.write("lobby_subscribe-ok, " + decoded.payLoad.type);
                  }
                  else
                  {
                     dispatchEventWith("xmppConnectionManager.cashgameSubscriptionSuccess",false);
                  }
               }
               else
               {
                  GameAndTournamentData.instance.setCashGames(decoded.payLoad.cashgames);
                  dispatchEventWith("xmppConnectionManager.cashGamesReceived",false,decoded.payLoad.cashgames);
                  m_rqq();
                  x_dyr.write("Cash game data received");
               }
            }
            else if(decoded.tags[0] == "trnupdate")
            {
               checkUpdate = function():void
               {
                  if(getTimer() - _lastUpdate >= 1500)
                  {
                     _showTournamentDataLoader = false;
                     dispatchEventWith("xmppConnectionManager.tournamentUpdateReceived",false,decoded.payLoad);
                     dispatchEventWith("xmppConnectionManager.tournamentListUpdated");
                  }
               };
               i_gbi.instance.s_xks(decoded);
               _lastUpdate = getTimer();
               if(_firstTournamentUpdate)
               {
                  _firstTournamentUpdate = false;
                  Starling.juggler.delayCall(function():void
                  {
                     dispatchEventWith("xmppConnectionManager.tournamentUpdateReceived",false,decoded.payLoad);
                  },2);
               }
               else
               {
                  Starling.juggler.delayCall(checkUpdate,1.7);
               }
            }
            else if(decoded.tags[0] == "sngupdate")
            {
               i_gbi.instance.s_xks(decoded);
               dispatchEventWith("xmppConnectionManager.sngUpdateReceived",false,decoded.payLoad);
               if(!_allowSNGRejoin)
               {
                  _savedSNGJoinMessage = {};
               }
            }
            else if(decoded.tags[0] == "cgupdate")
            {
               GameAndTournamentData.instance.updateCashGames(decoded.payLoad);
            }
            else if(decoded.tags[0] == "mscounter" || decoded.tags[0] == "msfactors")
            {
               e_dql.instance.handleMessage(decoded);
               if(_tableConnection)
               {
                  _tableConnection.l_fpt(decoded);
               }
            }
            else if(decoded.tags[1] == "error")
            {
               try
               {
                  x_dyr.write(!!("error: [" + decoded.tags[0] + "][" + decoded.tags[1] + "]" + decoded.payLoad)?" - code: " + decoded.payLoad.errorcode + "; message: " + decoded.payLoad.message:" (no payLoad)");
               }
               catch(err:Error)
               {
                  f_bwt("[XMPPConnectionManager] - Decoded.tags[1] error: " + err.message);
               }
               var _loc5_:* = decoded.tags[0];
               if("createidentity" !== _loc5_)
               {
                  if("addPlayer" !== _loc5_)
                  {
                     if("removePlayer" !== _loc5_)
                     {
                        if("gethanddetails" !== _loc5_)
                        {
                           if("getchallenges" !== _loc5_)
                           {
                              if("discardchallenge" !== _loc5_)
                              {
                                 if("challengehistory" !== _loc5_)
                                 {
                                    if("play" !== _loc5_)
                                    {
                                       if("checkFilter" !== _loc5_)
                                       {
                                          if("joinSNGQueues" !== _loc5_)
                                          {
                                             if("gettournames" === _loc5_)
                                             {
                                                dispatchEventWith("xmppConnectionManager.tournamentNamesReceived",false,decoded.payLoad);
                                             }
                                          }
                                          else if(decoded.payLoad.errorcode == "INSUFFICIENT_FUNDS")
                                          {
                                             t_bya(decoded.payLoad.instance);
                                          }
                                          else if(_allowSNGRejoin)
                                          {
                                             t_eqj = function():void
                                             {
                                                if(_savedSNGJoinMessage && _savedSNGTargetJID)
                                                {
                                                   sendMessage(_savedSNGJoinMessage,_savedSNGTargetJID);
                                                }
                                             };
                                             Starling.juggler.delayCall(t_eqj,1);
                                          }
                                       }
                                       else if(decoded.payLoad.errorcode == "WRONG_PASSWORD")
                                       {
                                          dispatchEventWith("xmppConnectionManager.tournamentPasswordWrong",false,decoded.payLoad);
                                       }
                                       else
                                       {
                                          dispatchEventWith("xmppConnectionManager.tournamentValidityCheckError",false,decoded.payLoad);
                                       }
                                    }
                                    else
                                    {
                                       dispatchEventWith("xmppConnectionManager.slotPlayError",false,decoded.payLoad);
                                    }
                                 }
                                 else
                                 {
                                    dispatchEventWith("xmppConnectionManager.challengehistoryReceived");
                                 }
                              }
                              else
                              {
                                 dispatchEventWith("xmppConnectionManager.discardchallengeReceived");
                              }
                           }
                           else
                           {
                              dispatchEventWith("xmppConnectionManager.challengesReceived");
                           }
                        }
                        else
                        {
                           dispatchEventWith("xmppConnectionManager.handDetailsReceived",false,decoded.payLoad);
                        }
                     }
                     else
                     {
                        dispatchEventWith("xmppConnectionManager.tournamentRegistrationError",false,decoded.payLoad);
                     }
                  }
                  else
                  {
                     dispatchEventWith("xmppConnectionManager.tournamentRegistrationError",false,decoded.payLoad);
                  }
               }
               else
               {
                  dispatchEventWith("xmppConnectionManager.createIdentityFailed",false,decoded.payLoad);
               }
               if("errorcode" in decoded.payLoad && decoded.payLoad.errorcode == "REGISTRATION_CLOSED")
               {
                  if(decoded.payLoad.trid)
                  {
                     GameAndTournamentData.instance.y_gvy(decoded.payLoad.trid).registrationEnded = true;
                  }
                  else if(decoded.payLoad.sngid)
                  {
                     GameAndTournamentData.instance.y_gvy(decoded.payLoad.sngid).registrationEnded = true;
                  }
                  dispatchEventWith("xmppConnectionManager.tournamentUpdateReceived",false,decoded.payLoad);
               }
            }
            if(decoded && "payLoad" in decoded && "errorcode" in decoded.payLoad && decoded.payLoad.errorcode == "PLAYER_DISABLED")
            {
               x_okk();
            }
            if(decoded.tags[0] == "hello")
            {
               b_hds(e.data.from.bareJID,decoded.payLoad.instance + "-table");
               i_gbi.instance.s_xks(decoded);
            }
            if(decoded.tags[0] == "hello" && decoded.tags[1] == "init" && false)
            {
               if(decoded.payLoad.refid)
               {
                  if(decoded.payLoad.sngid)
                  {
                     z_ftr(decoded.payLoad.refid);
                  }
                  if(Root.e_bdt && Root.e_bdt.tournamentData)
                  {
                     Root.e_bdt.tournamentData.sngTournamentId = decoded.payLoad.refid;
                  }
               }
               else if(Root.e_bdt && Root.e_bdt.tournamentData)
               {
                  Root.e_bdt.tournamentData.sngTournamentId = -1;
               }
            }
         }
         if(decoded.payLoad && !(decoded.payLoad is Number))
         {
            if(decoded.payLoad.instance || decoded.payLoad.p && decoded.payLoad.p[0])
            {
               if(decoded.tags[0] == "notify" && decoded.tags[1] == "sngqueue")
               {
                  b_hds(e.data.from.bareJID,decoded.payLoad.instance + "-table");
                  i_gbi.instance.s_xks(decoded);
               }
               var instance:String = !!decoded.payLoad.instance?decoded.payLoad.instance:decoded.payLoad.p[0];
               decoded.from = e.data.from.bareJID;
               if(_tableConnection)
               {
                  _tableConnection.p_ymi(instance,decoded);
               }
               z_pjv.record(instance,e.data.body);
               x_xdz.instance.update(decoded);
               if(decoded.tags[0] == "pturn" && decoded.payLoad.p[1] == decoded.payLoad.d[0])
               {
                  dispatchEventWith("xmppConnectionManager.gameUserTurn",false,{"instanceId":decoded.payLoad.p[0]});
               }
               else if(decoded.tags[0] == "act" && decoded.payLoad.p[1] == decoded.payLoad.d[0])
               {
                  dispatchEventWith("xmppConnectionManager.gameUserAct",false,{"instanceId":decoded.payLoad.p[0]});
               }
               else if(decoded.tags[0] == "select" && decoded.tags[1] == "buyin")
               {
                  dispatchEventWith("xmppConnectionManager.gameUserTurn",false,{"instanceId":decoded.payLoad.instance});
               }
               else if(decoded.tags[0] == "buyin" && decoded.tags[1] == "ok")
               {
                  dispatchEventWith("xmppConnectionManager.gameUserAct",false,{"instanceId":decoded.payLoad.instance});
               }
            }
            else if(decoded.tags[0] == "notify")
            {
               if(_tableConnection)
               {
                  _tableConnection.l_fpt(decoded);
               }
               i_qdz.instance.s_xks(decoded.payLoad);
               if(decoded.tags[1] == "balance")
               {
                  if(decoded.payLoad.balance is Number)
                  {
                     d_vin.instance.balance = decoded.payLoad.balance;
                  }
                  else
                  {
                     d_vin.instance.balance = decoded.payLoad.balance.eur;
                     var eur:Number = decoded.payLoad.balance.eur;
                     var ntv:Number = decoded.payLoad.balance.native;
                     var rate:Number = int(ntv / eur * 100000) / 100000;
                     d_vin.instance.exchangeRate = rate;
                     m_vuc.instance.g_qwf("currencyExchangeRate",rate);
                  }
                  dispatchEventWith("xmppConnectionManager.balanceChanged",false,decoded.payLoad);
               }
               else if(decoded.tags[1] == "ach")
               {
                  if(d_vin.instance.l_asy)
                  {
                     d_vin.instance.e_lbz(decoded.payLoad.props.achid,"achieved");
                     EventProxy.instance.dispatchEventWith("updateAchievements");
                  }
                  else
                  {
                     p_krj();
                  }
               }
               if(decoded.tags[1] != "balance" && decoded.tags[1] != "sitouttime" && decoded.tags[1] != "tour")
               {
                  dispatchEventWith("xmppConnectionManager.notificationReceived",false,decoded);
               }
               if(decoded.payLoad.msg)
               {
                  if(decoded.payLoad.msg[0] == 9 || decoded.payLoad.msg[0] == 11 || decoded.payLoad.msg[0] == 15)
                  {
                     j_dja();
                  }
                  else if(decoded.payLoad.msg[0] == 16 || decoded.payLoad.msg[0] == 19)
                  {
                     u_whk();
                  }
                  else if(decoded.payLoad.msg[0] == 17 || decoded.payLoad.msg[0] == 27)
                  {
                     x_shy();
                  }
                  if(decoded.payLoad.msg[0] == 5)
                  {
                     q_scp();
                  }
               }
            }
            else if(decoded.tags[0] == "getmissions")
            {
               o_maz.instance.l_kym(decoded.payLoad);
               _tableConnection.l_fpt(decoded);
            }
            else if(decoded.tags[0] == "besthandspromo")
            {
               p_kmb.instance.n_mbf(decoded.payLoad);
               _tableConnection.l_fpt(decoded);
            }
            else if(decoded.tags[0] == "getachievements")
            {
               d_vin.instance.l_asy = decoded.payLoad.latest;
               dispatchEventWith("xmppConnectionManager.achievementsReceived");
            }
            else if(decoded.tags[0] == "getchallenges")
            {
               dispatchEventWith("xmppConnectionManager.challengesReceived",false,decoded.payLoad);
            }
            else if(decoded.tags[0] == "discardchallenge")
            {
               dispatchEventWith("xmppConnectionManager.discardchallengeReceived",false,decoded.payLoad);
            }
            else if(decoded.tags[0] == "challengehistory")
            {
               dispatchEventWith("xmppConnectionManager.challengehistoryReceived",false,decoded.payLoad);
            }
            else if(decoded.tags[0] == "getspins")
            {
               q_xph.instance.f_nac = decoded.payLoad;
            }
            else if(decoded.tags[0] == "slotupdate")
            {
               q_xph.instance.k_rho = decoded.payLoad;
            }
            else if(decoded.tags[0] == "getflopsneeded")
            {
               q_xph.instance.n_smu = decoded.payLoad;
            }
         }
         if(decoded.tags[0] == "trnupdate")
         {
            if(_tableConnection)
            {
               if(decoded.payLoad.tournamentType == "SNG")
               {
                  f_bwt("[XMPPConnectionManager] - SNG trnupdate message received:");
                  f_bwt(JSON.stringify(decoded));
                  _tableConnection.l_roq(decoded.payLoad.trid,decoded);
               }
               else
               {
                  _tableConnection.v_ljh(decoded.payLoad.trid,"MTT");
               }
            }
         }
         else if(decoded.tags[0] == "sngupdate" && decoded.tags[1] == "rules")
         {
            if(_tableConnection)
            {
               _tableConnection.v_ljh(decoded.payLoad.sngid,"SNG");
            }
         }
         if(decoded.tags[1] == "removed")
         {
            f_bwt("[XMPPConnectionManager] - Tournament " + decoded.payLoad.trid + " was removed.");
            f_bwt(e.data.body);
         }
      }
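
For contrast: the entire pyramid is really just a lookup from a message tag to an event name, and could collapse into a dispatch table. A minimal sketch in TypeScript (the decompiled ActionScript class isn't reusable, so the `dispatch` wrapper and its types are hypothetical; the tag/event pairs are taken from the branches above):

```typescript
// A sketch only: handler table keyed on decoded.tags[0], replacing one
// indentation level per message type with one table entry per message type.
type Handler = (payLoad: any) => void;
type Emitter = (name: string, bubbles: boolean, data?: any) => void;

function dispatch(dispatchEventWith: Emitter) {
  // Tag -> event pairs lifted from the nested branches above.
  const handlers: Record<string, Handler> = {
    gettournames:   p => dispatchEventWith("xmppConnectionManager.tournamentNamesReceived", false, p),
    getchallenges:  p => dispatchEventWith("xmppConnectionManager.challengesReceived", false, p),
    gethanddetails: p => dispatchEventWith("xmppConnectionManager.handDetailsReceived", false, p),
    // ...one entry per remaining tag
  };

  return (decoded: { tags: string[]; payLoad: any }): void => {
    const handler = handlers[decoded.tags[0]];
    if (handler) {
      handler(decoded.payLoad);
    }
  };
}
```

Each new message type then costs one line, not one more level of indentation.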
      
      

CryptogramSomeone Is Learning How to Take Down the Internet

Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don't know who is doing this, but it feels like a large nation state. China or Russia would be my first guesses.

First, a little background. If you want to take a network off the Internet, the easiest way to do it is with a distributed denial-of-service attack (DDoS). Like the name says, this is an attack designed to prevent legitimate users from getting to the site. There are subtleties, but basically it means blasting so much data at the site that it's overwhelmed. These attacks are not new: hackers do this to sites they don't like, and criminals have done it as a method of extortion. There is an entire industry, with an arsenal of technologies, devoted to DDoS defense. But largely it's a matter of bandwidth. If the attacker has a bigger fire hose of data than the defender has, the attacker wins.

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they're used to seeing. They last longer. They're more sophisticated. And they look like probing. One week, the attack would start at a particular level of attack and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, along those lines, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company's total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they've got to defend themselves. They can't hold anything back. They're forced to demonstrate their defense capabilities for the attacker.

I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registry for many popular top-level Internet domains, like .com and .net. If it goes down, there's a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn't have the level of detail I heard from the companies I spoke with, the trends are the same: "in Q2 2016, attacks continued to become more frequent, persistent, and complex."

There's more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

Who would do this? It doesn't seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It's not normal for companies to do that. Furthermore, the size and scale of these probes -- and especially their persistence -- points to state actors. It feels like a nation's military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US's Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.

What can we do about this? Nothing, really. We don't know where the attacks come from. The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it's possible to disguise the country of origin for these sorts of attacks. The NSA, which has more surveillance in the Internet backbone than everyone else combined, probably has a better idea, but unless the US decides to make an international incident over this, we won't see any attribution.

But this is happening. And people should know.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: Slashdot thread.

EDITED TO ADD (9/15): Podcast with me on the topic.

CryptogramApple's Cloud Key Vault

Ever since Ivan Krstić, Apple's Head of Security Engineering and Architecture, presented the company's key backup technology at Black Hat 2016, people have been pointing to it as evidence that the company can create a secure backdoor for law enforcement.

It's not. Matthew Green and Steve Bellovin have both explained why not. And the same group of us that wrote the "Keys Under Doormats" paper on why backdoors are a bad idea have also explained why Apple's technology does not enable it to build secure backdoors for law enforcement. Michael Specter did the bulk of the writing.

The problem with Tait's argument becomes clearer when you actually try to turn Apple's Cloud Key Vault into an exceptional access mechanism. In that case, Apple would have to replace the HSM with one that accepts an additional message from Apple or the FBI -- or an agency from any of the 100+ countries where Apple sells iPhones -- saying "OK, decrypt," as well as the user's password. In order to do this securely, these messages would have to be cryptographically signed with a second set of keys, which would then have to be used as often as law enforcement access is required. Any exceptional access scheme made from this system would have to have an additional set of keys to ensure authorized use of the law enforcement access credentials.

Managing access by a hundred-plus countries is impractical due to mutual mistrust, so Apple would be stuck with keeping a second signing key (or database of second signing keys) for signing these messages that must be accessed for each and every law enforcement agency. This puts us back at the situation where Apple needs to protect another repeatedly-used, high-value public key infrastructure: an equivalent situation to what has already resulted in the theft of Bitcoin wallets, RealTek's code signing keys, and Certificate Authority failures, among many other disasters.

Repeated access of private keys drastically increases their probability of theft, loss, or inappropriate use. Apple's Cloud Key Vault does not have any Apple-owned private key, and therefore does not indicate that a secure solution to this problem actually exists.

It is worth noting that the exceptional access schemes one can create from Apple's CKV (like the one outlined above) inherently entail the precise issues we warned about in our previous essay on the danger signs for recognizing flawed exceptional access systems. Additionally, the Risks of Key Escrow and Keys Under Doormats papers describe further technical and nontechnical issues with exceptional access schemes that must be addressed. Among the nontechnical hurdles would be the requirement, for example, that Apple run a large legal office to confirm that requests for access from the government of Uzbekistan actually involved a device that was located in that country, and that the request was consistent with both US law and Uzbek law.

My colleagues and I do not argue that the technical community doesn't know how to store high-value encryption keys -- to the contrary, that's the whole point of an HSM. Rather, we assert that holding on to keys in a safe way such that any other party (i.e. law enforcement or Apple itself) can also access them repeatedly without high potential for catastrophic loss is impossible with today's technology, and that any scheme running into fundamental sociotechnical challenges such as jurisdiction must be evaluated honestly before any technical implementation is considered.

,

Krebs on SecurityAdobe, Microsoft Push Critical Updates

Adobe and Microsoft on Tuesday each issued updates to fix multiple critical security vulnerabilities in their software. Adobe pushed a patch that addresses 29 security holes in its widely-used Flash Player browser plug-in. Microsoft released some 14 patch bundles to correct at least 50 flaws in Windows and associated software, including a zero-day bug in Internet Explorer.

Half of the updates Microsoft released Tuesday earned the company’s most dire “critical” rating, meaning they could be exploited by malware or miscreants to install malicious software with no help from the user, save for maybe just visiting a hacked or booby-trapped Web site. Security firms Qualys and Shavlik have more granular writeups on the Microsoft patches.

Adobe’s advisory for this Flash Update is here. It brings Flash to v. 23.0.0.162 for Windows and Mac users. If you have Flash installed, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually check for updates in Chrome and restart the browser to get the latest Flash version).

As always, if you run into any issues installing any of these updates, please feel free to leave a comment about your experience below.

Sociological ImagesUnderstanding Latinos For Trump

As the 2016 presidential campaign enters the final stretch, Donald Trump has doubled down on his hard-line stance on immigration. In his August 31st immigration policy speech, Trump proposed implementing extreme vetting and employing a deportation force, and opposed amnesty for more than 11 million undocumented immigrants already in the U.S. Polling by Latino Decisions, a leader in Latino political opinion research, indicates Trump’s current poll numbers among Latinos have slipped to 19%. However, given Trump’s proposed policies and charged rhetoric against Latinos, it might seem perplexing that even that many Latinos still support Trump.

Recently on MSNBC’s All in With Chris Hayes, Joy Reid asked Latinos for Trump co-founder Marco Gutierrez whether Trump’s immigration policies would fundamentally drive Latinos away from the Republican party. Gutierrez replied that Trump’s message was “tough” but necessary; asked to clarify, he responded with the comment that immediately spawned a new internet meme:

My [Mexican] culture is a very dominant culture. And it’s imposing. And it’s causing problems. If you don’t do something about it, you’re gonna have taco trucks every corner.

Gutierrez defended his assessment, saying “you guys defend a Mexico that doesn’t exist anymore. There is a new Mexico that’s rising with crime and we need to stop that. And that stops right here [in America].”

His comments illustrate important concepts related to the psychology of ethnic identity. First, people differ in how strongly they affiliate with their Mexican or Latino identity; some feel more strongly identified and others less so. Second, Latinos in the U.S. navigate two cultural identities: their ethnic identity and their American identity. And these identity differences are linked to political ideology.

My co-authors and I asked 323 U.S.-born Mexican Americans about their political ideology and socioeconomic status, the strength of their identification with Mexican and American cultures, and their attitudes toward acculturating to American culture. Those who strongly identified with Mexican culture were more likely to support the integration of both their Mexican and American identities into one unified identity, such as maintaining their own cultural traditions while also adapting to Anglo-American customs. These leaned more liberal. In contrast, those who held weak Mexican identification were more likely to support full assimilation to American culture. These were more moderate or conservative in their ideologies.

Their socioeconomic status also influenced their political ideology. Those with higher socioeconomic status were significantly less liberal, but this was most true for those participants who both belonged to higher social classes and had the weakest identification with Mexican culture.

This may explain why some Latinos aren’t put off by Trump’s anti-immigrant rhetoric. Latinos who support Trump may feel less strongly identified with their ethnic culture and have a stronger desire to identify with American culture. They probably also believe that other Latinos should assimilate fully into American culture and minimize ties or connections to their heritage culture. These beliefs comport with Trump’s message that immigrants need to “successfully assimilate” in order to join our country.

Given that Mexican Americans with a strong ethnic identification were more likely to be liberal and support biculturalism over assimilation attitudes, it’s quite unlikely that Trump will be successful in winning over many Latino constituents who don’t already support him. In fact, being photographed eating taco salad and exclaiming “I love Hispanics!” could backfire with conservative Latinos who do support him because that type of appeal makes salient a cultural identity that is unimportant to them, or worse, lumps them into a cultural group they have actively sought to minimize.

Laura P. Naumann, PhD is a personality psychologist who teaches in the Department of Social Sciences at Nevada State College. Her research interests include the expression and perception of personality as well as individual differences in racial/ethnic identity development. You can learn more about her here.

(View original at https://thesocietypages.org/socimages)

Planet DebianMike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.1)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Tuesday, Sep 13th, version 3.5.99.1 of nx-libs has been released [1].

This release brings some code cleanups regarding displayed copyright information and an improvement to reconnecting to an already running session from an X11 server whose color depth setup differs from that of the X11 server where the NX/X11 session was originally created. Furthermore, an issue reported to the X2Go developers has been fixed that caused problems for Windows clients during copy+paste actions between the NX/X11 session and the underlying MS Windows system. For details see X2Go BTS, Bug #952 [3].

Change Log

A list of recent changes (since 3.5.99.0) can be obtained from here.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

References

Google AdsenseAdSense help, when and where you need it

Whether you need help urgently or just want to learn, AdSense offers several ways to get help when you need it. In this post we’ll share the different ways we offer support to our AdSense partners.

Did you know you can get help on any AdSense issue from within your AdSense account using the help widget? You can find the help widget by clicking on the Help button on the upper right corner of your AdSense account. This will take you directly to informative articles related to the topic or issue you provide.

We hope that this widget will help solve your problems directly within your AdSense account, eliminating the need to switch back and forth between tasks.

Additionally, if you consistently earn more than $25 per week (or the local equivalent), you may be eligible to email the AdSense support team. If you don’t meet the earnings threshold, you can still get help through the issue-based troubleshooters and other resources in the AdSense Help Center.

The AdSense support team is here to help so you can continue to focus on creating amazing content for your audience. Use the support resources noted above when you require assistance, and let us know on Twitter or Google+ how we can improve your support experience.

Posted by Melina Lopez, from the AdSense team

Worse Than FailureLearning to Share

Maintenance programming is an important job in nearly any software shop. As developers move on to other positions or companies, the projects they leave behind still need someone to take care of them, to fix bugs or implement customer requests. Often, these products have been cobbled together by a variety of programmers who no longer work for the company, many of whom had only a loose understanding of the product (or even programming in general).

Martin was one such maintenance programmer. After being hired, management quickly realized he had a knack for digging into old systems and figuring them out well enough to update them, which often meant a full rewrite to make everything consistent and sane.

One such system that quickly fell into his lap was essentially a management appliance, a Linux virtual machine (VM) prepackaged with a web-based management interface to control the system. The web application worked well enough but the test suite had…trouble.

The tests used Selenium to deploy a fresh VM and perform some pre-defined actions on the user interface. Most of the test suite was written by a former employee named Jackson who, as far as Martin could tell from his notes and source control commit messages, had very odd assumptions about how things worked, especially involving concurrency.

The test suite had some serious performance issues, as well as a ton of inexplicably failing test cases. The system did not scale up as more VMs were deployed, at all, and Martin uncovered the scary truth that Jackson had wrapped everything in synchronization primitives and routed all actions through a global singleton which stored all state for all VMs. Only one test operation at a time was supported, across all test VMs, forcing them to queue up and run sequentially.

Seeing how all test state was stored in a global singleton, Martin realized that a huge number of the test suite’s failures had to do with leaky state. One test VM would set some state, then give up its lock between tests, providing a small window for another VM to grab the lock and then fail because the state wasn’t valid for that specific test.

He asked around the office to see if anyone knew more about the test system, and though nobody knew the specifics, his coworkers did recall that Jackson was hugely concerned that state would leak between test VMs and cause problems and had spent most of his time designing the system to avoid that. So Martin started reviewing source control history and commit messages, and found that Jackson was ignorant of anything beyond basic programming. Somehow, he believed the singleton would prevent state from being shared. Commit messages spelled it out: “Used a singleton to avoid shared state for concurrency.”

And so Martin spent a few months improving the system by removing the singleton and mutexes, and generally cleaning up the tests’ code. During testing, Jackson’s shared state woes never surfaced, and when Martin was finished the test suite scaled very well by the number of VMs. Most of the spurious test failures simply disappeared and the entire suite ran in a fraction of the time.

A sign promoting sharing

And now Martin understood why Jackson was no longer with the company. His solution for dealing with concurrency problems from “potential” shared state was to rewrite the framework to use “assuredly” shared state.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

Planet DebianJohn Goerzen: Two Boys, An Airplane, Plus Hundreds of Old Computers

“Was there anything you didn’t like about our trip?”

Jacob’s answer: “That we had to leave so soon!”

That’s always a good sign.

When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn’t really think my boys would be interested.

But I mentioned it to them, and they just lit up. They remembered the Youtube videos I’d shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too — and it was.

It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time.

IMG_20160911_061456

(Jacob modeling his new t-shirt)

Captain Jacob

Chicago being not all that close to Kansas, I planned to fly us there. If you’re flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way.

Here’s Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing the calculations for us. We measured the entire route and came to within one mile of the computer’s calculation for each segment — those boys are precise!

20160904_175519

We figured out how much fuel we’d use, where we’d make fuel stops, etc.

The day of our flight, we made it as far as Davenport, Iowa when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure!

Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: “DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING”

Here he is at the Davenport airport, still busy looking at his maps:

IMG_20160909_183813

Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together.

Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me – a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers

Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this old Commodore plotter do its thing (click for video):

VID_20160910_142044

His other favorite thing was the phones. Several people had brought complete analog PBXs with them. They used them to demonstrate various old phone-related hardware; one had several BBSs running with actual modems, another had old answering machines and home-security devices. Jacob learned a lot about phones, including how to operate a rotary-dial phone, which he’d never used before!

IMG_20160910_151431

Oliver was drawn more to the old computers. He was fascinated by the IBM PC XT, which I explained was just about like a model I used to get to use sometimes. They learned about floppy disks and how computers store information.

IMG_20160910_195145

He hadn’t used joysticks much, and found Pong (“this is a soccer game!”) interesting. Somebody had also replaced the guts of a TRS-80 with a Raspberry Pi running a SNES emulator. This had thoroughly confused me for a little while, and excited Oliver.

Jacob enjoyed an old TRS-80, which, through a modern Ethernet interface and a little computation help in AWS, provided an interface to Wikipedia. Jacob figured out the text-mode interface quickly. Here he is reading up on trains.

IMG_20160910_140524

I had no idea that Commodore made a lot of adding machines and calculators before they got into the home computer business. There was a vast table with that older Commodore hardware, too much to get on a single photo. But some of the adding machines had their covers off, so the boys got to see all the little gears and wheels and learn how an adding machine can do its printing.

IMG_20160910_145911

And then we get to my favorite: the big iron. Here is a VAX — a working VAX. When you have a computer that huge, it’s easier for the kids to understand just what something is.

IMG_20160910_125451

When we encountered the table from the Glenside Color Computer Club, featuring the good old CoCo IIs like what I used as a kid (and have up in my attic), I pointed out to the boys that “we have a computer just like this that can do these things” — and they responded “wow!” I think they are eager to try out floppy disks and disk BASIC now.

Some of my favorites were the old Unix systems, which are a direct ancestor to what I’ve been working with for decades now. Here’s AT&T System V release 3 running on its original hardware:

IMG_20160910_144923

And there were a couple of Sun workstations there, making me nostalgic for my college days. If memory serves, this one is actually running on m68k in the pre-Sparc days:

IMG_20160910_153418

Returning home

After all the excitement of the weekend, both boys zonked out for awhile on the flight back home. Here’s Jacob, sleeping with his maps still up.

IMG_20160911_132952

As we were nearly home, we hit a pocket of turbulence, the kind that feels as if the plane is dropping a bit (it’s perfectly normal and safe; you’ve probably felt that on commercial flights too). I was a bit concerned about Oliver; he is known to get motion sick in cars (and even planes sometimes). But what did I hear from Oliver?

“Whee! That was fun! It felt like a roller coaster! Do it again, dad!”

Krebs on SecuritySecret Service Warns of ‘Periscope’ Skimmers

The U.S. Secret Service is warning banks and ATM owners about a new technological advance in cash machine skimming known as “periscope skimming,” which involves a specialized skimming probe that connects directly to the ATM’s internal circuit board to steal card data.

At left, the skimming control device. Pictured right is the skimming control device with wires protruding from the periscope.

At left, the skimming control device. Pictured right is the skimming control device with wires protruding from the periscope. These were recovered from a cash machine in Connecticut.

According to a non-public alert released to bank industry sources by a financial crimes task force in Connecticut, this is thought to be the first time periscope skimming devices have been detected in the United States. The task force warned that the devices may have the capability to remain powered within the ATM for up to 14 days and can store up to 32,000 card numbers before exhausting the skimmer’s battery strength and data storage capacity.

The alert documents the first known case of periscope skimming in the United States, discovered Aug. 19, 2016 at an ATM in Greenwich, Conn. A second periscope skimmer was reportedly found hidden inside a cash machine in Pennsylvania on Sept. 3.

The periscope device.

The periscope device.

The task force alert notes that in both cases the crooks were able to gain direct access to the insides of the ATMs (referred to as “top-hat” entry) with a key. The suspects then installed two devices connected together by wiring. The first device — the periscope skimming probe — is installed through a pre-existing hole on the frame of the motorized card reader.

The probe is set in place to connect to the circuit board and directly onto the pad that transfers cardholder data stored on the magnetic stripe on the backs of customer payment cards. The probe is then held in place with fast-drying superglue to the card reader frame.

According to the Secret Service, the only visible part of this skimming device once the top-hat is opened will be the wire extending from the periscope probe that leads to the second part of this skimmer — called a “skimming control device.” This second device contains the battery source and data storage unit, and looks similar to a small external hard drive.

As I’ve noted in previous stories in my series All About Skimmers, the emergence of this type of skimming attack is thought to be a response to the widespread availability of third-party anti-skimming technology which is successful at preventing the operation of a traditional skimmer placed on the outside of the ATM.

The Connecticut task force notes that authorities there did not find hidden cameras or other methods of capturing customer PINs at the ATMs compromised by periscope skimmers, suggesting these attacks involved mere prototypes and that the thieves responsible are in the process of refining their technology.

Nevertheless, crooks who are serious about this type of crime eventually will want to capture your PIN so they can later drain your debit account at another ATM. So it’s important to remember that covering the PIN pad with your hand defeats the hidden camera from capturing your PIN. Occasionally, skimmer thieves will use PIN pad overlays, but these are comparatively rare and quite a bit more expensive; hidden cameras are used on the vast majority of the more than three dozen ATM skimming incidents that I’ve documented here.

The periscope skimming device found at an ATM in Pennsylvania.

Another periscope skimming device found at an ATM in Pennsylvania.

Shockingly, few people bother to take this simple, effective step, as detailed in this skimmer tale from 2012, wherein I obtained hours worth of video seized from two ATM skimming operations and saw customer after customer walk up, insert their cards and punch in their digits — all in the clear.

Many readers have asked whether the incidence of such skimming scams will decrease as more banks begin issuing more secure chip-based payment cards. The answer is probably not. That’s because even after most U.S. banks put in place chip-capable ATMs, the magnetic stripe will still be needed because it’s an integral part of the way ATMs work: Most ATMs in use today require a magnetic stripe for the card to be accepted into the machine.

The principal reason for this is to ensure that customers are putting the card into the slot correctly, as embossed letters and numbers running across odd spots in the card reader can take their toll on the machines over time. As long as the cardholder’s data remains stored on a chip card’s magnetic stripe, thieves will continue building and placing these types of skimmers.

Also, the thieves conducting these periscope skimming attacks don’t necessarily need a key to access the ATMs. As I’ve noted in past skimming stories, crooks who specialize in compromising ATMs with malicious software often target stand-alone cash machines that may be easier to access from the top-hat. My advice? Stick to ATMs that are installed in the wall at a bank or otherwise not exposed from the top.

Most importantly, watch out for your own physical safety while using an ATM. Keep your wits about you as you transact in and leave the area, and try to be keenly aware of your immediate surroundings. Use only machines in public, well-lit areas, and avoid ATMs in secluded spots.

CryptogramLeaked Stingray Manuals

The Intercept has published the manuals for Harris Corporation's IMSI catcher: Stingray. It's an impressive surveillance device.

Planet DebianDirk Eddelbuettel: anytime 0.0.1: New package for 'anything' to POSIXct (or Date)

anytime just arrived on CRAN as a very first release 0.0.1.

So why (yet another) package dealing with dates and times? R excels at computing with dates and times. By using a typed representation we not only get all that functionality but also the added safety stemming from a proper representation.

But there is a small nuisance cost: How often have we each told as.POSIXct() that the origin is epoch '1970-01-01'? Do we have to say it a million more times? Similarly, when parsing dates that are in some recognisable form of the YYYYMMDD format, do we really have to manually convert from integer or numeric or factor or ordered to character first? Having one of several common separators and/or date / time month forms (YYYY-MM-DD, YYYY/MM/DD, YYYYMMDD, YYYY-mon-DD and so on, with or without times, with or without textual months and so on), do we really need a format string?

anytime() aims to help as a small general purpose converter returning a proper POSIXct (or Date) object no matter the input (provided it was somewhat parseable), relying on Boost date_time for the (efficient, performant) conversion.

See some examples on the anytime page or the GitHub README.md, or in the screenshot below. And then just give it try!

anytime examples

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureCodeSOD: unstd::toupper

C++ is a language with a… checkered past. It’s grown, matured and changed over the decades, and modern C++ looks very little like the C++ of yesteryear. Standard libraries have grown and improved- these days, std feels nearly as big and complicated as parts of Java’s class library.

One useful function is std::toupper. Given a char, it will turn that char into an upper-case version, in a locale-aware fashion. What if you want to turn an entire string to upper-case?

You might be tempted to use a function like std::transform, which is C++’s version of “map”. It alters the string in-place, turning it into an upper-cased version. With a single line of code, you could easily convert strings to upper-case.

Or, you could do what Tomek’s highly-paid consultant did.

std::string toupper(std::string val)
{
    std::string out;
    if (val.empty())
        return "";
    std::for_each(val.begin(), val.end(), std::toupper);
    return out;
}

Like a true highly-paid consultant, the developer knew that programmer time is more expensive than memory or CPU time, so instead of wasting keystrokes passing the input as a const-reference, they passed by value. Sure, that means every time this function is called, the string must be copied in its entirety, but think of the developer productivity gains!

It’s always important to be a defensive programmer, so in true consultant fashion, we’ll check to ensure that the input string isn’t already empty. Of course, since we manipulate the string with std::for_each, we don’t actually need that check, but it’s better to be explicit.

Speaking of for_each, it has one advantage over transform- it won’t modify the string in place. In fact, it won’t modify the string at all, at least as written here. Everyone knows immutable objects cut down on many common bugs, so this is an excellent design choice.

And finally, they return out, the string variable declared at the top of the function, and never initialized. This, of course, is because while your requirements said you needed to turn strings into fully upper-case versions, you don’t actually need that. This is a better solution that’s more suited to your business needs.

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 140 work hours have been dispatched among 10 paid contributors. Their reports are available:

  • Balint Reczey did 9.5 hours (out of 14.75 hours allocated + 2 remaining, thus keeping 7.25 extra hours for September).
  • Ben Hutchings did 14 hours (out of 14.75 hours allocated + 0.7 remaining, keeping 1.45 extra hours for September).
  • Brian May did 14.75 hours.
  • Chris Lamb did 15 hours (out of 14.75 hours, thus keeping 0.45 hours for next month).
  • Emilio Pozuelo Monfort did 13.5 hours (out of 14.75 hours allocated + 0.5 remaining, thus keeping 2.95 extra hours for September).
  • Guido Günther did 9 hours.
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 15.2 hours (out of 14.5 hours assigned + 0.7 remaining).
  • Roberto C. Sanchez did 11 hours (out of 14.75h allocated, thus keeping 3.75 extra hours for September).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours rose to 167 hours per month thanks to UR Communications BV joining as gold sponsor (funding 1 day of work per month)!

In practice, we never distributed this amount of work per month because some sponsors did not renew in time and some of them might not even be able to renew at all.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 29. It’s a small bump compared to last month but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.


CryptogramTalk by the Former Head of French SIGINT

The former head of French SIGINT gave a talk (removed from YouTube) where he talked about a lot of things he probably shouldn't have.

If anyone has 1) a transcript of the talk, or 2) can read the French articles better than I can, I would appreciate details.

EDITED TO ADD (9/13): Better link to the video. Improved translation of the Le Monde article. Summary of points from the first article. English article about the talk.

CryptogramMalware Infects Network Hard Drives

The malware "Mal/Miner-C" infects Internet-exposed Seagate Central Network Attached Storage (NAS) devices, and from there takes over connected computers to mine for cryptocurrency. About 77% of these drives have been infected.

Slashdot thread.

EDITED TO ADD (9/13): More news.

Planet DebianJoey Hess: PoW bucket bloom: throttling anonymous clients with proof of work, token buckets, and bloom filters

An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files?

The last thing I want is a git-annex keysafe special remote. ;-)

I've done a mash-up of three technologies to solve this, that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.


Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.
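In code, checking such a proof boils down to counting leading zero bits. Here is a sketch of that shape; keysafe actually uses argon2 for the hash, while this illustration substitutes std::hash (which is not cryptographic), and all the names are made up:

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Count how many leading zero bits the salted hash of `value` has.
// std::hash is only a stand-in for a real memory-hard hash here.
int leading_zero_bits(const std::string& salt, const std::string& value)
{
    std::uint64_t h = std::hash<std::string>{}(salt + value);
    int zeros = 0;
    for (int bit = 63; bit >= 0 && !((h >> bit) & 1); --bit)
        ++zeros;
    return zeros;
}

// A proof of work is valid for a given difficulty if the salted
// hash starts with at least that many zero bits.
bool valid_pow(const std::string& salt, const std::string& value, int difficulty)
{
    return leading_zero_bits(salt, value) >= difficulty;
}
```

A client searches for a `value` that passes the check; each extra zero bit demanded doubles the expected search time, which is what lets the server dial the cost up and down.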

The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed.

Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At the worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.

If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker that is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.
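
A single bucket from such a list can be sketched like this (the names and the explicit clock parameter are mine, chosen for testability; keysafe's real implementation differs):

```cpp
#include <algorithm>

// A minimal token bucket: `capacity` tokens, refilled at `rate`
// tokens per second. The current time is passed in explicitly so
// the refill logic is deterministic and easy to test.
struct TokenBucket {
    double capacity;
    double rate;        // tokens per second
    double tokens;
    double last_time;   // seconds

    bool allow(double now)
    {
        // Refill proportionally to elapsed time, capped at capacity.
        tokens = std::min(capacity, tokens + (now - last_time) * rate);
        last_time = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;  // spend one token on this request
            return true;
        }
        return false;
    }
};
```

The server would keep an array of these, with each successive bucket gated behind a higher proof-of-work difficulty.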


So far so simple really, but this has a big problem: What prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, and flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.

Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.
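
A bloom filter itself is just a bit array plus a few hash probes. This toy version (tiny size, std::hash simulating a hash family, all names invented) shows the add/lookup shape:

```cpp
#include <bitset>
#include <cstddef>
#include <functional>
#include <string>

// A toy bloom filter over request IDs. The k hash functions are
// simulated by salting a single hash with the probe index.
struct BloomFilter {
    static constexpr std::size_t kBits = 1 << 16;
    std::bitset<kBits> bits;

    static std::size_t slot(const std::string& id, int probe)
    {
        return std::hash<std::string>{}(std::to_string(probe) + ":" + id) % kBits;
    }

    void add(const std::string& id)
    {
        for (int probe = 0; probe < 4; ++probe)
            bits.set(slot(id, probe));
    }

    // May return a false positive, but never a false negative.
    bool might_contain(const std::string& id) const
    {
        for (int probe = 0; probe < 4; ++probe)
            if (!bits.test(slot(id, probe)))
                return false;
        return true;
    }
};
```

The asymmetry in the comment is the whole trick: a request ID the server handed out is never wrongly rejected by the first filter, and the rare false positive in either filter is recoverable, as described below.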

Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because a false positive only lets some made-up request ID be used. A false positive in the second bloom filter will cause the server to reject the client's proof of work. But the server can just request more work, or send a new request ID, and the client will follow along.

The other gotcha with bloom filters is that filling them up too far sets too many bits, and so false positive rates go up. To deal with this, keysafe just keeps count of how many request IDs it has generated, and once it gets to be too many to fit in a bloom filter, it makes a new, empty bloom filter and starts storing request IDs in it. The old bloom filter is still checked too, providing a grace period for old request IDs to be used. Using bloom filters that occupy around 32 mb of RAM, this rotation only has to be done every million requests or so.

But, that rotation opens up another DOS! An attacker could cause lots of request IDs to be generated, and so force the server to rotate its bloom filters too quickly, which would prevent any requests from being accepted. To solve this DOS, just use one more token bucket, to limit the rate that request IDs can be generated, so that the time it would take an attacker to force a bloom filter rotation is long enough that any client will have plenty of time to complete its proof of work.


This sounds complicated, and probably it is, but the implementation only took 333 lines of code. About the same number of lines that it took to implement the entire keysafe HTTP client and server using the amazing servant library.

There are a number of knobs that may need to be tuned to dial it in, including the size of the token buckets, their refill rate, the size of the bloom filters, and the number of argon2 iterations in the proof of work. Servers may eventually need to adjust those on the fly, so that if someone decides it's worth burning large quantities of CPU to abuse keysafe for general data storage, the server throttles down to a rate that will take a very long time to fill up its disk.

This protects against DOS attacks that fill up the keysafe server storage. It does not prevent a determined attacker, who has lots of CPU to burn, from flooding so many requests that legitimate clients are forced to do an expensive proof of work and then time out waiting for the server. But that's an expensive attack to keep running, and the proof of work can be adjusted to make it increasingly expensive.

,

Planet DebianNorbert Preining: Farewell academics talk: Colloquium Logicum 2016 – Gödel Logics

Today I had my invited talk at the Colloquium Logicum 2016, where I gave an introduction to and overview of the state of the art of Gödel Logics. Having contributed considerably to the state we are now, it was a pleasure to have the opportunity to give an invited talk on this topic.


It was also somehow a strange talk (slides are available here), as it was my last as an “academic”. After JAIST declined to extend my contract (foundational research, where are you going? Foreign faculty, where?) I have been unemployed – not a fun state in Japan, but also not the first time I have been; my experiences span Austrian and Italian unemployment offices. This unemployment is going to end this weekend, and after 25 years in academics I say good-bye.

Considering that I had two invited talks, one teaching assignment for the ESSLLI, submitted three articles (another two forthcoming) this year, JAIST is missing out on quite a share of achievements in their faculty database. Not my problem anymore.

It was a good time in academics, and I will surely not stop doing research, but I am looking forward to new challenges and new ways of collaboration and development. I will surely miss academics, but for now I will dedicate my energy to different things in life.

Thanks to all the colleagues who did care, and for the rest, I have already forgotten you.

Planet DebianKeith Packard: hopkins

Hopkins Trailer Brake Controller in Subaru Outback

My minivan transmission gave up the ghost last year, so I bought a Subaru Outback to pull my t@b travel trailer. There isn't a huge amount of space under the dash, so I didn't want to mount a trailer brake controller in the 'usual' spot, right above my right knee.

Instead, I bought a Hopkins InSIGHT brake controller, 47297. That comes in three separate pieces which allows for very flexible mounting options.

I stuck the 'main' box way up under the dash on the left side of the car. There was a nice flat spot with plenty of space that was facing the right direction:

The next trick was to mount the display and control boxes around the storage compartment in the center console:

Routing the cables from the controls over to the main unit took a piece of 14ga solid copper wire to use as a fishing line. The display wire was routed above the compartment lid, the control wire was routed below the lid.

I'm not entirely happy with the wire routing; I may drill some small holes and then cut the wires to feed them through.

CryptogramUSB Kill Stick

It costs less than $60.

For just a few bucks, you can pick up a USB stick that destroys almost anything that it's plugged into. Laptops, PCs, televisions, photo booths -- you name it.

Once a proof-of-concept, the pocket-sized USB stick now fits in any security tester's repertoire of tools and hacks, says the Hong Kong-based company that developed it. It works like this: when the USB Kill stick is plugged in, it rapidly charges its capacitors from the USB power supply, and then discharges -- all in the matter of seconds.

On unprotected equipment, the device's makers say it will "instantly and permanently disable unprotected hardware".

You might be forgiven for thinking, "Well, why exactly?" The lesson here is simple enough. If a device has an exposed USB port -- such as a copy machine or even an airline entertainment system -- it can be used and abused, not just by a hacker or malicious actor, but also by electrical attacks.

Slashdot thread.

Planet DebianShirish Agarwal: mtpfs, feh and not being able to share the debconf experience.

I have been sick for about 2 weeks now hence haven’t written. I had joint pains and still am weak. There has been lot of reports of malaria, chikungunya and dengue fever around the city. The only thing I came to know is how lucky I am to be able to move around on 2 legs and how powerless and debilitating it feels when you can’t move. In the interim I saw ‘Me Before You‘ and after going through my most miniscule experience, I could relate with Will Taylor’s character. If I was in his place, I would probably make the same choices.

But my issues are and were slightly different. Last month I was supposed to share my debconf experience at the local PLUG meet. For that purpose, I had copied some pictures from my phone onto a pen-drive to share. But when I reached the venue, I found out that I had forgotten to bring the pen-drive. I had also used the mogrify command from the imagemagick stable to lossily compress the images on the pen-drive so they would be easier on image viewers.

But that was not to be, and at the last moment I had to plug my phone into the lappy's USB port and show some pictures from there. This was not good. I had known that it was mounted somewhere but hadn't looked at where.

After coming back home, it took me hardly 10 minutes to find out where it was mounted. It is not mounted under /media/shirish but under /run/user/1000/gvfs . If I do list under it shows mtp:host=%5Busb%3A005%2C007%5D .

I didn’t need any packages under debian to make it work. Interestingly, the only image viewer which seems to be able to work with all the images is ‘feh’ which is a command-line image viewer in Debian.

[$] aptitude show feh
Package: feh
Version: 2.16.2-1
State: installed
Automatically installed: no
Priority: optional
Section: graphics
Maintainer: Debian PhotoTools Maintainers
Architecture: amd64
Uncompressed Size: 391 k
Depends: libc6 (>= 2.15), libcurl3 (>= 7.16.2), libexif12 (>= 0.6.21-1~), libimlib2 (>= 1.4.5), libpng16-16 (>= 1.6.2-1), libx11-6, libxinerama1
Recommends: libjpeg-progs
Description: imlib2 based image viewer
feh is a fast, lightweight image viewer which uses imlib2. It is commandline-driven and supports multiple images through slideshows, thumbnail
browsing or multiple windows, and montages or index prints (using TrueType fonts to display file info). Advanced features include fast dynamic
zooming, progressive loading, loading via HTTP (with reload support for watching webcams), recursive file opening (slideshow of a directory
hierarchy), and mouse wheel/keyboard control.
Homepage: http://feh.finalrewind.org/

I did try various things to get it to mount under /media/shirish/ but as of date have no luck. Am running Android 6.0 – Marshmallow and have enabled ‘USB debugging’ with help from my friend ‘Akshat’ . I even changed the /etc/fuse.conf options but even that didn’t work.

#cat /etc/fuse.conf
[sudo] password for shirish:
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)

# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
mount_max = 1

# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other

One way which I haven’t explored is adding an entry to /etc/fstab. If anybody knows of a solution which doesn’t involve changing the contents of /etc/fstab, yet still gets the card and phone directories mounted under /media// (in my case /media/shirish), I would be interested to know, as I would like /etc/fstab to remain as it is.

I am using Samsung J5 (unrooted) –

Btw I tried all the mtpfs packages in Debian testing but without any meaningful change😦

Look forward to tips.


Filed under: Miscellenous Tagged: #Android, #Debconf16, #debian, #mptfs, feh, FUSE, PLUG

Planet DebianSteve Kemp: If your code accepts URIs as input..

There are many online sites that accept reading input from remote locations. For example a site might try to extract all the text from a webpage, or show you the HTTP-headers a given server sends back in response to a request.

If you run such a site you must make sure you validate the scheme you're given - also remembering to do that again if you're sent any HTTP redirects.
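
A minimal allow-list check (a sketch of the idea with invented names, not code from any of the sites mentioned) could look like:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Extract the scheme (everything before the first ':') and check
// it against an allow-list, case-insensitively. A sketch of the
// idea only; a real service should use a proper URI parser.
bool scheme_allowed(const std::string& uri)
{
    std::string::size_type colon = uri.find(':');
    if (colon == std::string::npos)
        return false;
    std::string scheme = uri.substr(0, colon);
    std::transform(scheme.begin(), scheme.end(), scheme.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return scheme == "http" || scheme == "https";
}
```

The same check has to be applied to every redirect target as well; passing the initial URL and then blindly following a Location header defeats the whole point.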

Really the issue here is a confusion between URL & URI.

The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input:

file:///etc/passwd

The software was vulnerable, read the file, and showed it to me.

The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input-issue, and it reminded me that there's a very real class of security problems here.

The site in question was http://fuckyeahmarkdown.com/ and allows you to enter a URL to convert to markdown - I found this via the hacker news submission.

The following link shows the contents of /etc/hosts, and demonstrates the problem:

http://fuckyeahmarkdown.example.com/go/?u=file:///etc/hosts&read=1&preview=1&showframe=0&submit=go

The output looked like this:

..
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 stage
127.0.0.1 files
127.0.0.1 brettt..
..

In the actual output of '/etc/passwd' all newlines had been stripped. (Which I now recognize as being an artifact of the markdown processing.)

UPDATE: The problem is fixed now.

Sociological ImagesHow Our Media Bubble Protects Our Ideologies

How are media sources from opposing sides of the political spectrum covering the election? Most of us have no idea. We live in a media “bubble,” one in which we usually only consume “friendly” material: news and opinion from outlets and commentators who share our lean.

At Facebook, employees followed a sample of 10.1 million users who publicly identified their political leanings. They then looked at the forces that created the bubble: (1) “ideological homophily,” the degree to which friends shared the same leanings; (2) Facebook’s algorithm, feeding you things it thinks you want to see; (3) and click-through behavior, which links were ignored and which attracted interaction.

They concluded that “individuals’ choices played a stronger role in limiting exposure” to politically diverse content than did their algorithm. (You can get the data yourself here.)

At the Wall Street Journal, you can take a look at these different media bubbles side-by-side. They frame the data as what you might see in your Facebook feed if most of your friends identify as “very liberal” or “very conservative.” More broadly, what the data represents is the use of Facebook data as an insight into the bigger media bubbles we all live in both on- and off-line.

Here’s the first four results for posts about “Barack Obama”:

On the left you have a critical article about Obama’s light treatment of private prison corporations, but also a headline calling Donald Trump a “douchebag.” On the right you have a link to a video “banned by Obama himself” which purports to out him as an Islamist and a communist and a headline that says that Obama “gave into Sharia law.”

Liberal-leaning and conservative-leaning headlines and updates related to Donald Trump and Hillary Clinton read like this:

Liberal: “Clinton surges past 270 electoral votes…”

Conservative: “After Leading by 18 Points — Hillary’s Lead Over Trump Shrinks to Margin-of-Error”

Liberal: “Reagan’s Son Says His Dad Would be ‘Humiliated’ by Trump”

Conservative: “FBI Caves: Will Hand Over Notes from Clinton Interview”

Liberal: “Fox News is the Origin Story of Trump’s Bigotry”

Conservative:”Hillary Mobilizes Illegal Army”

Liberal: “Brian Stelter Blasts Sean Hannity for Spreading Conspiracy Theories Regarding Clinton’s Health”

Conservative: “Trump Releases Bombshell Report Linking Obama and Hillary to Rise of ISIS”

You get the picture.

It’s interesting that the narrative of America being a united country is so widely promulgated by both liberal and conservative sides alike. If the politicians really want us to come together (and I doubt they do), the media isn’t helping. Granted, these are the extremes, but the sources on the side I oppose look like delusional conspiracy hubs to me, whereas I recognize many of the outlets on the side to which I lean. To me, those are “good” news sources, ones I count on. Presumably someone on the other side would feel the same about theirs and be equally horrified about mine.

The stories these different sources tell are not compatible. The “very liberal” and “very conservative” side are two wholly different worlds. It’s no wonder each side has such a difficult time understanding the other. I fear what it means about the future of our democracy.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Google AdsenseThe secret to share-worthy content



People love to share things with their friends, family, and colleagues. Presenting your users with relevant and unique content is the best way to encourage sharing and give you a competitive advantage. Content that is underscored by emotion, in particular, can help you strengthen your brand’s presence.

Here are some tips to help you get into the minds, and hearts, of your audience:


1. Be relatable

Taking the time to research what your visitors are interested in is the first step to creating shareable content. Diversify to cover multiple facets of your audience’s interests by including work, lifestyle and social topics to keep things fresh and engaging. Contextualizing your stories also increases the chances of your audience relating to and wanting to share your site.


2. Know what’s trending

Staying up-to-the-minute can require a lot of time and research. Luckily, online tools such as Google Trends can help you identify what’s important to your audience as well as offer helpful insights about people in specific regions you are trying to target. Being a thought leader means not only sharing up-to-the-minute industry news with your audience, but also offering insightful observations and predictions of what this may mean down the road. A trusted source is a followed source.


3. Increase your content’s shelf life

While timely articles are great for generating instant buzz, content that maintains its importance over time yields a higher traffic rate overall. Evergreen content is information that stays relevant no matter when it is consumed. Make sure to interlace trendier content with evergreen pieces every so often.
 

To learn more about familiarizing your audience with your brand, check out the AdSense Guide to Audience Engagement.



Posted by Jay Castro
Audience Development Specialist
@jayciro

Planet Linux AustraliaBinh Nguyen: Diplomacy Part 2, Russia Vs USA Part 2, and More

This is obviously a continuation of my last post, http://dtbnguyen.blogspot.com/2016/09/diplomacy-russia-vs-usa-and-more.html - if you've ever been exposed to communism then you'll realise that there are those who have been exposed to the 'benefits' of capitalism look back upon it harshly. It's taken me a long while but I get where they're coming from. Many times regional and the global

Planet Linux AustraliaPia Waugh: Pia, Thomas and Little A’s Excellent Adventure – Final days

Well, the last 3 months just flew past on our New Zealand adventure! This is the final blog post. We meant to blog more often but between limited internet access and being busy getting the most of our much needed break, we ended up just doing this final post. Enjoy!

Photos were added every week or so to the flickr album.
Our NZ Adventure

Work

I was invited to spend 4 weeks during this trip working with the Department of Internal Affairs in the New Zealand Government on beta.data.govt.nz and a roadmap for data.govt.nz. The team there were just wonderful to work with as were the various people I met from across the NZ public sector. It was particularly fascinating to spend some time with the NZ Head Statistician Liz MacPherson who is quite a data visionary! It was great to get to better know the data landscape in New Zealand and contribute, even in a small way, to where the New Zealand Government could go next with open data, and a more data-driven public sector. I was also invited to share my thoughts on where government could go next more broadly, with a focus on “gov as an API” and digital transformation. It really made me realise how much we were able to achieve both with data.gov.au from 2013-2015 and in the 8 months I was at the Digital Transformation Office. Some of the strategies, big picture ideas and clever mixes of technology and system thinking created some incredible outcomes, things we took for granted from the inside, but things that are quite useful to others and are deserving of recognition for the amazing public servants who contributed. I shared with my New Zealand colleagues a number of ideas we developed at the DTO in the first 8 months of the “interim DTO”, which included the basis for evidence based service design, delivery & reporting, and a vision for how governments could fundamentally change from siloed services to modular and mashable government. “Mashable government” enables better service and information delivery, a competitive ecosystem of products and services, and the capability to automate system to system transactions – with citizen permission of course – to streamline complex user needs. I’m going to do a dedicated blog post later on some of the reflections I’ve had on that work with both data.gov.au and the early DTO thinking, with kudos to all those who contributed.

I mentioned in July that I had left the Department of Prime Minister and Cabinet (where data.gov.au was moved to in October 2015, and I’ve been on maternity leave since January 2016). My next blog post will be about where I’m going and why. You get a couple of clues: yes it involves data, yes it involves public sector, and yes it involves an international component. Also, yes I’m very excited about it!! Stay tuned ;)

Fishing

When we planned this trip to New Zealand, Thomas has some big numbers in mind for how many fish we should be able to catch. As it turned out, the main seasonal run of trout was 2 months later than usual so for the first month and a half of our trip, it looked unlikely we would get anywhere near what we’d hoped. We got to about 100 fish, fighting for every single one (and keeping only about 5) and then the run began! For 4 weeks of the best fishing of the season I was working in Wellington Mon-Fri, with Little A accompanying me (as I’m still feeding her) leaving Thomas to hold the fort. I did manage to get some great time on the water after Wellington, with my best fishing session (guided by Thomas) resulting in a respectable 14 fish (over 2 hours). Thomas caught a lazy 42 on his best day (over only 3 hours), coming home in time for breakfast and a cold compress for his sprained arm. All up our household clocked up 535 big trout (mostly Thomas!) of which we only kept 10, all the rest were released to swim another day. A few lovely guests contributed to the numbers so thank you Bill, Amanda, Amelia, Miles, Glynn, Silvia and John who together contributed about 40 trout to our tally!

Studies

My studies are going well. I now have only 1.5 subjects left in my degree (the famously elusive degree, which was almost finished and then my 1st year had to be repeated due to doing it too long ago for the University to give credit for, gah!). To finish the degree, a Politics degree with loads of useful stuff for my work like public policy, I quite by chance chose a topic on White Collar Crime which was FASCINATING!

Visitors

Over the course of the 3 months we had a number of wonderful guests who contributed to the experience and had their own enjoyable and relaxing holidays with us in little Turangi: fishing, bushwalking, going to the hot pools and thermal walks, doing high tea at the Tongariro Chateau at Whakaapa Village, Huka Falls in Taupo, and even enjoying some excellent mini golf. Thank you all for visiting, spending time with us and sharing in our adventure. We love you all!

Little A

Little A is now almost 8 months old and has had leaps and bounds in development from a little baby to an almost toddler! She has learned to roll and commando crawl (pulling herself around with her arms only) around the floor. She loves to sit up and play with her toys and is eating her way through a broad range of foods, though pear is still her favourite. She is starting to make a range of noises and the race is on as to whether she’ll say ma or da first :) She has quite the social personality and we adore her utterly! She surprised Daddy with a number of presents on Father’s Day, and helped to make our first family Father’s Day memorable indeed.

Salut Turangi

And so it’s with mixed feelings that we bid adieu to the sleepy town of Turangi. It’s been a great adventure, with lots of wonderful memories and a much-needed chance to get off the grid for a while, but we’re both looking forward to re-entering respectable society, catching up with those of you that we haven’t seen for a while, and planning our next great adventure. We’ll be back in Turangi in February for a different adventure with friends of ours from the US, but that will be only a week or so. Turangi is a great place, and if you’re ever in the area stop into the local shopping centre and try one of the delicious pork and watercress or lamb, mint and kumara pies available from the local bakeries – reason enough to return again and again.

Planet DebianRitesh Raj Sarraf: apt-offline 1.7.1 released

I am happy to mention the release of apt-offline, version 1.7.1.

This release includes many bug fixes, code cleanups and better integration.

  • Integration with PolicyKit
  • Better integration with apt gpg keyring
  • Resilient to failures when a sub-task errors out
  • New Feature: Changelog
    • This release adds the ability to deal with package changelogs ('set' command option: --generate-changelog) based on what is installed, to extract changelogs (currently supported with python-apt only) from downloaded packages, and to display them during installation ('install' command option: --skip-changelog, if you want to skip the changelog display)
  • New Option: --apt-backend
    • Users can now choose an apt backend of their choice. Currently supported: apt, apt-get (default) and python-apt

 

Hopefully, there will be one more release, before the release to Stretch.

apt-offline can be downloaded from its homepage or from Github page. 

 

Update: The PolicyKit integration requires running the apt-offline-gui command with pkexec (screenshot). It also works fine with sudo, su, etc.

 


Worse Than FailureRed Black Trees

In a good organization, people measure twice and cut once. For example, an idea is born: let's create a data center that is set up properly. First, you figure out how much equipment is needed, how much space is required and how much power is needed to run and cool it. Next, you size back-up batteries and fuel-powered generators to provide uninterruptible power. And so forth.

In a good organization, each of these tasks is designed, reviewed, built, tested and verified, and approved. These things need to be right. Not sort-of right, but right!

A power outlet painted over

Close only counts in horseshoes, hand grenades and thermonuclear war.

Here's a tale of an organization doing almost everything right... almost.

In the late noughties, Mel was working at something that wasn't yet called DevOps at a German ISP. It was a pretty good place to work, in a spanking new office near the French border, paid for by several million customers, a couple of blocks from one of the region's largest data centers that housed said customers' mail and web sites. The data center had all kinds of fancy security features and of course a state-of-the-art UPS. 15 minutes worth of batteries in the basement and a couple of fat diesels to take it from there, with enough fuel to stay on-line, in the true spirit of the Internet, even during a small-time nuclear war. Everything was properly maintained and drills were frequently held to ensure stuff would actually work in case they were attacked or lightning hit.

The computing center only had a few offices for the hardware guys and the core admin team. But as you don't want administrator's root shells to be disconnected (while they were in the middle of something) due to a power outage either, they had connected the office building to the same UPS. And so as not to reduce the backup run time unnecessarily, there were differently-colored outlets: red for the PCs, monitors and network hardware, and gray for coffee makers, printers and other temporarily dispensable devices that wouldn't need a UPS.

Now Germany happens to be known as one of the countries with the best electric grid in the world. Its "Customer Average Interruption Duration Index" is on the order of 15 minutes a year and in some places years can pass without so much as a second of blackout. So the drills were the only thing that had happened since they moved into the office, and not being part of the data center, they weren't even involved in testing. The drills were frequent and pervasive; all computer power cut over to batteries, then generators, and it was verified at the switch that all was well. Of course, during the tests, land-line power was still present in the building on the non-UPS-protected circuits, so nothing actually ever shut off in the offices, which was kind of the whole point of the tests.

When it inevitably hit the fan in the form of an exploding transformer in a major substation, and plunged a quarter million people into darkness, the data center kept going just fine. The admins would probably have noticed a Nagios alert about discharging batteries first, then generators spinning up and so forth. The colleagues in their building hardly noticed as they had ongoing power.

However, on Mel's floor, the coffee maker was happily gurgling along in the silence that had suddenly fallen when all the PCs and monitors went dark.

It turned out that their floor had been wired with the UPS grid on the gray outlets from the beginning and nobody had ever bothered to test it.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Planet DebianReproducible builds folks: Reproducible Builds: week 72 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 4 and Saturday September 10 2016:

Reproducible work in other projects

Python 3.6's dictionary type now retains the insertion order. Thanks to themill for the report.

In coreboot, Alexander Couzens committed a change to make their release archives reproducible.

Patches submitted

Reviews of unreproducible packages

We've been adding to our knowledge about identified issues. 3 issue types have been added:

1 issue type has been updated:

16 have been updated:

13 have been removed, not including removed packages:

100s of packages have been tagged with the more generic captures_build_path, and many with captures_kernel_version, user_hostname_manually_added_requiring_further_investigation, captures_shell_variable_in_autofoo_script, etc.

Particular thanks to Emanuel Bronshtein for his work here.

Weekly QA work

FTBFS bugs have been reported by:

  • Aaron M. Ucko (1)
  • Chris Lamb (7)

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org:

  • F-Droid:
    • Hans-Christoph Steiner found after extensive debugging that for kvm-on-kvm, vagrant from stretch is needed (or a backport, but that seems harder than setting up a new VM).
  • FreeBSD:
    • Holger updated the VM for testing FreeBSD to FreeBSD 10.3.

Misc.

This week's edition was written by Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Planet Linux AustraliaColin Charles: Speaking in September 2016

A few events, but mostly circling around London:

  • Open collaboration – an O’Reilly Online Conference, at 10am PT, Tuesday September 13 2016 – I’m going to be giving a new talk titled Forking Successfully. I’ve seen how the platform works, and I’m looking forward to trying this method out (it’s like a webinar, but not quite!)
  • September MySQL London Meetup – I’m going to focus on MySQL, a branch, Percona Server and the fork MariaDB Server. This will be interesting because one of the reasons you don’t see a huge Emacs/XEmacs push after about 20 years? Feature parity. And the work that’s going into MySQL 8.0 is mighty interesting.
  • Operability.io should be a fun event, as the speakers were hand-picked and the content is heavily curated. I look forward to my first visit there.

Planet Linux AustraliaStewart Smith: Compiling your own firmware for the S822LC for HPC

IBM (my employer) recently announced the new S822LC for HPC server, with POWER8 CPUs, NVLink, and NVIDIA P100 GPUs (press release, IBM Systems Blog, The Register). The “For HPC” suffix on the model number is significant, as the plain S822LC is a different machine. What makes the “for HPC” variant different is that the POWER8 CPU has (in addition to PCIe) logic for NVLink to connect the CPU to NVIDIA GPUs.

There are also the NVIDIA Tesla P100 GPUs, NVIDIA’s latest in an SXM2 form factor, but instead of delving into GPUs, I’m going to tell you how to compile the firmware for this machine.

You see, this is an OpenPOWER machine. It’s an OpenPOWER machine where the vendor (in this case IBM) has worked to get all the needed code upstream, so you can see exactly what goes into a firmware build.

To build the latest host firmware (you can cross compile on x86 as we use buildroot to build a cross compiler):

git clone --recursive https://github.com/open-power/op-build.git
cd op-build
. op-build-env
op-build garrison_defconfig
op-build

That’s it! Give it a while and you’ll end up with output/images/garrison.pnor – which is a firmware image to flash onto PNOR. The machine name is garrison as that’s the code name for the “S822LC for HPC” (you may see Minsky in the press, but that’s a rather new code name, Garrison has been around for a lot longer as a name).

,

Planet DebianGregor Herrmann: RC bugs 2016/34-36

as before, my work on release-critical bugs was centered around perl issues. here's the list of bugs I worked on:

  • #687904 – interchange-ui: "interchange-ui: cannot install this package"
    (re?)apply patch from #625904, upload to DELAYED/5
  • #754755 – src:libinline-java-perl: "libinline-java-perl: FTBFS on mips: test suite issues"
    prepare a preliminary fix (pkg-perl)
  • #821994 – src:interchange: "interchange: Build arch:all+arch:any but is missing build-{arch,indep} targets"
    apply patch from sanvila to add targets, upload to DELAYED/5
  • #834550 – src:interchange: "interchange: FTBFS with '.' removed from perl's @INC"
    patch to "require ./", upload to DELAYED/5
  • #834731 – src:kdesrc-build: "kdesrc-build: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to "require ./", upload to DELAYED/5
  • #834738 – src:libcatmandu-mab2-perl: "libcatmandu-mab2-perl: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to "require ./" (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    add some debugging info
  • #835133 – libnet-jabber-perl: "libnet-jabber-perl: FTBFS in testing"
    add patch from CPAN RT (pkg-perl)
  • #835206 – src:munin: "munin: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to call perl with -I., upload to DELAYED/5, then cancelled on maintainer's request
  • #835353 – src:pari: "pari: FTBFS with '.' removed from perl's @INC"
    add patch to call perl with -I., upload to DELAYED/5
  • #835711 – src:libconfig-identity-perl: "libconfig-identity-perl: FTBFS: Tests failures"
    run tests under gnupg1 (pkg-perl)
  • #837136 – libgtk3-perl: "libgtk3-perl: FTBFS: t/overrides.t failure"
    add patch from CPAN RT (pkg-perl)
  • #837237 – src:libtest-file-perl: "libtest-file-perl: FTBFS: Tests failures"
    add patch so tests find their common files again (pkg-perl)
  • #837249 – src:libconfig-record-perl: "libconfig-record-perl: FTBFS: lib/Config/Record.pm: No such file or directory at Config-Record.spec.PL line 13."
    fix build in debian/rules (pkg-perl)

Planet DebianNiels Thykier: Unseen changes to lintian.d.o

We have been making a lot of minor changes to lintian.d.o and the underlying report framework. Most of them were hardly noticeable to the naked eye. In fact, I probably would not have spotted any of them if I had not been involved in writing them. Nonetheless, I felt like sharing them, so here goes.🙂

User “visible” changes:

In case you were wondering, the section title is partly a pun, as half of these changes were intended to assist visually impaired users. They were triggered by me running into Sam Hartmann at DebConf16, where I asked him how easy Debian’s websites were for blind people to use. Allegedly, we are generally doing quite well in his opinion (with one exception, for which Sam filed Bug#830213), which was a positive surprise for me.

On a related note: Thanks to Luke Faraone and Asheesh Laroia for helping me get started on these changes.🙂

Reporting framework / “Internal” changes:

With the last change + the “--no-generate-reports” option, we were able to schedule lintian more frequently. Originally, lintian only ran once a day. With “--no-generate-reports”, we added a second run, and with the last changes we bumped it to 4 times a day. Unsurprisingly, it means that we are now reprocessing the archive a lot faster than previously.

All of the above is basically all the noteworthy changes to the Lintian reporting framework since the Partial rewrite of lintian’s reporting setup (~1½ years ago).


Filed under: Debian, Lintian

CryptogramDDOS for Profit

Brian Krebs reports that the Israeli DDOS service vDOS has earned $600K in the past two years. The information was obtained from a hack and data dump of the company's information.

EDITED TO ADD (9/11): The owners have been arrested.

,

Krebs on SecurityAlleged vDOS Proprietors Arrested in Israel

Two young Israeli men alleged to be the co-owners of a popular online attack-for-hire service were reportedly arrested in Israel on Thursday. The pair were arrested around the same time that KrebsOnSecurity published a story naming them as the masterminds behind a service that can be hired to knock Web sites and Internet users offline with powerful blasts of junk data.

Alleged vDOS co-owner Yarden Bidani.


According to a story at Israeli news site TheMarker.com, Itay Huri and Yarden Bidani, both 18 years old, were arrested Thursday in connection with an investigation by the U.S. Federal Bureau of Investigation (FBI).

The pair were reportedly questioned and released Friday on the equivalent of about USD $10,000 bond each. Israeli authorities also seized their passports, placed them under house arrest for 10 days, and forbade them from using the Internet or telecommunications equipment of any kind for 30 days.

Huri and Bidani are suspected of running an attack service called vDOS. As I described in this week’s story, vDOS is a “booter” service that has earned in excess of $600,000 over the past two years helping customers coordinate more than 150,000 so-called distributed denial-of-service (DDoS) attacks designed to knock Web sites offline.

The two men’s identities were exposed because vDOS got massively hacked, spilling secrets about tens of thousands of paying customers and their targets. A copy of that database was obtained by KrebsOnSecurity.

For most of Friday, KrebsOnSecurity came under a heavy and sustained denial-of-service attack, which spiked at almost 140 Gbps. A single message was buried in each attack packet: “godiefaggot.” For a brief time the site was unavailable, but thankfully it is guarded by DDoS protection firm Prolexic/Akamai. The attacks against this site are ongoing.

Huri and Bidani were fairly open about their activities, or at least not terribly careful to cover their tracks. Yarden’s now abandoned Facebook page contains several messages from friends who refer to him by his hacker nickname “AppleJ4ck” and discuss DDoS activities. vDOS’s customer support system was configured to send a text message to Huri’s phone number in Israel — the same phone number that was listed in the Web site registration records for the domain v-email[dot]org, a domain the proprietors used to help manage the site.

At the end of August 2016, Huri and Bidani authored a technical paper (PDF) on DDoS attack methods which was published in the Israeli security e-zine Digital Whisper. In it, Huri signs his real name and says he is 18 years old and about to be drafted into the Israel Defense Forces. Bidani co-authored the paper under the alias “Raziel.b7@gmail.com,” an email address that I pointed out in my previous reporting was assigned to one of the administrators of vDOS.

Sometime on Friday, vDOS went offline. It is currently unreachable. Before it went offline, vDOS was supported by at least four servers hosted in Bulgaria at a provider called Verdina.net (the Internet address of those servers was 82.118.233.144). But according to several automated Twitter feeds that track suspicious large-scale changes to the global Internet routing tables, sometime in the last 24 hours vDOS was apparently the victim of what’s known as a BGP hijack. (Update: For some unknown reason, some of the tweets referenced above from BGPstream were deleted; I’ve archived them in this PDF).

BGP hijacking involves one ISP fraudulently “announcing” to the rest of the world’s ISPs that it is in fact the rightful custodian of a range of Internet addresses that it doesn’t actually have the right to control. It is a hack most often associated with spamming activity. According to those Twitter feeds, vDOS’s Internet addresses were hijacked by a firm called BackConnect Security.
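The preference for more specific announcements is what makes this kind of hijack effective: routers select the longest matching prefix, so a fraudulent, more specific announcement captures the traffic. A toy best-path selection over made-up documentation prefixes and invented AS names (illustration only; real BGP best-path selection involves many more tie-breakers):

```python
import ipaddress

def best_route(announcements, dst):
    """Pick the origin whose announced prefix is the longest match for dst."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, origin) for net, origin in announcements
               if dst in ipaddress.ip_network(net)]
    # Longest prefix (largest prefixlen) wins, as in real route selection.
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

announcements = [
    ("203.0.113.0/24", "legitimate-ISP"),  # rightful /24 announcement
    ("203.0.113.0/25", "hijacker-AS"),     # fraudulent, more specific /25
]
print(best_route(announcements, "203.0.113.10"))  # hijacker-AS wins the traffic
```

Addresses inside the more specific /25 are drawn to the hijacker; the rest of the /24 still reaches the legitimate origin.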

Reached by phone, Bryant Townsend, founder and CEO of BackConnect Security, confirmed that his company did in fact hijack Verdina/vDOS’s Internet address space. Townsend said the company took the extreme measure in an effort to get out from under a massive attack launched on the company’s network Thursday, and that the company received an email directly from vDOS claiming credit for the attack.

“For about six hours, we were seeing attacks of more than 200 Gbps hitting us,” Townsend explained. “What we were doing was for defensive purposes. We were simply trying to get them to stop and to gather as much information as possible about the botnet they were using and report that to the proper authorities.”

I noted earlier this week that I would be writing more about the victims of vDOS. That story will have to wait for a few more days, but Friday evening CloudFlare (another DDoS protection service that vDOS was actually hiding behind) agreed to host the rather large log file listing roughly four months of vDOS attack logs from April through July 2016.

For some reason the attack logs only go back four months, probably because they were wiped at one point. But vDOS has been in operation since Sept. 2012, so this is likely a very small subset of the attacks this DDoS-for-hire service has perpetrated.

The file lists the vDOS username that ordered and paid for the attack; the target Internet address; the method of attack; the Internet address of the vDOS user at the time; the date and time the attack was executed; and the browser user agent string of the vDOS user.
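Using the field list above, each log line can be mapped onto a small record. The concrete line format below (comma-separated, quoted, in this field order, with an invented attack method and addresses) is an assumption for illustration; the real dump may be delimited differently:

```python
import csv
import io
from collections import namedtuple

# Field names follow the description above; the CSV layout is hypothetical.
AttackRecord = namedtuple(
    "AttackRecord",
    "username target method source_ip timestamp user_agent")

def parse_attack_log(text):
    """Parse comma-separated attack-log lines into AttackRecord tuples."""
    return [AttackRecord(*row) for row in csv.reader(io.StringIO(text))]

sample = ('"user123","198.51.100.7","ESSYN","192.0.2.55",'
          '"2016-07-01 12:34:56","Mozilla/5.0"\n')
records = parse_attack_log(sample)
print(records[0].method)  # ESSYN
```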

A few lines from the vDOS attack logs.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Software Freedom Day Meeting 2016

Sep 17 2016 10:00
Sep 17 2016 16:30
Location: 

Electron Workshop 31 Arden Street, North Melbourne.

There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!

This month, Free Software Melbourne[1], Linux Users of Victoria[2] and Electron Workshop[3] are joining forces to bring you the local Software Freedom Day event for Melbourne.

The event will take place on Saturday 17th September between 10am and 4:30pm at:

Electron Workshop
31 Arden Street, North Melbourne.
Map: http://www.sfd.org.au/melbourne/

Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


,

CryptogramFriday Squid Blogging: More Research Inspired by Squid Skin

Research on color-changing materials:

What do squid and jellyfish skin have in common with human skin? All three have inspired a team of chemists to create materials that change color or texture in response to variations in their surroundings. These materials could be used for encrypting secret messages, creating anti-glare surfaces, or detecting moisture or damage.

They don't really mean "encrypting"; they mean hiding. But interesting nonetheless.

CryptogramLeaked Product Demo from RCS Labs

We have leak from yet another cyberweapons arms manufacturer: the Italian company RCS Labs. Vice Motherboard reports on a surveillance video demo:

The video shows an RCS Lab employee performing a live demo of the company's spyware to an unidentified man, including a tutorial on how to use the spyware's control software to perform a man-in-the-middle attack and infect a target computer who wanted to visit a specific website.

RCS Lab's spyware, called Mito3, allows agents to easily set up these kind of attacks just by applying a rule in the software settings. An agent can choose whatever site he or she wants to use as a vector, click on a dropdown menu and select "inject HTML" to force the malicious popup to appear, according to the video.

Mito3 allows customers to listen in on the target, intercept voice calls, text messages, video calls, social media activities, and chats, apparently both on computer and mobile platforms. It also allows police to track the target and geo-locate it thanks to the GPS. It even offers automatic transcription of the recordings, according to a confidential brochure obtained by Motherboard.

Slashdot thread

Sociological Images“The Potawatomis Didn’t Have a Word for Global Business Center”?

Flashback Friday.

I was waiting for my connecting flight at Chicago O’Hare, and spotted this advertisement on the opposite side of our gate. It reads:

“Chicago is the Potawatomi word for onion field. Apparently, the Potawatomis didn’t have a word for global business center.”

This is an example of the use of Indigenous language and imagery that many people wouldn’t think twice about, or find any inherent issues with. But let’s look at this a little deeper:

  • The use of past tense. It’s not “The Potawatomis don’t have a word for…” it’s “The Potawatomis didn’t…” Implying that the Potawatomi no longer exist or are using their language.
  • The implication that “Indians” and “Global Business Center” aren’t in congruence. Which is assuming that Natives are static, unchanging, and unable to be modern and contemporary. “Potawatomi” and “Onion Field” are fine together, because American society associates Indians with the natural world, plants, animals, etc. But there is definitely not an association between “Potawatomi” and “Global Business”.

But, in reality, of course Potawatomis still exist today, are still speaking their language, and do have a word for Global Business Center (or multiple words…).

Language is constantly evolving, adapting to new technology (remember when google wasn’t a verb?) and community changes.  I remember reading a long time ago in one of my Native studies classes about the Navajo Nation convening a committee to discuss how one would say things like “computer” or “ipod” in Navajo language, in an effort to preserve language and culture and promote the use of Navajo language among the younger generation.

In fact, here’s an awesome video of a guy describing his ipod in Navajo, complete with concepts like “downloading” (there are subtitles/translations):

Native peoples have been trading and communicating “globally” for centuries, long before the arrival of Europeans. To imply that they wouldn’t have the ability to describe a “Global Business Center” reeks of a colonialist perspective (we must “civilize” the savage! show him the ways of capitalism and personal property, for they know not of society!).

Thanks, Chicago, for giving me one more reason to strongly dislike your airport.

Originally posted in 2010.

Adrienne Keene, EdD is a graduate of the Harvard Graduate School of Education and is now a postdoctoral fellow in Native American studies at Brown University. She blogs at Native Appropriations, where this post originally appeared. You can follow her on Twitter.

(View original at https://thesocietypages.org/socimages)

Google AdsenseEarn more from mobile: 3 rules and 6 best practices

However important mobile is to your business today, it will become even more critical tomorrow.

That's true whether you’re blogging about your favorite sports team, building the site for your community theater, or selling products to potential customers. Your visitors simply must have a great experience when they visit your site on their mobile devices.

Research has found that 61% of users will leave a mobile site if they don’t see what they are looking for right away. 

Sites that are not mobile-friendly expect users to pinch, slide, and zoom in order to consume content. It’s a frustrating experience when users expect to find the information they’re looking for right away, but are presented with obstacles to obtain that information. This is what causes users to abandon sites. 

To create a mobile-friendly site, follow these three rules:

  1. Make it fast. Research shows that 74% of people will abandon a mobile site that takes more than 5 seconds to load.
  2. Make it easy. Research shows that 61% of users will leave a mobile site if they don’t find what they're looking for straight away.
  3. Be consistent across screens. Make it easy for users to find what they need no matter what device they're using.

It's also important to think about your ads when you're designing or fine-tuning your mobile-friendly site. Focus on creating a flow between your content and your ads for the ultimate user experience and maximum viewability. Consult your analytics data and set events to track and understand where your users are most receptive to ads.

Here are some mobile-friendly ad best practice tips:

  1. Swap out the 320x50 ad units for 320x100 for a potential RPM increase.
  2. Place a 320x100 ad unit just above the fold.
  3. Use the 300x250 ad unit below the fold (BTF) mixed in with your content.
  4. Prevent accidental clicks on enhanced features in text ads by moving ad units 150 pixels away from your content.
  5. Consider using responsive ad units, which optimize ad sizes to screen sizes and work seamlessly with your responsive site.  
  6. Test your site. Pick the metrics that matter most to you – then experiment with them.

The ad experience on your site should be designed with your mobile users in mind, just like the site itself. 

There are many ways to improve your users’ mobile experience on your site. Download the AdSense Guide to Mobile Web Success today, and find out more on how to make mobile a major asset to your business.



Posted by: 
Chiara Ferraris
Publisher Monetization Specialist
@chiara_ferraris

Worse Than FailureError'd: Profound Sadness

"Shortly after one of our dear colleagues left the business for pastures new, we started to find some messages they left behind," Samantha wrote.

 

"Apart from not knowing whether I was affected, when, and what to do next, this civil defence warning really put my mind at ease," writes Nick K.

 

Erwan R. wrote, "I just removed my item, of course it won't be there!"

 

"I guess seeing an actually valid certificate must be rare enough to trigger a red flag," writes Zach R.

 

"The child product store Windeln has got you covered," wrote Valts S., "Whether you want to buy a lot of children-related stuff, or avoid it."

 

Duston writes, "Gosh, who knew that "Error" was a problem? Good thing they gave me the details!"

 

"This CAPTCHA provider is really testing our stance on gender inclusiveness," wrote James H.

 

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Planet Linux AustraliaBen Martin: Houndbot suspension test fit

I now have a few crossover plates in the works to hold the upgraded suspension in place. See the front wheel of the robot on your right. The bottom side is held in place with a crossover to go from the beam to a 1/4 inch bearing mount. The high side uses one of the hub mount brackets which are a fairly thick alloy and four pretapped attachment blocks. To that I screw my newly minted alloy blocks which have a sequence of M8 sized holes in them. I was unsure of the final fit on the robot so made three holes to give me vertical variance to help set the suspension in the place that I want.



Notice that the high tensile M8 bolt attached to the top suspension is at a slight angle. In the end the top of the suspension will be between the two new alloy plates. But to do that I need to trim some waste from the plates, but to do that I needed to test mount to see where and what needs to be trimmed. I now have an idea of what to trim for a final test mount ☺.

Below is a close up view of the coil over showing the good clearance from the tire and wheel assembly and the black markings on the top plate giving an idea of the material that I will be removing so that the top tension nut on the suspension clears the plate.


The mounting hole in the suspension is 8mm diameter. The bearing blocks are for 1/4 inch (~6.35mm) diameters. For test mounting I got some 1/4 inch threaded rod and hacked off about what was needed to get clear of both ends of the assembly. M8 nylock nuts on both sides provide a good first mounting for testing. The crossover plate that I made is secured to the beam by two bolts. At the moment the bearing block is held to the crossover by JB Weld only; I will likely use that to hold the piece, then drill through both chunks of alloy and bolt them together too. It's somewhat interesting how well these sorts of JB and threaded rod assemblies seem to work though. But a fracture in the adhesive at 20km/h when landing from a jump without a bolt fallback is asking for trouble.


The top mount is shown below. I originally had the shock around the other way, to give maximum clearance at the bottom so the tire didn't touch the shock. But with the bottom mount out this far I flipped the shock to give maximum clearance to the top mounting plates instead.


So now all I need is to cut down the top plates, drill bolt holes for the bearing to crossover plate at the bottom, sand the new bits smooth, and maybe I'll end up using the threaded rod at the bottom with some JB to soak up the difference from 1/4 inch to M8.

Oh, and another order to get the last handful of parts needed for the mounting.

Planet Linux AustraliaTridge on UAVs: APM:Plane 3.7.0 released

The ArduPilot development team is proud to announce the release of version 3.7.0 of APM:Plane. This is a major update so please read the notes carefully.

The biggest changes in this release are:

  • more reliable recovery from inverted flight
  • automatic IC engine support
  • Q_ASSIST_ANGLE for stall recovery on quadplanes
  • Pixhawk2 IMU heater support
  • PH2SLIM support
  • AP_Module support
  • Parrot Disco support
  • major VRBrain support merge
  • much faster boot time on Pixhawk

I'll give a bit of detail on each of these changes before giving the more detailed list of changes.

More reliable recovery from inverted flight

Marc Merlin discovered that on some types of gliders ArduPilot would not reliably recover from inverted flight. The problem turned out to be the use of the elevator at high bank angles preventing the ailerons from fully recovering attitude. The fix in this release prevents excessive elevator use when the aircraft is beyond LIM_ROLL_CD. This should help a lot for people using ArduPilot as a recovery system for manual FPV flight.
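The idea behind the fix is to suppress elevator authority once bank angle exceeds LIM_ROLL_CD, so the ailerons can level the wings first. A simplified sketch of that idea — not the actual ArduPilot code; the linear fade-out and its endpoints are assumptions, with angles in centidegrees as in ArduPilot convention:

```python
def limited_elevator(elevator_demand, roll_cd, lim_roll_cd):
    """Scale elevator demand toward zero beyond the configured bank limit.

    roll_cd and lim_roll_cd are in centidegrees (ArduPilot convention).
    Simplified illustration: full elevator authority inside the bank
    limit, fading linearly to none at 90 degrees of bank.
    """
    bank = abs(roll_cd)
    if bank <= lim_roll_cd:
        return elevator_demand
    scale = max(0.0, 1.0 - (bank - lim_roll_cd) / (9000.0 - lim_roll_cd))
    return elevator_demand * scale

print(limited_elevator(1.0, 3000, 4500))  # inside the limit: full authority
print(limited_elevator(1.0, 9000, 4500))  # knife-edge: no elevator at all
```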

Automatic IC engine support

ArduPilot has supported internal combustion engines for a long time, but until now the pilot has had to control the ignition and starter manually using transmitter pass throughs. A new "ICE" module in ArduPilot now allows for fully automatic internal combustion engine support.

Coupled with an RPM sensor you can setup your aircraft to automatically control the ignition and starter motor, allowing for one touch start of the motor on the ground and automatic restart of the motor in flight if needed.

The IC engine support is also integrated into the quadplane code, allowing for automatic engine start at a specified altitude above the ground. This is useful for tractor engine quadplanes where the propeller could strike the ground on takeoff. The engine can also be automatically stopped in the final stage of a quadplane landing.
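The cranking logic described above (engage the starter while the RPM sensor shows the engine is not turning, release it once the engine catches) can be sketched as one iteration of a control loop. This is a hypothetical simplification, not the ArduPilot ICE module itself, and the RPM threshold is an invented value:

```python
def ice_update(want_running, rpm, running_rpm_threshold=500):
    """Return (ignition_on, starter_on) for one control-loop iteration.

    Sketch only: if the engine should run but the RPM sensor shows it
    is not turning, crank the starter; once RPM rises past the
    threshold, the starter is released and ignition keeps it running.
    """
    if not want_running:
        return (False, False)   # ignition off, starter off
    if rpm < running_rpm_threshold:
        return (True, True)     # ignition on, cranking
    return (True, False)        # engine caught: ignition only

print(ice_update(True, 0))      # cranking
print(ice_update(True, 3000))   # running
print(ice_update(False, 3000))  # shut down
```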

Q_ASSIST_ANGLE for stall recovery

Another new quadplane feature is automatic recovery from fixed wing stall. Previously the VTOL motors would only provide assistance in fixed wing modes when the aircraft airspeed dropped below Q_ASSIST_SPEED. Some stalls can occur with higher airspeed however, and this can result in the aircraft losing attitude control without triggering a Q_ASSIST_SPEED recovery. A new parameter Q_ASSIST_ANGLE allows for automatic assistance when attitude control is lost, triggering when the attitude goes outside the defined roll and pitch limits and is more than Q_ASSIST_ANGLE degrees from the desired attitude. Many thanks to Iskess for the suggestion and good discussion around this feature.

Pixhawk2 heated IMU support

This release adds support for the IMU heater in the upcoming Pixhawk2, allowing for more stable IMU temperatures. The Pixhawk2 is automatically detected and the heater enabled at boot, with the target IMU temperature controllable via BRD_IMU_TARGTEMP.

Using an IMU heater should improve IMU stability in environments with significant temperature changes.

PH2SLIM Support

This release adds support for the PH2SLIM variant of the Pixhawk2, which is a Pixhawk2 cube without the isolated sensor top board. This makes for a very compact autopilot for small aircraft. To enable PH2SLIM support set the BRD_TYPE parameter to 6 using a GCS connected on USB.

AP_Module Support

This is the first release of ArduPilot with loadable module support for Linux based boards. The AP_Module system allows for externally compiled modules to access sensor data from ArduPilot controlled sensors. The initial AP_Module support is aimed at vendors integrating high-rate digital image stabilisation using IMU data, but it is expected this will be expanded to other use cases in future releases.

Parrot Disco Support

This release adds support for the Parrot C.H.U.C.K autopilot in the new Disco airframe. The Disco is a very lightweight flying wing with a nicely integrated Linux based autopilot. The Disco flies very nicely with ArduPilot, bringing the full set of mission capabilities of ArduPilot to this airframe.

Major VRBrain Support Update

This release includes a major merge of support for the VRBrain family of autopilots. Many thanks to the great work by Luke Mike in putting together this merge!

Much Faster Boot Time

Boot times on Pixhawk are now much faster due to a restructuring of the driver startup code, with slow starting drivers not started unless they are enabled with the appropriate parameters. The restructuring also allows for support of a wide variety of board types, including the PH2SLIM above.

This release includes many other updates right across the flight stack, including several new features. Some of the changes include:

  • improved quadplane auto-landing
  • limit roll and pitch by Q_ANGLE_MAX in Q modes
  • improved ADSB avoidance and MAVLink streaming
  • smoother throttle control on fixed-wing to VTOL transition
  • removed "demo servos" movement on boot
  • fixed a problem with spurious throttle output during boot (thanks to Marco for finding this)
  • support MAVLink SET_ATTITUDE_TARGET message
  • log all rally points on startup
  • fixed use of stick mixing for rudder with STICK_MIXING=0
  • fixed incorrect tuning warnings when vtol not active
  • support MAVLink based external GPS device
  • support LED_CONTROL MAVLink message
  • prevent baro update while disarmed for large height change
  • support PLAY_TUNE MAVLink message
  • added AP_Button support for remote button input reporting
  • support Ping2020 ADSB transceiver
  • fixed disarm by rudder in quadplanes
  • support 16 channel SERVO_OUTPUT_RAW in MAVLink2
  • added automatic internal combustion engine support
  • support DO_ENGINE_CONTROL MAVLink message
  • added ground throttle suppression for quadplanes
  • added MAVLink reporting of logging subsystem health
  • prevent motor startup on reboot in quadplanes
  • added quadplane support for Advanced Failsafe
  • added support for a 2nd throttle channel
  • fixed bug in crash detection during auto-land flare
  • lowered is_flying groundspeed threshold to 1.5m/s
  • added support for new FrSky telemetry protocol variant
  • added support for fence auto-enable on takeoff in quadplanes
  • added Q_ASSIST_ANGLE for using quadplane to catch stalls in fixed wing flight
  • added BRD_SAFETY_MASK to allow for channel movement for selected channels with safety on
  • numerous improvements to multicopter stability control for quadplanes
  • support X-Plane10 as SITL backend
  • lots of HAL_Linux improvements to bus and thread handling
  • fixed problem with elevator use at high roll angles that could prevent attitude recovery from inverted flight
  • improved yaw handling in EKF2 near ground
  • added IMU heater support on Pixhawk2
  • allow for faster accel bias learning in EKF2
  • fixed in-flight yaw reset bug in EKF2
  • added AP_Module support for loadable modules
  • support Disco airframe from Parrot
  • use full throttle in initial takeoff in TECS
  • added NTF_LED_OVERRIDE support
  • added terrain based simulation in SITL
  • merged support for wide range of VRBrain boards
  • added support for PH2SLIM and PHMINI boards with BRD_TYPE
  • greatly reduced boot time on Pixhawk and similar boards
  • fixed magic check for signing key in MAVLink2
  • fixed averaging of gyros for EKF2 gyro bias estimate

Many thanks to the many people who have contributed to this release, and happy flying!

,

Google AdsenseThe Need for Mobile Speed

Cross-posted from the DoubleClick for Publishers Blog
Today, we’re excited to share insights from a new study on how mobile speed can impact user engagement and publisher revenue. As people’s expectations for mobile experiences have grown, simply loading on a mobile device is no longer enough. Mobile sites must be fast and relevant.
Unfortunately, based on our analysis of 10,000+ mobile web domains, we found that most mobile sites don’t meet this bar: the average load time for mobile sites is 19 seconds over 3G connections.1 That’s about as long as it takes to sing the entire alphabet song!2
Slow loading sites frustrate users and negatively impact publishers. While there are several factors that impact revenue, our model projects that publishers whose mobile sites load in 5 seconds earn up to 2x more mobile ad revenue than those whose sites load in 19 seconds.3 The study also observed 25% higher ad viewability4 and 70% longer average sessions5 for sites that load in 5 seconds vs 19 seconds.
That’s why we’ve been so focused on mobile-first solutions to help publishers succeed — from our participation in the nearly year old AMP project, to our launch of a scalable native advertising solution, to our investment in products that help publishers increase revenue while minimizing latency.
Never before has mobile speed been more important.


3...2...1… gone

Slow page load times are a big blocker:
  • 53% of visits are likely to be abandoned if pages take longer than 3 seconds to load6
  • One out of two people expect a page to load in less than 2 seconds7
  • 46% of people say that waiting for pages to load is what they dislike the most when browsing the web on mobile devices8

We all know this first hand — if you’re looking for something on your phone, how long will you wait if the page takes more than a few seconds to load?
The three major factors that slow down mobile sites are file size, the number of server requests, and the order in which the different elements of the page are loaded. We found:
  • The average size of the content on mobile sites is 1.49 MB, which takes 7 seconds to load over 3G connections9
  • Mobile pages make an average of 214 server requests, and nearly half of all server requests are ad-related10

Getting up to speed

There are many tools out there to help diagnose the problem and fix it. We recommend a 3-step process to speed up mobile sites:
  • Assess the current performance of the site using tools like PageSpeed Insights, Mobile-Friendly Test, and Web Page Test.
  • Execute changes that eliminate bulky content, reduce the number of server requests, and consolidate data and analytics tags. Switch up the element order and select the minimum number of pieces to show above the fold first — styling, javascript logic, and images accessed after the tap, scroll or swipe can be loaded later.
  • Monitor performance after making changes and run A/B tests to regularly audit the setup of your site, flagging and removing anything that adds latency.
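The "assess" step above amounts to totaling page weight and request counts. The sketch below illustrates the idea on a hypothetical list of fetched resources (e.g. exported from a HAR file) — the domains and byte counts are invented for illustration:

```python
# Illustrative sketch of auditing a page's resources: total the page
# weight, count requests, and flag how many are ad-related.

def audit_page(resources, ad_domains=("doubleclick.net", "adservice")):
    total_bytes = sum(r["bytes"] for r in resources)
    ad_requests = sum(1 for r in resources
                      if any(d in r["url"] for d in ad_domains))
    return {
        "requests": len(resources),
        "total_kb": round(total_bytes / 1024, 1),
        "ad_requests": ad_requests,
    }

# Hypothetical page inventory:
page = [
    {"url": "https://example.com/index.html", "bytes": 24_000},
    {"url": "https://example.com/hero.jpg",   "bytes": 480_000},
    {"url": "https://ads.doubleclick.net/px", "bytes": 2_000},
]
print(audit_page(page))  # {'requests': 3, 'total_kb': 494.1, 'ad_requests': 1}
```

Running a report like this before and after each change makes it easy to verify that a fix actually removed weight or requests rather than just shuffling them around.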

You should also investigate open source solutions like Accelerated Mobile Pages (AMP) and Progressive Web Apps.
To learn more about our study and the steps you can take to improve the experience on your mobile site, check out our guide, “The Need for Mobile Speed” [g.co/MobileSpeed]



1 Webpagetest.org, Sampled 11.8K global mWeb homepage domains loaded using a fast 3G connection timing first view only (no cached resources), February 2016
2 NPR, “Keep Flu At Bay With A Song”, April 2009
3 Google Data, Aggregated, anonymized Google Analytics and DoubleClick AdExchange data from a sample of mWeb sites opted into sharing benchmark data, n=4.5K, Global, June 2015 - May 2016
4 DoubleClick for Publishers, Google Active View ad viewability for 10.7K mWeb homepage domains with >70% measurable ad viewability, Global, February 2016
5 Google Data, Aggregated, anonymized Google Analytics data from a sample of mWeb sites opted into sharing benchmark data, n=3.5K, Global, March 2016
6 Google Data, Aggregated, anonymized Google Analytics data from a sample of mWeb sites opted into sharing benchmark data, n=3.7K, Global, March 2016
7 Akamai Technologies - 2014 Consumer Web Performance Expectations Survey
8 Google Webmaster Central Blog, "#MobileMadness: a campaign to help you go mobile-friendly", April, 2015
9 Webpagetest.org, Sampled 11.8K global mWeb homepage domains loaded using a fast 3G connection timing first view only (no cached resources), February 2016
10 Webpagetest.org, Sampled 11.8K global mWeb homepage domains loaded using a fast 3G connection timing first view only (no cached resources), February 2016

Cory DoctorowIf DRM is so great, why won’t anyone warn you when you’re buying it?

Last month, I filed comments with the Federal Trade Commission on behalf of Electronic Frontier Foundation, 22 of EFF’s supporters, and a diverse coalition of rightsholders, public interest groups, and retailers, documenting the ways that ordinary Americans come to harm when they buy products without realizing that these goods have been encumbered with DRM, and asking the FTC to investigate fair labeling for products that come with sneaky technological shackles.


In my latest Guardian column, DRM products are defective by design. Time to tell users what they’re buying, I describe the process by which we came to file, and what we’re hoping will come of it.

In our open letter on DRM labelling – a letter signed by a diverse coalition of rights holders, public interest groups, and publishers – we ask the FTC to take action to ensure that people know what they’re getting when they buy products encumbered with DRM. DRM-free publishers love this idea, because where DRM-labelling prevails, customers overwhelmingly favour DRM-free products.

But DRM-encumbered publishers should also love this, because they keep telling us that people don’t mind DRM. One significant challenge to DRM labelling is that the restrictions imposed by DRM can be incredibly complex – a video may play back on most manufacturers’ displays, but not all, and not at every resolution, and not if the video player believes that it is running in a virtual machine or has been relocated to a different country.

What’s more, most modern DRM is designed for “renewability” – which is a DRM-vendor euphemism for a remote kill-switch. These DRM tools phone home periodically for updates, and install these updates without user intervention, and then disable some or all of the features that were there when you bought the product.


DRM products are defective by design. Time to tell users what they’re buying
[The Guardian]

Krebs on SecurityIsraeli Online Attack Service ‘vDOS’ Earned $600,000 in Two Years

vDOS — a “booter” service that has earned in excess of $600,000 over the past two years helping customers coordinate more than 150,000 so-called distributed denial-of-service (DDoS) attacks designed to knock Web sites offline — has been massively hacked, spilling secrets about tens of thousands of paying customers and their targets.

The vDOS database, obtained by KrebsOnSecurity.com at the end of July 2016, points to two young men in Israel as the principal owners and masterminds of the attack service, with support services coming from several young hackers in the United States.

The vDos home page.


To say that vDOS has been responsible for a majority of the DDoS attacks clogging up the Internet over the past few years would be an understatement. The various subscription packages to the service are sold based in part on how many seconds the denial-of-service attack will last. And in just four months between April and July 2016, vDOS was responsible for launching more than 277 million seconds of attack time, or approximately 8.81 years worth of attack traffic.

Let the enormity of that number sink in for a moment: That’s nearly nine of what I call “DDoS years” crammed into just four months. That kind of time compression is possible because vDOS handles hundreds — if not thousands — of concurrent attacks on any given day.

Although I can’t prove it yet, it seems likely that vDOS is responsible for several decades worth of DDoS years. That’s because the data leaked in the hack of vDOS suggest that the proprietors erased all digital records of attacks that customers launched between Sept. 2012 (when the service first came online) and the end of March 2016.

HOW vDOS GOT HACKED

The hack of vDOS came about after a source was investigating a vulnerability he discovered on a similar attack-for-hire service called PoodleStresser. The vulnerability allowed my source to download the configuration data for PoodleStresser’s attack servers, which pointed back to api.vdos-s[dot]com. PoodleStresser, as well as a large number of other booter services, appears to rely exclusively on firepower generated by vDOS.

From there, the source was able to exploit a more serious security hole in vDOS that allowed him to dump all of the service’s databases and configuration files, and to discover the true Internet address of four rented servers in Bulgaria (at Verdina.net) that are apparently being used to launch the attacks sold by vDOS. The DDoS-for-hire service is hidden behind DDoS protection firm Cloudflare, but its actual Internet address is 82.118.233.144.

vDOS had a reputation on cybercrime forums for prompt and helpful customer service, and the leaked vDOS databases offer a fascinating glimpse into the logistical challenges associated with running a criminal attack service online that supports tens of thousands of paying customers — a significant portion of whom are all trying to use the service simultaneously.

Multiple vDOS tech support tickets were filed by customers who complained that they were unable to order attacks on Web sites in Israel. Responses from the tech support staff show that the proprietors of vDOS are indeed living in Israel and in fact set the service up so that it was unable to attack any Web sites in that country — presumably so as to not attract unwanted attention to their service from Israeli authorities. Here are a few of those responses:

(‘4130′,’Hello `d0rk`,\r\nAll Israeli IP ranges have been blacklisted due to security reasons.\r\n\r\nBest regards,\r\nP1st.’,’03-01-2015 08:39),

(‘15462′,’Hello `g4ng`,\r\nMh, neither. I\’m actually from Israel, and decided to blacklist all of them. It\’s my home country, and don\’t want something to happen to them :)\r\n\r\nBest regards,\r\nDrop.’,’11-03-2015 15:35),

(‘15462′,’Hello `roibm123`,\r\nBecause I have an Israeli IP that is dynamic.. can\’t risk getting hit/updating the blacklist 24/7.\r\n\r\nBest regards,\r\nLandon.’,’06-04-2015 23:04),

(‘4202′,’Hello `zavi156`,\r\nThose IPs are in israel, and we have all of Israel on our blacklist. Sorry for any inconvinience.\r\n\r\nBest regards,\r\nJeremy.’,’20-05-2015 10:14),

(‘4202′,’Hello `zavi156`,\r\nBecause the owner is in Israel, and he doesn\’t want his entire region being hit offline.\r\n\r\nBest regards,\r\nJeremy.’,’20-05-2015 11:12),

(‘9057′,’There is a option to buy with Paypal? I will pay more than $2.5 worth.\r\nThis is not the first time I am buying booter from you.\r\nIf no, Could you please ask AplleJack? I know him from Israel.\r\nThanks.’,’21-05-2015 12:51),

(‘4120′,’Hello `takedown`,\r\nEvery single IP that\’s hosted in israel is blacklisted for safety reason. \r\n\r\nBest regards,\r\nAppleJ4ck.’,’02-09-2015 08:57),

WHO RUNS vDOS?

As we can see from the above responses from vDOS’s tech support, the owners and operators of vDOS are young Israeli hackers who go by the names P1st a.k.a. P1st0, and AppleJ4ck. The two men market their service mainly on the site hackforums[dot]net, selling monthly subscriptions using multiple pricing tiers ranging from $20 to $200 per month. AppleJ4ck hides behind the same nickname on Hackforums, while P1st goes by the alias “M30w” on the forum.

Some of P1st/M30W's posts on Hackforums regarding his service vDOS.


vDOS appears to be the longest-running booter service advertised on Hackforums, and it is by far and away the most profitable such business. Records leaked from vDOS indicate that since July 2014, tens of thousands of paying customers spent a total of more than $618,000 at the service using Bitcoin and PayPal.

Incredibly, for brief periods the site even accepted credit cards in exchange for online attacks, although it’s unclear how much the site might have made in credit card payments because the information is not in the leaked databases.

The Web server hosting vDOS also houses several other sites, including huri[dot]biz, ustress[dot]io, and vstress[dot]net. Virtually all of the administrators at vDOS have an email account that ends in v-email[dot]org, a domain that also is registered to an Itay Huri with a phone number that traces back to Israel.

The proprietors of vDOS set their service up so that anytime a customer asked for technical assistance the site would blast a text message to six different mobile numbers tied to administrators of the service, using an SMS service called Nexmo.com. Two of those mobile numbers go to phones in Israel. One of them is the same number listed for Itay Huri in the Web site registration records for v-email[dot]org; the other belongs to an Israeli citizen named Yarden Bidani. Neither individual responded to requests for comment.

The leaked database and files indicate that vDOS uses Mailgun for email management, and the secret keys needed to manage that Mailgun service were among the files stolen by my source. The data shows that vDOS support emails go to itay@huri[dot]biz, itayhuri8@gmail.com and raziel.b7@gmail.com.

LAUNDERING THE PROCEEDS FROM DDOS ATTACKS

The $618,000 in earnings documented in the vDOS leaked logs is almost certainly a conservative income figure. That’s because the vDOS service actually dates back to Sept 2012, yet the payment records are not available for purchases prior to 2014. As a result, it’s likely that this service has made its proprietors more than $1 million.

vDOS does not currently accept PayPal payments. But for several years until recently it did, and records show the proprietors of the attack service worked assiduously to launder payments for the service through a round-robin chain of PayPal accounts.

They did this because at the time PayPal was working with a team of academic researchers to identify, seize and shutter PayPal accounts that were found to be accepting funds on behalf of booter services like vDOS. Anyone interested in reading more on their success in making life harder for these booter service owners should check out my August 2015 story, Stress-Testing the Booter Services, Financially.

People running dodgy online services that violate PayPal’s terms of service generally turn to several methods to mask the true location of their PayPal Instant Payment Notification systems. Here is an interesting analysis of how popular booter services are doing so using shell corporations, link shortening services and other tricks.

Turns out, AppleJ4ck and p1st routinely recruited other forum members on Hackforums to help them launder significant sums of PayPal payments for vDOS each week.

“The paypals that the money are sent from are not verified,” AppleJ4ck says in one recruitment thread. “Most of the payments will be 200$-300$ each and I’ll do around 2-3 payments per day.”

vDos co-owner AppleJ4ck recruiting Hackforums members to help launder PayPal payments for his booter service.


It is apparent from the leaked vDOS logs that in July 2016 the service’s owners implemented an additional security measure for Bitcoin payments, which they accept through Coinbase. The data shows that they now use an intermediary server (45.55.55.193) to handle Coinbase traffic. When a Bitcoin payment is received, Coinbase notifies this intermediary server, not the actual vDOS servers in Bulgaria.

A server situated in the middle and hosted at a U.S.-based address from Digital Ocean then updates the database in Bulgaria, perhaps because the vDOS proprietors believed payments from the USA would attract less interest from Coinbase than huge sums traversing through Bulgaria each day.

ANALYSIS

The extent to which the proprietors of vDOS went to launder profits from the service and to obfuscate their activities clearly indicate they knew that the majority of their users were using the service to knock others offline.

Defenders of booter and stresser services argue the services are legal because they can be used to help Web site owners stress-test their own sites and to build better defenses against such attacks. While it’s impossible to tell what percentage of vDOS users actually were using the service to stress-test their own sites, the leaked vDOS logs show that a huge percentage of the attack targets are online businesses.

In reality, the methods that vDOS uses to sustain its business are practically indistinguishable from those employed by organized cybercrime gangs, said Damon McCoy, an assistant professor of computer science at New York University.

“These guys are definitely taking a page out of the playbook of the Russian cybercriminals,” said McCoy, the researcher principally responsible for pushing vDOS and other booter services off of PayPal (see the aforementioned story Stress-Testing the Booter Services, Financially for more on this).

“A lot of the Russian botnet operators who routinely paid people to infect Windows computers with malware used to say they wouldn’t buy malware installs from Russia or CIS countries,” McCoy said. “The main reason was they didn’t want to make trouble in their local jurisdiction in the hopes that no one in their country would be a victim and have standing to bring a case against them.”

The service advertises attacks at up to 50 gigabits of data per second (Gbps). That’s roughly the equivalent of trying to cram two high-definition Netflix movies down a target’s network pipe all at the same moment.

But Allison Nixon, director of security research at business risk intelligence firm Flashpoint, said her tests of vDOS’s service generated attacks that were quite a bit smaller than that — 14 Gbps and 6 Gbps. Nevertheless, she noted, even an attack that generates just 6 Gbps is well more than enough to cripple most sites which are not already protected by anti-DDoS services.

And herein lies the rub with services like vDOS: They put high-powered, point-and-click cyber weapons in the hands of people — mostly young men in their teens — who otherwise wouldn’t begin to know how to launch such attacks. Worse still, they force even the smallest of businesses to pay for DDoS protection services or else risk being taken offline by anyone with a grudge or agenda.

“The problem is that this kind of firepower is available to literally anyone willing to pay $30 a month,” Nixon said. “Basically what this means is that you must have DDoS protection to participate on the Internet. Otherwise, any angry young teenager is going to be able to take you offline in a heartbeat. It’s sad, but these attack services mean that DDoS protection has become the price of admission for running a Web site these days.”

Stay tuned for the next piece in this series on the hack of vDOS, which will examine some of the more interesting victims of this service.

Google Adsense[Video] Adopt these 3 key strategies to grow your online business

Whether you're new to running a blog or an experienced website pro, check out this video featuring David, Oisin, and Raj from the AdSense team, to learn about the best ways to monetize your online content.

Did you know that 68% of users share online content to give people a better sense of who they are and what they care about? Now, more than ever, it’s important that your audience loves your content. In the video below, we share 3 key strategies for you to boost audience engagement and jump start your earnings using AdSense.

  1. Relate to your users’ interests
  2. Diversify your content strategy
  3. Make your content easy to consume

Watch our video to get started today. 





Not using AdSense yet? Once you’ve watched the video, here’s how you can get started:

  1. Make sure your website is compliant with the AdSense program policies.
  2. Sign up for an AdSense account by enrolling your site.
  3. Add the AdSense ad code to your site.

Planet Linux AustraliaColin Charles: Speaking at Percona Live Europe Amsterdam

I’m happy to speak at Percona Live Europe Amsterdam 2016 again this year (just look at the awesome schedule). On my agenda:

I’m also signed up for the Community Dinner @ Booking.com, and I reckon you should as well – only 35 spots remain!

Go ahead and register now. You should be able to search Twitter or the Percona blog for discount codes :-)

Worse Than FailureRepresentative Line: Pointless Revenge

We write a lot about unhealthy workplaces. We, and many of our readers, have worked in such places. We know what it means to lose our gruntle (becoming disgruntled). Some of us, have even been tempted to do something vengeful or petty to “get back” at the hostile environment.

Milton from 'Office Space' does not receive any cake during a birthday celebration. He looks on, forlornly, while everyone else in the office enjoys cake.

But none of us actually have done it (I hope?). It’s self-defeating, it doesn’t actually make anything better, and even if the place where we work isn’t professional, we are. While it’s a satisfying fantasy, the reality wouldn’t be good for anyone. We know better than that.

Well, most of us know better than that. Harris M’s company went through a round of layoffs while flirting with bankruptcy. It was a bad time to be at the company, no one knew if they’d have a job the next day. Management constantly issued new edicts, before just as quickly recanting them, in a panicked case of somebody-do-something-itis. “Bob” wasn’t too happy with the situation. He worked on a reporting system that displayed financial data. So he hid this line in one of the main include files:

#define double float
//Kind Regards, Bob

This created some subtle bugs. It was released, and it was months before anyone noticed that the reports weren’t exactly reconciling with the real data. Bob was long gone, by that point, and Harris had to clean up the mess. For a company struggling to survive, it didn’t help or improve anything. But I’m sure Bob felt better.
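The damage from a trick like that is easy to demonstrate. The #define silently narrows every double in the translation unit to single precision; the sketch below simulates that narrowing with struct to show how quickly cents go missing when single-precision floats accumulate financial values (illustrative only — Bob's codebase was C, and the dollar amounts here are invented):

```python
import struct

def as_float32(x):
    """Round a Python float (IEEE double) to single precision,
    simulating the effect of '#define double float'."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Sum 100,000 transactions of $1.01 in double vs. simulated float32.
double_total = 0.0
float_total = 0.0
for _ in range(100_000):
    double_total += 1.01
    float_total = as_float32(float_total + as_float32(1.01))

print(f"double: {double_total:.2f}")  # ~101000.00
print(f"float:  {float_total:.2f}")   # noticeably off — dollars, not cents
```

Once the running total grows large, each $1.01 addition falls between representable float32 values and gets rounded, so the error is systematic rather than random — exactly the kind of drift that shows up months later as reports that won't reconcile.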

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

TEDSylvia Earle introduces President Obama to a newly discovered fish named after him

The United States has long been known for its national parks. But last month, Barack Obama created a single marine reserve that covers significantly more area than all of them, combined.

On August 26, 2016, Obama expanded the Papahānaumokuākea Marine National Monument to 582,578 square miles around the northwestern islands of Hawaii. The monument was established in 2006 by George Bush, and Obama — who grew up in Hawaii — just quadrupled its size, making it the world’s largest marine protected area.

After a trip to Honolulu to address the IUCN World Conservation Congress, Obama met legendary oceanographer Sylvia Earle on the beach of Midway Atoll last Thursday to admire a small section of the newly expanded reserve. With the 2009 TED Prize, Earle wished to ignite public support for marine protected areas, then less than 1% of the world’s oceans. Obama applauded her efforts so far. “I am in awe of anybody who has done so much for ocean conservation,” he said. “You’ve done amazing work.”

Today, about 4% of the world’s oceans are protected. Earle hopes to increase that to 20% by 2020, because marine protected areas are key for improving resilience to climate change and ensuring biodiversity. Papahānaumokuākea, for example, is home to more than 7,000 species, including the endangered Hawaiian monk seal and black corals believed to be more than 4,000 years old.

The reserve also contains a new species just discovered in June by ichthyologist Richard Pyle (watch his TED Talk: “A dive into the reef’s twilight zone”). A member of the genus Tosanoides, this red and yellow fish is the first member of its species found outside Japanese waters, and the males have an unusual red and blue mark on their dorsal fins. This species will be named for Obama, because he created the reserve — and because the mark is reminiscent of his campaign logo. The fish’s official name will be released in print this fall when Pyle and colleagues publish their research. But Obama, being the president, got a sneak peek.

On the beach together, Earle showed the president an image of the newly discovered fish. Obama stumbled on the name, but said, “That’s a nice-looking fish.”

Earle is at the IUCN World Conservation Congress this week, and the meeting of global leaders will continue through September 10. It began just after US National Park System celebrated its 100-year anniversary, and marine protection will stay a centerpiece of the conversation.

“History will remember this anniversary and next century as the ‘blue centennial,’” Earle said. “The time when the national park idea was brought to the ocean.”


Krebs on SecurityThe Limits of SMS for 2-Factor Authentication

A recent ping from a reader reminded me that I’ve been meaning to blog about the security limitations of using cell phone text messages for two-factor authentication online. The reader’s daughter had received a text message claiming to be from Google, warning that her Gmail account had been locked because someone in India had tried to access her account. The young woman was advised to expect a 6-digit verification code to be sent to her and to reply to the scammer’s message with that code.

Mark Cobb, a computer technician in Reno, Nev., said that had his daughter fallen for the ruse, her Gmail account would indeed have been completely compromised, and she really would have been locked out of her account because the crooks would have changed her password straight away.

Cobb’s daughter received the scam text message because she’d enabled 2-factor authentication on her Gmail account, selecting the option to have Google request that she enter a 6-digit code texted to her cell phone each time it detects a login from an unknown computer or location (in practice, the code is to be entered on the Gmail site, not sent in any kind of texted or emailed reply).

In this case, the thieves already had her password — most likely because she re-used it on some other site that got hacked. Cobb says he and his daughter believe her mobile number and password may have been exposed as part of the 2012 breach at LinkedIn.

In any case, the crooks were priming her to expect a code and to repeat it back to them because that code was the only thing standing in the way of their seizing control over her account. And they could control when Google would send the code to her phone because Google would do this as soon as they tried to log in using her username and password. Indeed, the timing aspect of this attack helps make it more believable to the target.

This is a fairly clever — if not novel — attack, and it’s one I’d wager would likely fool a decent percentage of users who have enabled text messages as a form of two-factor authentication. Certainly, text messaging is far from the strongest form of 2-factor authentication, but it is better than allowing a login with nothing more than a username and password, as this scam illustrates.

Nevertheless, text messaging codes to users isn’t the safest way to do two-factor authentication, even if some entities — like the U.S. Social Security Administration and Sony’s Playstation network — are just getting around to offering two-factor via SMS.

But don’t take my word for it. That’s according to the National Institute of Standards and Technology (NIST), which recently issued new proposed digital authentication guidelines urging organizations to favor other forms of two-factor — such as time-based one-time passwords generated by mobile apps — over text messaging. By the way, NIST is seeking feedback on these recommendations.

If anyone’s interested, Sophos’s Naked Security blog has a very readable breakdown of what’s new in the NIST guidelines. Among my favorite highlights is this broad directive: Favor the user.

“To begin with, make your password policies user friendly and put the burden on the verifier when possible,” Sophos’s Chester Wisniewski writes. “In other words, we need to stop asking users to do things that aren’t actually improving security.” Like expiring passwords and making users change them frequently, for example.

Okay, so the geeks-in-chief are saying it’s time to move away from texting as a form of 2-factor authentication. And, of course, they’re right, because text messages are a lot like email, in that it’s difficult to tell who really sent the message, and the message itself is sent in plain text — i.e. is readable by anyone who happens to be lurking in the middle.

But security experts and many technology enthusiasts have a tendency to think that everyone should see the world through the lens of security, whereas most mere mortal users just want to get on with their lives and are perfectly content to use the same password across multiple sites — regardless of how many times they’re told not to do so.


Google’s new push-based two-factor authentication system. Image: Google.

Indeed, while many more companies now offer some form of two-factor authentication than did two or three years ago — consumer adoption of this core security feature remains seriously lacking. For example, the head of security at Dropbox recently told KrebsOnSecurity that less than one percent of its user base of 500 million registered users had chosen to turn on 2-factor authentication for their accounts. And Dropbox isn’t exactly a Johnny-come-lately to the 2-factor party: It has been offering 2-factor logins for a full four years now.

I doubt Dropbox is somehow an aberration in this regard, and it seems likely that other services also suffer from single-digit two-factor adoption rates. But if more consumers haven’t enabled two-factor options, it’s probably because a) it’s still optional and b) it still demands too much caring and understanding from the user about what’s going on and how these security systems can be subverted.

Personally, I favor app-based time-based one-time password (TOTP) systems like Google Authenticator, a mobile app that continuously auto-generates a unique code.
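Those continuously changing codes are easy to demystify: TOTP (RFC 6238) is just an HMAC of the number of 30-second intervals elapsed since the Unix epoch, keyed with a secret the app and the server share at enrollment. Here is a minimal sketch in Python using only the standard library; the secret below is the RFC's published test value, not anything a real service would use:

```python
import hmac
import hashlib
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code (HMAC-SHA-1 variant)."""
    if for_time is None:
        for_time = time.time()
    # Count how many `step`-second intervals have elapsed since the epoch.
    counter = int(for_time) // step
    # HMAC the big-endian 64-bit counter with the shared secret.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last digest byte, masking the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: at Unix time 59, the 8-digit code is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

The server stores the same secret and runs the same computation (usually also accepting the codes for adjacent time steps to tolerate clock drift), so nothing ever has to travel over SMS — which is precisely what makes this scheme immune to the text-interception and text-phishing problems described above.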

Google recently went a step further along the lines of where I’d like to see two-factor headed across the board, by debuting a new “push” authentication system that generates a prompt on the user’s mobile device that users need to tap to approve login requests. This is very similar to another push-based two-factor system I’ve long used and trusted — from Duo Security [full disclosure: Duo is an advertiser on this site].

For a comprehensive breakdown of which online services offer two-factor authentication and of what type, check out twofactorauth.org. And bear in mind that even if text-based authentication is all that’s offered, that’s still better than nothing. What’s more, it’s still probably more security than the majority of the planet has protecting their accounts.

,

Sociological ImagesBotox, Gender, and the Emotional Lobotomy

Botox has forever transformed the primordial battleground against aging. Since the FDA approved it for cosmetic use in 2002, eleven million Americans have used it. Over 90 percent of them are women.

In my forthcoming book, Botox Nation, I argue that one of the reasons Botox is so appealing to women is that the wrinkles Botox is designed to “fix,” those disconcerting creases between our brows, are precisely the lines we use to express negative emotions: anger, bitchiness, irritation. Botox is injected into the corrugator supercilii muscles, the facial muscles that allow us to pull our eyebrows together and push them down. By paralyzing these muscles, Botox prevents this brow-lowering action, and in so doing inhibits our ability to scowl, an expression we use to project to the world that we are aggravated or pissed off.


Sociologists have long speculated about the meaning of human faces for social interaction. In the 1950s, Erving Goffman developed the concept of facework to refer to the ways that human faces act as a template to invoke, process, and manage emotions. A core feature of our physical identity, our faces provide expressive information about our selves and how we want our identities to be perceived by others.

Given that our faces are mediums for processing and negotiating social interaction, it makes sense that Botox’s effect on facial expression would be particularly enticing to women, who from early childhood are taught to project cheerfulness and to disguise unhappiness. Male politicians and CEOs, for example, are expected to look pissed off, stern, and annoyed. However, when Hillary Clinton displays these same expressions, she is chastised for being unladylike, as undeserving of the male gaze, and criticized for disrupting the normative gender order. Women more so than men are penalized for looking speculative, judgmental, angry, or cross.

Nothing demonstrates this more than the recent viral pop-cultural idiom “resting bitch face.” For those unfamiliar with the not-so-subtly sexist phrase, “resting bitch face,” according to the popular site Urban Dictionary, is “a person, usually a girl, who naturally looks mean when her face is expressionless, without meaning to.” This same site defines its etymological predecessor, “bitchy resting face,” as “a bitchy alternative to the usual blank look most people have. This is a condition affecting the facial muscles, suffered by millions of women worldwide. People suffering from bitchy resting face (BRF) have the tendency look hostile and/or judgmental at rest.”

Resting bitch face and its linguistic cousin are nowhere near gender neutral. There is no name for men’s serious, pensive, and reserved expressions because we allow men these feelings. When a man looks severe, serious, or grumpy, we assume it is for good reason. But women are always expected to be smiling, aesthetically pleasing, and compliant. To do otherwise would be to fail to subordinate our own emotions to those of others, and this would upset the gendered status quo.

This is what the sociologist Arlie Russell Hochschild calls “emotional labor,” a type of impression management that involves manipulating one’s feelings to transmit a certain impression. In her now-classic study on flight attendants, Hochschild documented how part of the occupational script was for flight attendants to create and maintain the façade of positive appearance, revealing the highly gendered ways we police social performance. The facework involved in projecting cheerfulness and always smiling requires energy and, as any woman is well aware, can become exhausting. Hochschild recognized this and saw emotional labor as a form of exploitation that could lead to psychological distress. She also predicted that showing dissimilar emotions from those genuinely felt would lead to alienation from one’s feelings.

Enter Botox—a product that can seemingly liberate the face from its resting bitch state, producing a flattening of affect where the act of appearing introspective, inquisitive, perplexed, contemplative, or pissed off can be effaced and prevented from leaving a lasting impression. One reason Botox may be especially appealing to women is that it can potentially relieve them from having to work so hard to police their expressions.

Even more insidiously, Botox may actually change how women feel. Scientists have long suggested that facial expressions, like frowning or smiling, can influence emotion by contributing to a range of bodily changes that in turn produce subjective feelings. This theory, known in psychology as the “facial feedback hypothesis,” proposes that expression intensifies emotion, whereas suppression softens it. It follows that blocking negative expressions with Botox injections should offer some protection against negative feelings. A study confirmed the hypothesis.

Taken together, this work points to some of the principal attractions of Botox for women. Functioning as an emotional lobotomy of sorts, Botox can emancipate women from having to vigilantly police their facial expressions and actually reduce the negative feelings that produce them, all while simultaneously offsetting the psychological distress of alienation.

Dana Berkowitz is a professor of sociology at Louisiana State University in Baton Rouge, where she teaches about gender, sexuality, families, and qualitative methods. Her book, Botox Nation: Changing the Face of America, will be out in January and can be pre-ordered now.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureA Painful Migration

Database models

In most companies, business growth leads to greater organizational complexity. With more clients to juggle, owners must increasingly delegate less important tasks to a growing pool of employees and lower management. With time, the org charts grow from simple diagrams to poster-sized trees. Departments and SBUs become separate entities. What was once a three-man startup morphs into the enterprise behemoth we all know and love.

For Vandelay Books, however, this was not the case. Despite becoming one of the largest book distributors in the state, the owners—a husband and wife pair of successful entrepreneurs—kept a firm grip on every single aspect of the business. While it helped to alleviate many of the problems found in large enterprises, it also meant several departments were severely understaffed and barely managed. The internal software department, in particular, had consisted of a single developer and an occasional intern or contractor ever since the company started operating.

While it looked like a recipe for disaster, Vandelay Books had two redeeming features: they were hiring, and paying handsomely. For desperate George, who'd nearly exhausted his unemployment emergency fund, all it took was to shake hands with the couple and sign the contract. From there, it was on to a brighter future, assisting with the migration of the company's software suite from an ancient and unsupported database to something more modern.

After setting up his desk and workstation, the owners led George to a grey-haired, scruffy man sitting at the other end of the room.

"This is Doug, our lead developer," the husband introduced.

"Pleasure to meet you." Doug stood and shook George's hand, smiling from ear to ear. "Have you settled in already?"

"I think so, yes," George said. "All I need is a copy of the database to work with."

"I'll get it for you as soon as possible." Doug turned towards his PC and started typing.

After exchanging a few more words with the owners, George left for his desk, expecting the database copy to be waiting in his inbox.

An hour later, George had grown impatient. What's taking him so long? he wondered. It shouldn't take more than a few minutes to run a build script.

He decided to remind Doug about the copy. Doug was at his desk, furiously whacking at the keyboard and grinning to himself.

"Hi, how's that database coming along?" George asked, trying to hide his irritation.

"Almost done!" Doug took his hands off the keyboard, his lips still curved in a beaming smile. "Sorry to keep you waiting, there's a lot of tables in here."

"What do you mean, lots of ...?" George began, but a quick glance over Doug's shoulder answered his question. Instead of a shell window or a database IDE, Doug's display consisted of nothing but a large Notepad window, with the cursor placed in the middle of an unfinished CREATE TABLE statement.

No wonder it takes so long when you're typing the entire database out! George barely held back from screaming at his coworker. Instead, he stepped away as casually as possible and opened his IDE, morbidly anticipating the horrors lurking in the codebase.

A quick skim through the code made George realize why Doug was always smiling. It was the blissful smile of complete and utter ignorance, the smile of someone who'd risen far beyond their level of incompetence and was now eternally grateful for every day things didn't fall apart.

And the code looked like it could fall apart any minute. Over 300,000 lines had been thrown together without rhyme or reason. Obviously, Doug hadn't heard of such modern concepts as "layers" or "structured code," instead opting to hack things together as he went along. Windows API calls, business code, inline strings and pieces of SQL—everything was shoved together wherever it would stick, creating the programming equivalent of a Katamari.

George sat there, pondering all the wrong decisions in his life that'd led to this Big Ball of Mud, until Doug appeared beside him and shook him out of his stupor.

"Oh, I see you're already looking at the code!" Doug said. "It's not that hard to understand, really. I even have a few flowcharts that could help you out! Anyway, you just need to go through each of these commands, one by one—remember, it's not really SQL—like here, when it says SELECT with no FROM like this? It's actually a DELETE. And so on. Simple, isn't it?"

His head spinning, George decided to risk it. "Actually, I was thinking we could structure it a little. Separate those calls out, make a few functions that read or insert records—"

"I beg your pardon?" Doug's smile faded, giving way to the surprised look of a deer in headlights.

"I mean ... uh, never mind."

Sure, the migration would take a hundred times longer Doug's way—but as long as the paychecks cleared, it wasn't worth it to George to fix the unfixable.


Days passed slowly at Vandelay Books, and George's heroic efforts slowly paid off. The code was still terrible despite numerous attempts to improve it when Doug wasn't looking, and the migration wasn't even close to being completed, but George could finally pay his bills and refill his accounts. Once in a while, the owners would stop by for a friendly chat. Between that and the relaxed atmosphere, George began to enjoy the company, if not the job he was tasked with.

Eventually, during one of the conversations with the owners, George felt confident enough to mention that there was a way to get the migration done faster and more efficiently. He hoped they'd be able to convince Doug to let him have more freedom with refactoring the code, or at least fixing some of the most offensive spots.

Instead, all he got were puzzled looks and a long, uncomfortable silence.

The next day, the husband approached him as soon as he entered the office.

"George." The owner's voice was dry and stern. "We've discussed what you said yesterday with Doug, and we've decided we won't be needing your services anymore. Please clear out your desk by today."

George didn't bother arguing. He quietly packed his things, said his goodbyes, and headed back home to polish his resume again. And although he soon found a job at a much more sanely managed company, he often wondered if Doug were still migrating the application one query at a time—and whether he was still able to smile.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Krebs on SecurityCongressional Report Slams OPM on Data Breach

The massive data breach at the U.S. Office of Personnel Management (OPM) that exposed background investigations and fingerprint data on millions of Americans was the result of a cascading series of cybersecurity blunders from the agency’s senior leadership on down to the outdated technology used to secure the sensitive data, according to a lengthy report released today by a key government oversight panel.

OPM offices in Washington, DC. Image: Flickr.


The 241-page analysis, commissioned by the U.S. House Oversight & Government Reform Committee, blames OPM for jeopardizing U.S. national security for more than a generation.

The report offers perhaps the most exhaustive accounting and timeline of the breach since it was first publicly disclosed in mid-2015. According to the document, the lax state of OPM’s information security left the agency’s information systems exposed for any experienced hacker to infiltrate and compromise.

“The agency’s senior leadership failed to fully comprehend the extent of the compromise, allowing the hackers to remove manuals and other sensitive materials that essentially provided a roadmap to the OPM IT environment and key users for potential compromise,” the report charges.

Probably the most incisive portion of the assessment is the timeline of major events in the breach, which details a series of miscalculations on the part of the OPM leadership. The analysis paints the picture of a chronic — almost willful — underestimation by senior leadership at OPM about the seriousness of the threat facing the agency, until it was too late.

According to the report, the OPM first learned something was amiss on March 20, 2014, when the US-CERT notified the agency of data being exfiltrated from its network. In the ensuing weeks, OPM worked with US-CERT to implement a strategy to monitor the attackers’ movements to gather counterintelligence.

The only problem with this plan, according to the panel, was that the agency erroneously believed it had cornered the intruder. However, the hacker that OPM and US-CERT had eyes on wasn’t alone. While OPM monitored the first hacker [referred to in the report only as Hacker X1], on May 7, 2014, another hacker posed as an employee of an OPM contractor (KeyPoint) performing background investigations. That intruder, referred to as Hacker X2, used the contractor’s OPM credentials to log into the OPM system, install malware and create a backdoor to the network.

As the agency monitored Hacker X1’s movements through the network, the committee found, it noticed hacker X1 was getting dangerously close to the security clearance background information. OPM, in conjunction with DHS, quickly developed a plan to kick Hacker X1 out of its system. It termed this remediation “the Big Bang.” At the time, the agency was confident the planned remediation effort on May 27, 2014 eliminated Hacker X1’s foothold on their systems.

The decision to execute the Big Bang plan was made after OPM observed the attacker load keystroke logging malware onto the workstations of several database administrators, the panel found.

“But Hacker X2, who had successfully established a foothold on OPM’s systems and had not been detected due to gaps in OPM’s security posture, remained in OPM’s systems post-Big Bang,” the report notes.

On June 5, malware was successfully installed on a KeyPoint Web server. After that, X2 moved around OPM’s system until July 29, 2014, when the intruders registered opmlearning.org — a domain the attackers used as a command-and-control center to manage their malware operations.

From July through August 2014, Hacker X2 exfiltrated the security clearance background investigation files. Then in December 2014, 4.2 million personnel records were exfiltrated.

On March 3, 2015, wdc-news-post[dot]com was registered by the attackers, who used it as a command-and-control network. On March 26, 2015, the intruders begin stealing fingerprint data.

The committee found that had the OPM implemented basic, required security controls and more expeditiously deployed cutting edge security tools when they first learned hackers were targeting such sensitive data, they could have significantly delayed, potentially prevented, or significantly mitigated the theft.

For example, “OPM’s adoption of two-factor authentication for remote logons in early 2015, which had long been required of federal agencies, would have precluded continued access by the intruder into the OPM network,” the panel concluded.

Unfortunately, the exact details on how and when the attackers gained entry and established a persistent presence in OPM’s network are not entirely clear, the committee charges.

“This is in large part due to sloppy cyber hygiene and inadequate security technologies that left OPM with reduced visibility into the traffic on its systems,” the report notes. “The data breach by Hacker X1 in 2014 should have sounded a high level, multi-agency national security alarm that a sophisticated, persistent actor was seeking to access OPM’s highest-value data. It wasn’t until April 15, 2015 that the OPM identified the first indicator that its systems were compromised by Hacker X2.”

The information stolen in the breach included detailed files and personal background reports on more than 21.5 million individuals, and fingerprint data on 5.6 million of these individuals. Those security clearance background reports often included extremely sensitive information, such as whether applicants had consulted with a health care professional regarding an emotional or mental health condition; illegally used any drugs or controlled substances; or experienced financial problems due to gambling.

The intrusion, widely attributed to hackers working with the Chinese government, likely pointed out which federal employees working for the U.S. State Department were actually spies trained by the U.S. Central Intelligence Agency. That’s because — unlike most federal agencies — the CIA conducted its own background checks on potential employees, and did not manage the process through the OPM.

As The Washington Post pointed out in September 2015, the CIA ended up pulling a number of officers from its embassy in Beijing in the wake of the OPM breach, mainly because the data leaked in the intrusion would have let the Chinese government work out which State Department employees stationed there were not listed in the background check data stolen from the OPM.

As bad and as total as the OPM breach has been, it’s remarkable how few security experts I’ve heard raise the issue of what might be at stake if the OPM plunderers had not simply stolen data, but also manipulated it.

Not long after congressional hearings began on the OPM breach, I heard from a source in the U.S. intelligence community who wondered why nobody was asking this question: If the attackers could steal all of this sensitive data and go undetected for so long, could they not also have granted security clearances to people who not only didn’t actually warrant them, but who might have been recruited in advance to work for the attackers? To this date, I’ve not heard a good answer to this question.

A copy of the 110 MB report is available here (PDF).

,

CryptogramInternet Disinformation Service for Hire

Yet another leaked catalog of Internet attack services, this one specializing in disinformation:

But Aglaya had much more to offer, according to its brochure. For eight- to 12-week campaigns costing €2,500 per day, the company promised to "pollute" internet search results and social networks like Facebook and Twitter "to manipulate current events." For this service, which it labelled "Weaponized Information," Aglaya offered "infiltration," "ruse," and "sting" operations to "discredit a target" such as an "individual or company."

"[We] will continue to barrage information till it gains 'traction' & top 10 search results yield a desired results on ANY Search engine," the company boasted as an extra "benefit" of this service.

Aglaya also offered censorship-as-a-service, or Distributed Denial of Service (DDoS) attacks, for only €600 a day, using botnets to "send dummy traffic" to targets, taking them offline, according to the brochure. As part of this service, customers could buy an add-on to "create false criminal charges against Targets in their respective countries" for a more costly €1 million.

[...]

Some of Aglaya's offerings, according to experts who reviewed the document for Motherboard, are likely to be exaggerated or completely made-up. But the document shows that there are governments interested in these services, which means there will be companies willing to fill the gaps in the market and offer them.

TEDUV light for gene editing, the unfinished business of gender equality, and a new method for producing metals

sangeeta_bhatia_cta

As usual, the TED community has lots of news to share this week. Below, some highlights.

Flip the switch. Sangeeta Bhatia is the senior author on a paper that makes the genome editing power of CRISPR responsive to ultraviolet light. As detailed in the academic journal Angewandte Chemie, the researchers developed a system where gene editing occurs only when UV light is shone on the target cells, allowing researchers greater control over when and where the editing occurs. The technique could help scientists study embryonic development and disease progression with more precision, and Bhatia’s lab is exploring possible medical applications as well. (Watch Sangeeta’s TED Talk)

Mind the gap. In 2012, Anne-Marie Slaughter set the Internet on fire with her Atlantic article, “Why Women Still Can’t Have It All,” but after the intense debate around the article died down, Slaughter continued to search for an understanding of what true gender equality means. The result is her book Unfinished Business, released on August 9. The book is not only a more nuanced look at the issues and questions that prompted the article, but also a significant evolution of the ideas she expressed four years ago. (Watch Anne-Marie’s TED Talk)

The political needs of emerging technology. While it seems the stuff of science fiction, Anand Giridharadas tackles a possibility that may well be a monumental challenge in the near future: robots taking jobs. His op-ed in The New York Times centers on the place where the challenge is brewing, Silicon Valley, and explores the disrupting power of emerging technology through the eyes of local legend and venture capitalist Vinod Khosla. In the eyes of Khosla, the displacement caused by robots won’t just require simple adjustments, but a “massive economic redistribution via something like a guaranteed minimum income” and a reinvention of capitalism itself. (Watch Anand’s TED Talk)

Education revolution in Brooklyn. Educator Nadia Lopez has worked tirelessly to right the wrongs of a failing education system and support her students who, as residents of deeply troubled communities in Brooklyn, are too frequently overlooked and left behind. Released on August 30, her book The Bridge to Brilliance chronicles the uphill battle it has taken to create, and run, her pioneering inner-city middle school, Mott Hall Bridges Academy. (Tune into PBS on September 13 to hear Nadia Lopez in TED Talks: Education Revolution.)

A landmark for world peace. In The New York Times, psychologist Steven Pinker and Colombian president Juan Manuel Santos co-author an op-ed on the country’s recent peace treaty, announced August 25, between the Colombian government and the Revolutionary Armed Forces of Colombia, or FARC. The peace agreement, the authors argue, marks not just a monumental step towards ending the decades-long conflict that has plagued Colombia, but a significant landmark for peace in the continent and around the world. “Because we have come this far, we know we can go further. Where wars have ended, other forms of bloodshed, such as gang violence, can also be reduced,” the authors write. “Since the Americas have succeeded in moving away from war, we know this could happen even in the world’s most stubbornly violent regions.” (Watch Steven’s TED Talk)

An accidental discovery. Donald Sadoway is among a team of scientists that stumbled upon a new method of producing some metals. Reported in the journal Nature Communications, the discovery came when the researchers were attempting to develop a new battery. Instead, the researchers realized that they were producing the metal antimony through electrolysis. The researchers believe that they could produce metals such as copper and nickel through the same method, but it’s not just a novel method — it nearly eliminates the greenhouse gas emissions of traditional smelting and has the potential to drastically reduce the cost of metal production. (Watch Donald’s TED Talk)

A challenge to looters. TED Prize winner Sarah Parcak appeared on The Diane Rehm Show on August 24 for a panel discussion on “the big business of looted antiquities.” She explained how terrorist organizations are selling artifacts looted from ancient sites to fund their activities, and talked about GlobalXplorer, the citizen science platform she’s building to democratize archaeology. She also sent an unconventional message to collectors tempted by buying artifacts that could be stolen. “People all around the world want to own a piece of history. My challenge to them is: be a part of making history,” she said. “They say history is written by the winners. I think history should be written by everyone.” (Watch Sarah’s latest update on GlobalXplorer, and sign up to get early access.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.


Cory DoctorowThe privacy wars have been a disaster and they’re about to get a LOT worse



In my latest Locus column, The Privacy Wars Are About to Get A Whole Lot Worse, I describe the history of the privacy wars to date, and the way that the fiction of “notice and consent” has provided cover for a reckless, deadly form of viral surveillance capitalism.

As bad as things have been, they’re about to get much, much worse: the burgeoning realm of the “Internet of Things” is filled with surveillance devices that you can’t even pretend to give your consent to.

It’s possible that we can prevent the proliferation of reckless overcollection and retention of data, maybe by the eventual success of a few ambitious class-action lawyers, but that will only happen if we stop the accompanying plague of “binding arbitration,” which takes away your right to seek justice for corporate malfeasance.

You will ‘‘interact’’ with hundreds, then thousands, then tens of thousands of computers every day. The vast majority of these interactions will be glancing, momentary, and with computers that have no way of displaying terms of service, much less presenting you with a button to click to give your ‘‘consent’’ to them. Every TV in the sportsbar where you go for a drink will have cameras and mics and will capture your image and process it through facial-recognition software and capture your speech and pass it back to a server for continuous speech recognition (to check whether you’re giving it a voice command). Every car that drives past you will have cameras that record your likeness and gait, that harvest the unique identifiers of your Bluetooth and other short-range radio devices, and send them to the cloud, where they’ll be merged and aggregated with other data from other sources.

In theory, if notice-and-consent was anything more than a polite fiction, none of this would happen. If notice-and-consent are necessary to make data-collection legal, then without notice-and-consent, the collection is illegal.

But that’s not the realpolitik of this stuff: the reality is that when every car has more sensors than a Google Streetview car, when every TV comes with a camera to let you control it with gestures, when every medical implant collects telemetry that is collected by a ‘‘services’’ business and sold to insurers and pharma companies, the argument will go, ‘‘All this stuff is both good and necessary – you can’t hold back progress!’’

It’s true that we can’t have self-driving cars that don’t look hard at their surroundings all the time, and pay especially close attention to humans to make sure that they’re not killing them. However, there’s nothing intrinsic to self-driving cars that says that the data they gather needs to be retained or further processed. Remember that for many years, the server logs that recorded all your interactions with the web were flushed as a matter of course, because no one could figure out what they were good for, apart from debugging problems when they occurred.

The Privacy Wars Are About to Get A Whole Lot Worse [Locus Magazine]

Google Adsense[Infographic] Download the #AdSenseGuide to creating content that draws crowds

Big events create big opportunities for AdSense publishers. Keep your content relevant when web traffic spikes and you could grow your business. We’ve put together a guide to help you make the most of these moments and draw the crowds to your site.

Download The #AdSenseGuide to creating content that draws crowds now! We have the PDF available for you in 4 languages:
Be sure to follow us on Twitter and G+ for more helpful tips and to share your thoughts on this infographic.

New to AdSense? Sign up now and turn your passion into profit.



Posted by Jay Castro, from the AdSense team



CryptogramSpy Equipment from Cobham

The Intercept has published a 120-page catalog of spy gear from the British defense company Cobham. This is equipment available to police forces. The catalog was leaked by someone inside the Florida Department of Law Enforcement.

Worse Than FailureCoded Smorgasbord: What You Don't See

Many times, when we get a submission of bad code, we’re left wondering, “what else is going on here?” Sometimes, you think, “if only I knew more, this would make sense.” More often, you think, “if I knew more, I’d be depressed,” because the code is so clearly bad.

For example, Devan inherited a report, built using SQL Server Reporting Services’ Report Builder tool. Now, this tool is intended not so much for developers as for the “if you can use Excel, you can make a report!” crowd. It uses Excel-style functions for everything, which means if you want to have branching conditional logic, you need to use the IIF function.

Just don’t use it like this.

IIf(Location_Id='01111','Location 1',
  ......
IIf(Location_Id='99999','Location Help Me','This was the Easy one')))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) )))))))))) ))))))))

Now, there’s a lot missing here, but we can infer what was skipped simply by counting the parentheses. Even LISP would be embarrassed by that many closing parens.
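The usual cure for a pyramid of nested IIf calls is a lookup table (SSRS expressions even have a Switch function for flat condition/value pairs). A minimal sketch of the same mapping as a dictionary — the two location codes here are just the ones visible in the fragment; the real report presumably maps many more:

```python
# Hypothetical codes taken from the visible fragment, for illustration only.
LOCATION_NAMES = {
    "01111": "Location 1",
    "99999": "Location Help Me",
}

def location_name(location_id: str) -> str:
    # One dictionary lookup replaces the entire nested-IIf pyramid,
    # with the original "else" branch as the default.
    return LOCATION_NAMES.get(location_id, "This was the Easy one")
```

Adding a new location becomes a one-line change to the table instead of another IIf and another closing parenthesis to lose track of.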

In a different example, Phillip E. was reviewing the day’s check-ins, when he noticed this particular bit of VB.NET code:

Public Class UserSession
  Inherits UserSession(Of UserSession)
...
End Class

Honestly, I’m not 100% sure if this is a problem with the code, or the fact that the language lets you abuse names so badly. Then again, the Of UserSession is a type parameter, so I think we can still blame the developer who wrote this.

Angela A wants us to know that misleadingly named constants to replace magic numbers will always be a thing.

const float ZERO = 0.01f;

Taking all of these examples together, Jack S provides a perfect summary of how I feel about this code, this fragment coming from Classic ASP.

function die(s)