Planet Russell


Planet Debian: Steinar H. Gunderson: Exploring minimax polynomials with Sollya

Following Fabian Giesen's advice, I took a look at Sollya—I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is really written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + α - 1) / α)^γ for a given (non-integer) α and γ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which look at the derivatives at a certain point and create a polynomial from them. Perhaps the most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.
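
To see why, plug x = 1 into that series: the first four terms give 4 × (1 - 1/3 + 1/5 - 1/7) ≈ 2.895, still well short of pi ≈ 3.1416, because x = 1 is as far from the expansion point as we ever evaluate it. A minimax polynomial of the same degree would instead spread its error evenly over the whole interval.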

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust towards actually terminating with a working result. E.g., Maple just fails on optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except in the case of malformed problems (e.g. asking to optimize for relative error of a function which is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e. two polynomials divided by each other (which can often increase accuracy dramatically—although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g. if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the type f(x) = a + b sin(x) + cx + d sin(x)² + ex².
  • Maple supports arbitrary weighting of the error (e.g. if you care more about errors at the endpoints)—I find this super-useful, especially if you are dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me, it's more of an augmentation than a replacement for Maple for this use.
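
For concreteness, here is roughly what a Sollya session for the sRGB-style curve above might look like. This is only a sketch based on my reading of the Sollya documentation, with the degree, the coefficient precision (24 bits, i.e. single-precision mantissas) and the interval picked purely for illustration:

f = ((x + 0.055) / 1.055)^2.4;
p = fpminimax(f, 4, [|24...|], [0;1]);
p;
dirtyinfnorm(p - f, [0;1]);

The last line prints an (approximate) maximum absolute error of the fit over the interval; swap in whatever degree, precision and range your use case actually needs.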

Planet Linux Australia: Donna Benjamin: Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes I feel that free software does not love me.
 
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
 
Tis a mystery. Or is it?
 
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all. “Patches welcome,” they say, “go fix it yourself.”
 
The web community, on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives is better.
 
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
 
Let’s say a customer walks into a hardware store and says they want a drill.  Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job they wonder. Should I get a cordless one? Will I really need diamond tipped drill bits? 
 
There's a technique called the 5 Whys that's useful for getting under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
 
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
 
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
 
User stories are one way we can explore the “Why” behind the software we build. Check out my talk from the Developers Developers miniconf at linux.conf.au on Monday, “Turning stories into software.”
 

 



TED: Talks from TEDNYC Idea Search 2018

Cloe Shasha and Kelly Stoetzel hosted the fast-paced TED Ideas Search 2018 program on January 24, 2018 at TED HQ in New York, NY. (Photo: Ryan Lash / TED)

TED is always looking for new voices with fresh ideas — and earlier this winter, we opened a challenge to the world: make a one-minute audition video that makes the case for your TED Talk. More than 1,200 people applied to be a part of the Ideas Search program this year, and on Wednesday night at our New York headquarters, 13 audition finalists shared their ideas in a fast-paced program. Here are voices you may not have heard before — but that you’ll want to hear more from soon.

Ruth Morgan shares her work preventing the misinterpretation of forensic evidence. (Photo: Ryan Lash / TED)

Forensic evidence isn’t as clear-cut as you think. For years, forensic science research has focused on making it easier and more accurate to figure out what a trace — such as DNA or a jacket fiber — is and who it came from, but that doesn’t help us interpret what the evidence means. “What we need to know if we find your DNA on a weapon or gunshot residue on you is how did it get there and when did it get there,” says forensic scientist Ruth Morgan. These gaps in understanding have real consequences: forensic evidence is often misinterpreted and used to convict people of crimes they didn’t commit. Morgan and her team are committed to finding ways to answer the why and how, such as determining whether it’s possible to get trace evidence on you during normal daily activities (it is) and how trace DNA can be transferred. “We need to dramatically reduce the chance of forensic evidence being misinterpreted,” she says. “We need that to happen so that you never have to be that innocent person in the dock.”

The intersection of our senses. An experienced composer and filmmaker, Philip Clemo has been on a quest to determine if people can experience imagery with the same depth that they experience music. Research has shown that sound can impact how we perceive visuals, but can visuals have a similarly profound impact? In his live performances, Clemo and his band use abstract imagery in addition to abstract music to create a visual experience for the audience. He hopes that people can have these same experiences in their everyday lives by quieting their minds to fully experience the “visual music” of our surrounding environment — and improve our connection to our world.

Reading the Bible … without omission. At a time when he was a recovering fundamentalist and longtime atheist, David Ellis Dickerson received a job offer as a quiz question writer and Bible fact-checker for the game show The American Bible Challenge. Among his responsibilities: coming up with questions that conveniently ignored the sections of the Bible that mention slavery, concubines and incest. The omission expectations he was faced with made him realize that evangelicals read the Bible in the same way they watch reality television: “with a willing, even loving, suspension of disbelief.” Now, he invites devout Christians to read the Bible in its most unedited form, to recognize its internal contradictions and to grapple with its imperfections.

Danielle Bilot suggests three simple but productive actions we can take to help bees: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food. (Photo: Ryan Lash / TED)

To bee or not to bee? The most famous bee species of recent memory has overwhelmingly been the honey bee. For years, their concerning disappearance has made international news and been the center of research. Environmental designer Danielle Bilot believes that the honey bee should share the spotlight with the 3,600 other species that pollinate much of the food we eat every day in the US, such as blueberries, tomatoes and eggplants. Honey bees, she says, aren’t even native to North America (they were originally brought over from Europe) and therefore have a tougher time successfully pollinating these and many other indigenous crops. Regardless of species, human activity is harming them, and Bilot suggests three simple but productive actions we can take to make their lives easier and revive their populations: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food.

What if technology protected your hearing? Computers that talk to us and voice-enabled technology like Siri, Alexa and Google Home are changing the importance of hearing, says ear surgeon Matthew Bromwich. And with more than 500 million people suffering from disabling hearing loss globally, the importance of democratizing hearing health care is more relevant than ever. “How do we use our technology to improve the clarity of our communication?” Bromwich asks. He and his team have created a hearing testing technology called “SHOEBOX,” which gives hearing health care access to more than 70,000 people in 32 countries. He proposes using technology to help prevent this disability, amplify sound clarity, and paint a new future for hearing.

Welcome to the 2025 Technology Design Awards, with your host, Tom Wujec. Rocking a glittery dinner jacket, design guru Tom Wujec presents a science-fiction-y awards show from eight years into the future, honoring the designs that made the biggest impact in technology, consumer goods and transportation — capped off by a grand impact award chosen live onstage by an AI. While the designs seem fictional — a hacked auto, a self-rising house, a cutting-edge prosthetic — the kicker to Tom’s future award show is that everything he shows is in development right now.

In collaboration with the MIT Media Lab, Nilay Kulkarni used his skills as a self-taught programmer to build a simple tech solution to prevent human stampedes during the Kumbh Mela, one of the world’s largest crowd gatherings, in India. (Photo: Ryan Lash / TED)

A 15-year-old solves a deadly problem with a low-cost device. Every four years, more than 30 million Hindus gather for the Kumbh Mela, the world’s largest religious gathering, in order to wash themselves of their sins. Once every 12 years, it takes place in Nashik, a city in western India that ordinarily contains 1.5 million residents. With such a massive crowd in a small space, stampedes inevitably happen, and in 2003, 39 people were killed during the festival in Nashik. In 2014, then-15-year-old Nilay Kulkarni decided he wanted to find a solution. He recalls: “It seemed like a mission impossible, a dream too big.” After much trial and error, he and collaborators at the MIT Media Lab came up with a cheap, portable, effective stampede stopper called “Ashioto” (meaning footstep in Japanese): a pressure-sensor-equipped mat which counts the number of people stepping on it and sends the data over the internet to authorities so they can monitor the flow of people in real time. Five mats were deployed at the 2015 Nashik Kumbh Mela, and thanks to their use and other innovations, no stampedes occurred for the first time ever there. Much of the code is now available to the public to use for free, and Kulkarni is trying to improve the device. His dream: for Ashiotos to be used at all large gatherings, like the other Kumbh Melas, the Hajj and even at major concerts and sports events.

A new way to think about health care. Though doctors and nurses dominate people’s minds when it comes to health care, Tamekia MizLadi Smith is more interested in the roles of non-clinical staff in creating community health spaces. As an educator and spoken word poet, Smith uses the acronym “G.R.A.C.E.D.” to empower non-clinical staff to be accountable for data collection and to provide compassionate care to patients. Under the belief that compassionate care doesn’t begin and end with just clinicians, Smith asks that desk specialists, parking attendants and other non-clinical staff be trained and treated as integral parts of well-functioning health care systems.

Mad? Try some humor. “The world seems humor-impaired,” says comedian Judy Carter. “It just seems like everyone is going around angry: going, ‘Politics is making me so mad; my boss is just making me so mad.'” In a sharp, zippy talk, Carter makes the case that no one can actually make you mad — you always have a choice of how to respond — and that anger actually might be the wrong way to react. Instead, she suggests trying humor. “Comedy rips negativity to shreds,” she says.

Want a happier, healthier life? Look to your friends. In the relationship pantheon, friends tend to place third in importance (after spouses and blood relatives). But we should make them a priority in our lives, argues science writer Lydia Denworth. “The science of friendship suggests we should invest in relationships that deliver strong bonds. We should value people who are positive, stable, cooperative forces,” Denworth says. While friendship was long ignored by academics, researchers are now studying it and finding it provides us with strong physical and emotional benefits. “In this time when we’re struggling with an epidemic of loneliness and bitter political divisions, [friendships] remind us what joy and connection look like and why they matter,” Denworth says.

An accessible musical wonderland. With quick but impressive musical interludes, Matan Berkowitz introduces the “Airstrument” — a new type of instrument that allows anyone to create music in a matter of minutes by translating movement into sound. This technology is part of a series of devices Berkowitz has developed to enable musicians with disabilities (and anyone who wants to make music) to express themselves in non-traditional ways. “Creation with intention,” he raps, accompanied by a song created on the Airstrument. “Now it’s up to us to wake up and act.”

Divya Chander shares her work studying what human brains look like when they lose and regain consciousness. (Photo: Ryan Lash / TED)

Where does consciousness go when you’re under anesthesia? Divya Chander is an anesthesiologist, delivering specialized doses of drugs that put people in an altered state before surgery. She often wonders: What do people’s brains do while they’re under? What do they perceive? The question has led her into a deeper exploration of perception, awareness and consciousness itself. In a thoughtful talk, she suggests that we have a lot to learn about consciousness … that we could learn by studying unconsciousness.

The art of creation without preparation. To close out the night, vocalist and improviser Lex Empress creates coherent lyrics from words written by audience members on paper airplanes thrown onto the stage. Empress is accompanied by virtuoso pianist and producer Gilian Baracs, who also improvises everything he plays. Their music reminds us to enjoy the great improvisation that is life.

Cryptogram: Friday Squid Blogging: Squid that Mate, Die, and Then Sink

The mating and death characteristics of some squid are fascinating.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: Registered at SSA.GOV? Good for You, But Keep Your Guard Up

KrebsOnSecurity has long warned readers to plant your own flag at the my Social Security online portal of the U.S. Social Security Administration (SSA) — even if you are not yet drawing benefits from the agency — because identity thieves have been registering accounts in people’s names and siphoning retirement and/or disability funds. This is the story of a Midwest couple that took all the right precautions and still got hit by ID thieves who impersonated them to the SSA directly over the phone.

In mid-December 2017 this author heard from Ed Eckenstein, a longtime reader in Oklahoma whose wife Ruth had just received a snail mail letter from the SSA about successfully applying to withdraw benefits. The letter confirmed she’d requested a one-time transfer of more than $11,000 from her SSA account. The couple said they were perplexed because both previously had taken my advice and registered accounts with MySocialSecurity, even though Ruth had not yet chosen to start receiving SSA benefits.

The fraudulent one-time payment that scammers tried to siphon from Ruth Eckenstein’s Social Security account.

Sure enough, when Ruth logged into her MySocialSecurity account online, there was a pending $11,665 withdrawal destined to be deposited into a Green Dot prepaid debit card account (funds deposited onto a Green Dot card can be spent like cash at any store that accepts credit or debit cards). The $11,665 amount was available for a one-time transfer because it was intended to retroactively cover monthly retirement payments back to her 65th birthday.

The letter the Eckensteins received from the SSA indicated that the benefits had been requested over the phone, meaning the crook(s) had called the SSA pretending to be Ruth and supplied them with enough information about her to enroll her to begin receiving benefits. Ed said he and his wife immediately called the SSA to notify them of fraudulent enrollment and pending withdrawal, and they were instructed to appear in person at an SSA office in Oklahoma City.

The SSA ultimately put a hold on the fraudulent $11,665 transfer, but Ed said it took more than four hours at the SSA office to sort it all out. Mr. Eckenstein said the agency also informed them that the thieves had signed his wife up for disability payments. In addition, her profile at the SSA had been changed to include a phone number in the 786 area code (Miami, Fla.).

“They didn’t change the physical address perhaps thinking that would trigger a letter to be sent to us,” Ed explained.

Thankfully, the SSA sent a letter anyway. Ed said many additional hours spent researching the matter with SSA personnel revealed that in order to open the claim on Ruth’s retirement benefits, the thieves had to supply the SSA with a short list of static identifiers about her, including her birthday, place of birth, mother’s maiden name, current address and phone number.

Unfortunately, most (if not all) of this data is available on a broad swath of the American populace for free online (think Zillow, Ancestry.com, Facebook, etc.) or else for sale in the cybercrime underground for about the cost of a latte at Starbucks.

The Eckensteins thought the matter had been resolved until Jan. 14, when Ruth received a 1099 form from the SSA indicating they’d reported to the IRS that she had in fact received an $11,665 payment.

“We’ve emailed our tax guy for guidance on how to deal with this on our taxes,” Mr. Eckenstein wrote in an email to KrebsOnSecurity. “My wife logged into SSA portal and there was a note indicating that corrected/updated 1099s would be available at the end of the month. She’s not sure whether that message was specific to her or whether everyone’s seeing that.”

NOT SMALL IF IT HAPPENS TO YOU

Identity thieves have been exploiting authentication weaknesses to divert retirement account funds almost since the SSA launched its portal eight years ago. But the crime really picked up in 2013, around the same time KrebsOnSecurity first began warning readers to register their own accounts at the MySSA portal. That uptick coincided with a move by the U.S. Treasury to start requiring that all beneficiaries receive payments through direct deposit (though the SSA says paper checks are still available to some beneficiaries under limited circumstances).

More than 34 million Americans now conduct business with the Social Security Administration (SSA) online. A story this week from Reuters says the SSA doesn’t track data on the prevalence of identity theft. Nevertheless, the agency assured the news outlet that its anti-fraud efforts have made the problem “very rare.”

But Reuters notes that a 2015 investigation by the SSA’s Office of Inspector General identified more than 30,000 suspicious MySSA registrations, and more than 58,000 allegations of fraud related to MySSA accounts from February 2013 to February 2016.

“Those figures are small in the context of overall MySSA activity – but it will not seem small if it happens to you,” writes Mark Miller for Reuters.

The SSA has not yet responded to a request for comment.

Ed and Ruth’s experience notwithstanding, it’s still a good idea to set up a MySSA account — particularly if you or your spouse will be eligible to withdraw benefits soon. The agency has been trying to beef up online authentication for citizens logging into its MySSA portal. Last summer the SSA began requiring all users to enter a username and password in addition to a one-time security code sent to their email or phone, although as previously reported here that authentication process could be far more robust.

The Reuters story reminds readers to periodically use the MySSA portal to make sure that your personal information – such as date of birth and mailing address – is correct. “For current beneficiaries, if you notice that a monthly payment has not arrived, you should notify the SSA immediately via the agency’s toll-free line (1-800-772-1213) or at your local field office,” Miller advised. “In most cases, the SSA will make you whole if the theft is reported quickly.”

Another option is to use the SSA’s “Block Electronic Access” feature, which blocks any automatic telephone or online access to your Social Security record – including by you (although it’s unclear if blocking access this way would have stopped ID thieves who manage to speak with a live SSA representative). To restore electronic access, you’ll need to contact the Social Security Administration and provide proof of your identity.

Planet Debian: Eddy Petrișor: Detecting binary files in the history of a git repository

Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact they can be efficiently stored. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to many practices, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially in closed source projects. I won't go into the reasons, and how legitimate they are; let's say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure all versions of the stored objects are never lost, Linus designed git in such a way that, knowing the exact hash of the tip/head of your git branch, it is guaranteed that the whole history of that branch hasn't changed, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?

Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks, and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'
Then we can use the 'file' utility to list non-text files:
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'
And if there are any such files, then it means the current git commit is one that needs our attention; otherwise, we're fine.
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Of course, we assume here that the work tree is clean.
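
As an aside (a variant of mine, not part of the original recipe): if any tracked paths contain spaces, a null-separated version of the same pipeline is more robust:
(find . -type f -not -path './.git/*' -print0 | xargs -0 file) | egrep -v '(ASCII|Unicode) text'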

Checking all commits in a branch

Since we want to make this an efficient process and we only care if the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.
Making a new branch for some experiments is also a good idea to avoid losing the history, in case we make some stupid mistakes during our experiment.

Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:
git checkout -b test_bins
Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh << 'EOF'
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
EOF
then (ab)use 'git rebase' to execute that for us for all commits:
git rebase --exec="sh ../check_file_text.sh" -i $startcommit
After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will look in all commits since then.
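
If you would rather check every commit on the branch, including the root commit, git rebase also accepts --root instead of a start commit (a small addition of mine, not part of the original recipe):
git rebase --exec="sh ../check_file_text.sh" -i --root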

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'

Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).
Enjoy!

Cryptogram: The Effects of the Spectre and Meltdown Vulnerabilities

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors' manufacturers, and patched -- at least to the extent possible.

This news isn't really any different from the usual endless stream of security vulnerabilities and patches, but it's also a harbinger of the sorts of security problems we're going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn't possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren't anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They're the future of security -- and it doesn't look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications -- or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn't supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you're visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they're going to receive and execute instructions based on that. If the guess turns out to be correct, it's a performance win. If it's wrong, the microprocessors throw away what they've done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it's a flaw in the very concept of speculative execution. There's no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can't make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user's perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we're only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel's Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they're guarded.

Third, these vulnerabilities will affect computers' functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn't work. Sometimes it's because computers are in cheap products that don't have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets -- groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it's because a computer chip's functionality is so core to a computer's design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world's computer hardware is the new normal. It's going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

Planet Debian: Dirk Eddelbuettel: prrd 0.0.2: Many improvements

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)

  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks if no reverse dependencies are found and stops (#6).

  • The enqueue() function checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than Failure: Error'd: #TITLE_OF_ERRORD2#

Joe P. wrote, "When I tried to buy a coffee at the airport with my contactless VISA card, it apparently thought my name was '%s'."

 

"Instead of outsourcing to Eastern Europe or the Asian subcontinent, companies should be hiring from Malta. Just look at these people! They speak fluent base64!" writes Michael J.

 

Raffael wrote, "While I can proudly say that I am working on bugs, the Salesforce Chatter site should probably consider doing the same."

 

"Wow! Thanks! Happy Null Year to you too!" Alexander K. writes.

 

Joel B. wrote, "Yesterday was the first time I've ever seen a phone with a 'License Violation'. Phone still works, so I guess there's that."

 

"They missed me so much, they decided to give me...nothing," writes Timothy.

 


Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talk

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker

Closing

  • LCA 2019 will be in Christchurch, New Zealand – http://lca2019.linux.org.au
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions

 

 


Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web – Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: YouTube rebuffers down 15-18%, Google search latency down 3.6 – 8%
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardized QUIC
    • Turned off at the start of 2016 due to a security problem
    • Doubled in Sept 2016 when turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • QUIC – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered; QUIC is being updated to take that into account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round trip if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolvable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding (UDP NAT-mapping changes often, maybe every few seconds)
  • UDP is not a transport; you put something on top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol in middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improves 8%, mobile 3.6%
    • Low latency: Desktop 1%, Mobile none
    • Highest latency (90-99th percentile of users): Desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC

Remote Work: My first decade working from the far end of the earth – John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at the University of Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though he was on the other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out

Challenges

  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Needs to work to hook in with the Hobart tech community. Goes to meetups. Plays D&D with friends.
    • Considering going to a coworking space. Sometimes goes to cafes etc
  • Setting Boundaries
    • Hard to Leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes the fixed provider goes down; need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Aware of particular cultural differences
    • Multiple chances of miscommunication

Opportunities

  • Access to companies and jobs and technologies that he couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Jobs BoF
  • stackoverflow.com/jobs can filter
  • weworkremotely.com

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email

 


Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs – Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form for tech writers
    • Content as code
    • Only works when compiled
    • Favoured by tech writers, translated. Up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to a community that knows what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff that affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (arch, problem space, discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (road maps, project plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – unique ID, what caused it, what mitigation, optional: link to report

 


Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmor in LSM (prevent mounting, writing to /proc etc) (or SELinux)
    • Seccomp (syscall filters, allowed or denied) – prevents ~150 syscalls which are uncommon or dangerous.
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “allowPrivilegeEscalation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like all the Lego pieces are already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in k8s shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though (see the example sketch after this list)
  • https://contained.af/
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but you are still constrained by the container’s default seccomp filters
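
As a concrete illustration of the “sandbox even further” point (my own sketch, not from the talk; my-profile.json is a placeholder seccomp profile, not something the talk refers to):

docker run --rm -it \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=my-profile.json \
  alpine sh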

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The go compiler writes the assembly for all the syscalls, you can hijack and grab the list of these, create a seccomp filter
      • Not quite that simple
      • plugins
      • exec external stuff
      • can directly exec a syscall in go code, the name passed in via arguments at runtime
  • metaparticle.io
    • Library for cloud-native applications

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs of what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOS
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • making sure everything else is “very dumb” about its surroundings

 



Planet Debian: Daniel Pocock: Do the little things matter?

In a widely shared video, US Admiral McRaven, addressing the University of Texas at Austin's Class of 2014, chooses to deliver a simple message: make your bed every day.

A highlight of this talk is the quote: "The little things in life matter. If you can't do the little things right, you'll never be able to do the big things right."

In the world of free software engineering, we have lofty goals: the FSF's High Priority Project list identifies goals like private real-time communication, security and diversity in our communities. Those deploying free software in industry have equally high ambitions, ranging from self-driving cars to beating the stock market.

Yet over and over again, we can see people taking little shortcuts and compromises. If Admiral McRaven is right, our failure to take care of little decisions, like how we choose an email provider, may be the reason those big projects, like privacy or diversity, appear to be no more than a pie-in-the-sky.

The IT industry has relatively few regulations compared to other fields such as aviation, medicine or even hospitality. Consider a doctor who re-uses a syringe - how many laws would he be breaking? Would he be violating conditions of his insurance? Yet if an IT worker overlooks the contempt for the privacy of Gmail users and their correspondents that is dripping off the pages of the so-called "privacy" policy, nobody questions them. Many people will applaud their IT staff for choices or recommendations like this, because, of course, "it works". A used syringe "just works" too, but who would want one of those?

Google's CEO Eric Schmidt tells us that if you don't have anything to hide, you don't need to worry.

Compare this to the advice of Sun Tzu, author of the indispensable book on strategy, The Art of War. The very first chapter is dedicated to estimating, calculating and planning: what we might call data science today. Tzu unambiguously advises to deceive your opponent, not to let him know the truth about your strengths and weaknesses.

In the third chapter, Offense, Tzu starts out that The best policy is to take a state intact ... to subdue the enemy without fighting is the supreme excellence. Surely this is only possible in theory and not in the real world? Yet when I speak to a group of people new to free software and they tell me "everybody uses Windows in our country", Tzu's words take on meaning he never could have imagined 2,500 years ago.

In many tech startups and even some teams in larger organizations, the oft-repeated mantra is "take the shortcut". But the shortcuts and the things you get without paying anything, without satisfying the conditions of genuinely free software, compromises such as Gmail, frequently involve giving up a little bit too much information about yourself: otherwise, why would they leave the bait out for you? As Mr Tzu puts it, you have just been subdued without fighting.

In one community that has taken a prominent role in addressing the challenges of diversity, one of the leaders recently expressed serious concern that their efforts had been subdued in another way: Gmail's Promotions Tab. Essential emails dispatched to people who had committed to their program were routinely being shunted into the Promotions Tab along with all that marketing nonsense that most people never asked for and the recipients never saw them.

I pointed out that many people have concerns about Gmail and that I had been having thoughts about simply blocking it at my mail server. It is quite easy to configure a mail server to send an official bounce message; in Postfix, for example, it is just one line in the /etc/postfix/access file:

gmail.com   REJECT  The person you are trying to contact hasn't accepted Gmail's privacy policy.  Please try sending the email from a regular email provider.

Some communities could go further, refusing to accept Gmail addresses on mailing lists or registration forms: the lesser evil compared to a miserable fate in Promotions Tab limbo.

I was quite astounded at the response: several people complained that this was too much for participants to comply with (the vast majority register with a Gmail address) or that it was even showing all Gmail users contempt (can't they smell the contempt for users in the aforementioned Gmail "privacy" policy?). Nobody seemed to think participants could cope with that, and if we hope these people are going to be the future of diversity, that is really, really scary.

Personally, I have far higher hopes for them: just as Admiral McRaven's Navy SEALS are conditioned to make their bed every day at boot camp, people entering IT, especially those from under-represented groups, need to take pride in small victories for privacy and security, like saying "No" each and every time they have the choice to give up some privacy and get something "free", before they will ever hope to accomplish big projects and change the world.

If they don't learn these lessons at the outset, like the survival and success habits drilled into soldiers during boot-camp, will they ever? If programs just concentrate on some "job skills" and gloss over the questions of privacy and survival in the information age, how can they ever deliver the power shift that is necessary for diversity to mean something?

Come and share your thoughts on the FSFE discussion list (join, thread and reply).

Sociological ImagesChildren Learn Rules for Romance in Preschool

Originally Posted at TSP Discoveries

Photo by oddharmonic, Flickr CC

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn the rules and beliefs associated with romantic relationships and sexuality much earlier.

Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them.

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite gender children was romantically-motivated and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls played the role of “dad” or boys played the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at https://thesocietypages.org/socimages)

CryptogramWhatsApp Vulnerability

A new vulnerability in WhatsApp has been discovered:

...the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here's the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it's theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here's the research paper.

Worse Than FailureThe More Things Change: Fortran Edition

Technology improves over time. Storage capacity increases. Spinning platters are replaced with memory chips. CPUs and memory get faster. Moore's Law. Compilers and languages get better. More language features become available. But do these changes actually improve things? Fifty years ago, meteorologists used the best mainframes of the time, and got the weather wrong more than they got it right. Today, they have a global network of satellites and supercomputers, yet they're wrong more than they're right (we just had a snowstorm in NJ that was forecast as 2-4", but got 16" before drifting).

As with most other languages, FORTRAN also added structure, better flow control and so forth over the years. The problem with a language undergoing such massive improvement is that old coding styles occasionally live on for a very long time.

Imagine a programmer who learned to code using FORTRAN IV (variable names up to 6 characters, integers implicitly start with "I" through "N" and reals start with any other letter - unless explicitly declared, flow control via GOTO, etc) writing a program in 2000 (using a then-current compiler but with FORTRAN IV style). Now imagine some PhD candidate coming along in 2017 to maintain and enhance this FORTRAN IV-style code with the latest FORTRAN compiler.

A.B. was working at a university with just such a scientific software project as part of earning a PhD. These are just a couple of the things that caused a few head-desk moments.

Include statements. The first variant only allows code to be included. The second allows preprocessor directives (like #define).

    INCLUDE  'path/file'

    #include 'path/file'

Variables. Since the only data types/structures originally available were character, logical, integer, real*4, real*8 and arrays, you had to shoehorn your data into the closest fit. This led to declarations sections that included hundreds of basic declarations. This hasn't improved today as people still use one data type to hold something that really should be implemented as something else. Also, while the compilers of today support encapsulation/modules, back then, everything was pretty much global.

Data structures. The only thing supported back then was multidimensional arrays. If you needed something like a map, you needed to roll your own. This looks odd to someone who cut their teeth on a version of the language where these features are built-in.

Inlining. FORTRAN subroutines support local subroutines and functions which are inlined, which is useful to provide implied visibility scoping. Prudent use allows you to DRY your code. This feature isn't even used, so the same code is duplicated over and over again inline. Any of you folks see that pattern in your new-fangled modern systems?

Joel Spolsky commented about the value of keeping old code around. While there is much truth in his words, the main problem is that the original programmers invariably move on, and as he points out, it is much harder to read (someone else's) code than to write your own; maintenance of ancient code is a real world issue. When code lives across too many language version improvements, it becomes inherently more difficult to maintain as its original form becomes more obsolete.

To give you an idea, take a look at just the declaration section of one module that A.B. inherited (except for a 2 line comment at the top of the file, there were no comments). FWIW, when I did FORTRAN at the start of my career, I used to document the meaning of every. single. abbreviated. variable. name.

      subroutine thesub(xfac,casign,
     &     update,xret,typret,
     &     meop1,meop2,meop12,meoptp,
     &     traop1, traop2, tra12,
     &     iblk1,iblk2,iblk12,iblktp,
     &     idoff1,idoff2,idof12,
     &     cntinf,reoinf,
     &     strinf,mapinf,orbinf)
      implicit none
      include 'routes.h'
      include 'contr_times.h'
      include 'opdim.h'
      include 'stdunit.h'
      include 'ioparam.h'
      include 'multd2h.h'
      include 'def_operator.h'
      include 'def_me_list.h'
      include 'def_orbinf.h'
      include 'def_graph.h'
      include 'def_strinf.h'
      include 'def_filinf.h'
      include 'def_strmapinf.h'
      include 'def_reorder_info.h'
      include 'def_contraction_info.h'
      include 'ifc_memman.h'
      include 'ifc_operators.h'
      include 'hpvxseq.h'

      integer, parameter ::
     &     ntest = 000
      logical, intent(in) ::
     &     update
      real(8), intent(in) ::
     &     xfac, casign
      real(8), intent(inout), target ::
     &     xret(1)
      type(coninf), target ::
     &     cntinf
      integer, intent(in) ::
     &     typret,
     &     iblk1, iblk2, iblk12, iblktp,
     &     idoff1,idoff2,idof12
      logical, intent(in) ::
     &     traop1, traop2, tra12
      type(me_list), intent(in) ::
     &     meop1, meop2, meop12, meoptp
      type(strinf), intent(in) ::
     &     strinf
      type(mapinf), intent(inout) ::
     &     mapinf
      type(orbinf), intent(in) ::
     &     orbinf
      type(reoinf), intent(in), target ::
     &     reoinf

      logical ::
     &     bufop1, bufop2, buf12, 
     &     first1, first2, first3, first4, first5,
     &     msfix1, msfix2, msfx12, msfxtp,
     &     reject, fixsc1, fixsc2, fxsc12,
     &     reo12, non1, useher,
     &     traop1, traop2
      integer ::
     &     mstop1,mstop2,mst12,
     &     igmtp1,igmtp2,igmt12,
     &     nc_op1, na_op1, nc_op2, na_op2,
     &     nc_ex1, na_ex1, nc_ex2, na_ex2, 
     &     ncop12, naop12,
     &     nc12tp, na12tp,
     &     nc_cnt, na_cnt, idxres,
     &     nsym, isym, ifree, lenscr, lenblk, lenbuf,
     &     buftp1, buftp1, bftp12,
     &     idxst1, idxst2, idxt12,
     &     ioff1, ioff2, ioff12,
     &     idxop1, idxop2, idop12,
     &     lenop1, lenop2, len12,
     &     idxm12, ig12ls,
     &     mscmxa, mscmxc, msc_ac, msc_a, msc_c,
     &     msex1a, msex1c, msex2a, msex2c,
     &     igmcac, igamca, igamcc,
     &     igx1a, igx1c, igx2a, igx2c,
     &     idxms, idxdis, lenmap, lbuf12, lb12tp,
     &     idxd12, idx, ii, maxidx
      integer ::
     &     ncblk1, nablk1, ncbka1, ncbkx1, 
     &     ncblk2, nablk2, ncbka2, ncbkx2, 
     &     ncbk12, nabk12, ncb12t, nab12t, 
     &     ncblkc, nablkc,
     &     ncbk12, nabk12,
     &     ncro12, naro12,
     &     iblkof
      type(filinf), pointer ::
     &     ffop1,ffop2,ff12
      type(operator), pointer ::
     &     op1, op2, op1op2, op12tp
      integer, pointer ::
     &     cinf1c(:,:),cinf1a(:,:),
     &     cinf2c(:,:),cinf2a(:,:),
     &     cif12c(:,:),
     &     cif12a(:,:),
     &     cf12tc(:,:),
     &     cf12ta(:,:),
     &     cfx1c(:,:),cfx1a(:,:),
     &     cfx2c(:,:),cfx2a(:,:),
     &     cfcntc(:,:),cfcnta(:,:),
     &     inf1c(:),
     &     inf1a(:),
     &     inf2c(:),
     &     inf2a(:),
     &     inf12c(:),
     &     inf12a(:),
     &     dmap1c(:),dmap1a(:),
     &     dmap2c(:),dmap2a(:),
     &     dm12tc(:),dm12ta(:)

      real(8) ::
     &     xnrm, facscl, fcscl0, facab, xretls
      real(8) ::
     &     cpu, sys, cpu0, sys0, cpu00, sys00
      real(8), pointer ::
     &     xop1(:), xop2(:), xop12(:), xscr(:)
      real(8), pointer ::
     &     xbf1(:), xbf2(:), xbf12(:), xbf12(:), x12blk(:)

      integer ::
     &     msbnd(2,3), igabnd(2,3),
     &     ms12ia(3), ms12ic(3), ig12ia(3), ig12ic(3),
     &     ig12rw(3)

      integer, pointer ::
     &     gm1dc(:), gm1da(:),
     &     gm2dc(:), gm2da(:),
     &     gmx1dc(:), gmx1da(:),
     &     gmx2dc(:), gmx2da(:),
     &     gmcdsc (:), gmcdsa (:),
     &     gmidsc (:), gmidsa (:),
     &     ms1dsc(:), ms1dsa(:),
     &     ms2dsc(:), ms2dsa(:),
     &     msx1dc(:), msx1da(:),
     &     msx2dc(:), msx2da(:),
     &     mscdsc (:), mscdsa (:),
     &     msidsc (:), msidsa (:),
     &     idm1ds(:), idxm1a(:),
     &     idm2ds(:), idxm2a(:),
     &     idx1ds(:), ixms1a(:),
     &     idx2ds(:), ixms2d(:),
     &     idxdc (:), idxdsa (:),
     &     idxmdc (:),idxmda (:),
     &     lstrx1(:),lstrx2(:),lstcnt(:),
     &     lstr1(:), lstr2(:), lst12t(:)

      integer, pointer ::
     &     mex12a(:), m12c(:),
     &     mex1ca(:), mx1cc(:),
     &     mex2ca(:), mx2cc(:)

      integer, pointer ::
     &     ndis1(:,:), dgms1(:,:,:), gms1(:,:),
     &     lngms1(:,:),
     &     ndis2(:,:), dgms2(:,:,:), gms2(:,:),
     &     lngms2(:,:),
     &     nds12t(:,:), dgms12(:,:,:),
     &     gms12(:,:),
     &     lgms12(:,:),
     &     lg12tp(:,:), lm12tp(:,:,:)

      integer, pointer ::
     &     cir12c(:,:), cir12a(:,:),
     &     ci12c(:,:),  ci12a(:,:),
     &     mire1c(:),   mire1a(:),
     &     mire2c(:),   mire2a(:),
     &     mca12(:), didx12(:), dca12(:),
     &     mca1(:),  didx1(:),  dca1(:),
     &     mca2(:),  didx2(:),  dca2(:),
     &     lenstr_array(:,:,:)

c dbg
      integer, pointer ::
     &     dum1c(:), dum1a(:), hpvx1c(:), hpvx1a(:),
     &     dum2c(:), dum2a(:), hpvx2c(:), hpvx2a(:)
      integer ::
     &     msd1(ngastp,2,meop1%op%njoined),
     &     msd2(ngastp,2,meop2%op%njoined),
     &     jdx, totc, tota
c dbg

      type(graph), pointer ::
     &     graphs(:)

      integer, external ::
     &     ielsum, ielprd, imdst2, glnmp, idxlst,
     &     mxdsbk, m2ims4
      logical, external ::
     &     nxtdis, nxtds2, lstcmp,
     &     nndibk, nndids
      real(8), external ::
     &     ddot

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper

Sysadmins

  • Too much to check, too little time
  • What does this message mean again
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against their knowledge
  • Decide on an appropriate solution

Insights

  • Reads text files and command outputs
  • Processes them into information
  • Uses information in rules
  • Rules provide information about the solution

Examples

  • Simple rule – check “localhost” is in /etc/hosts (see the sketch after this list)
  • Rule 2 – chronyd refuses to fix the server’s time since it is out by more than 1000s
    • Checks /var/log/messages for the error message from chrony
  • Insights rolls up all the checks against messages, so the file is only read once
  • Rule 3 – rsyslog dropping messages
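
As an illustration of that flow (read a text file, turn it into information, apply a rule to the information), here is a minimal Python sketch of the first example rule. The function names are hypothetical and this is not the actual Insights rule API:

    # Minimal sketch of the "localhost in /etc/hosts" rule; illustrative only,
    # not the real Insights API.

    def parse_hosts(path="/etc/hosts"):
        """Read the text file once and turn it into structured information."""
        entries = {}
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments and blank lines
                if not line:
                    continue
                fields = line.split()
                ip, names = fields[0], fields[1:]
                entries.setdefault(ip, []).extend(names)
        return entries

    def check_hosts(entries):
        """The rule: 'localhost' should be mapped to a loopback address."""
        names = entries.get("127.0.0.1", []) + entries.get("::1", [])
        if "localhost" not in names:
            return "PROBLEM: 'localhost' is missing from /etc/hosts"
        return None  # no finding

    if __name__ == "__main__":
        finding = check_hosts(parse_hosts())
        print(finding or "OK: localhost is present")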

Website

http://red.ht/demo_rules

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • Impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, so it is slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much of the advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookie containing the required data with the response.
    • In the browser, use the content of that cookie to update the page (see the sketch after the example below)

Example

  • Go to www.example.com
    • Probably cached in Varnish
    • I don’t have a cookie
    • If I log in, that is an uncacheable request; I am changing login state
    • Response includes a Set-Cookie header creating a personalised cookie
  • Advantages
    • No backend requests
    • Page data served is always cached
  • How big can cookies be?
    • RFC 6265 specifies limits, but in reality:
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain
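
The talk is about Magento behind Varnish, but the pattern itself is framework-agnostic. Below is a minimal sketch of the server side in Python/Flask; Flask, the route, the cookie name and the payload fields are all assumptions for illustration, not the plugin’s implementation:

    # Sketch of the "Cookie Cutter" pattern, not the Magento implementation.
    import json
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/customer/login", methods=["POST"])
    def login():
        # An uncacheable request that changes personalisation state.
        username = request.form.get("username", "guest")
        resp = make_response("logged in")
        # Keep the payload small: a few scalar values, no pre-rendered markup.
        payload = json.dumps({"name": username, "cart_items": 0})
        resp.set_cookie("personalisation", payload)  # well under the ~4096 byte limit
        return resp

    @app.route("/")
    def home():
        # Served straight from the full page cache for everyone; JavaScript in
        # the browser reads the "personalisation" cookie and patches the page.
        return "<html><body>static, cacheable page</body></html>"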

Potential issues

  • Request size
    • Keep cookies small
      • Store small values only, no pre-rendered markup, no large data structures
    • Serve static assets via CDN
    • A lot of stuff in the cart can get huge
  • Information leakage
    • Final URLs leaked to users who are not logged in
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of the page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – visibility into the block cache
    • Caching != performance optimisation – Aoe_Profiler

Availability

  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But it has limitations
    • Maybe some of the CookieCutter approach could improve it

Future

  • localStorage instead of cookies


 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 1

Panel: Meltdown, Spectre, and the free-software community Jonathan Corbet, Andrew ‘bunnie’ Huang, Benno Rice, Jess Frazelle, Katie McLaughlin, Kees Cook

  • FreeBSD only heard 11 days beforehand. Would have liked more notice
  • Got people involved from the Kernel Summit in Oct
  • Hosting company only heard once it went official, been busy patching since
  • Likely to be class-action lawsuit for $billions. That might make chip makers more paranoid about documentation and disclosure.
  • Thoughts in embargo
    • People noticed strange patches going in beforehand.
    • Only broke 6 days early, had been going for 6 months
    • “Linus is happy with this, something is terribly wrong”
    • Sad that the 2nd-tier cloud providers didn’t know. Exclusive club and lines as to who got informed were not clear
    • Projects that don’t have explicit relationship with Intel didn’t get informed
  • Thoughts on other vendors
    • This class of bugs could affect anybody; open hardware alone would probably not fix it
    • More open hardware could enable people to review the processors and find these from the design rather than poking around
    • Hard to guarantee the shipped hardware matches the design
    • Software people can build everything at home and check. FABs don’t work at home.
  • Speculative execution warned about years ago. Danger ignored. How to make sure the next one isn’t ignored?
    • We always have to do some risky stuff
    • The research on this built up slowly over the years
    • Even if you have only found impractical attacks against something doesn’t mean the practical one doesn’t exist.
  • What criteria do we use to decide who is in?
    • Mechanisms do exist, they were mainly not used. Perhaps because they were for software vulnerabilities
  • Did people move providers?
    • No but Containers made things easier to reboot stuff and shuffle
  • Are there similar vulnerabilities ( similar or general hardware ) coming along?
    • The Kernel page-table patches were fairly general, should cover many similar ones
    • All these performance-optimising bits of your CPU are now attack surfaces
    • What are people going to do if this slows down hardware too much?
  • How do we explain problems like these to politicians etc
    • Legos
    • We still have kernel devs getting their laptops
  • Can we use CPUs that don’t have speculative execution?
    • Not really. Back to 486s
  • Who are we protecting against with the embargo?
    • Everybody
    • The longer period let better fixes get in
    • The meltdown fix could be done in semi-public so had better quality

What is the most common street name in Australia? Rachel Bunder

  • Why?
    • Saw a map of the most common street name by US state
  • Just looking at the name, not the end bit (“Park”, “Road”)
  • Data
    • PSMA Geocoded National Address File – great, but it came out after the project started
    • Used OpenStreetMap
  • Started with Common Name in Sydney
    • Used Metro Extracts – site closing down soon
    • Format is geojson
    • Road files separately provided
  • Procedure
    • Used Python; R also has good features and libraries
    • geopandas (a rough sketch of this step appears at the end of these notes)
    • Had some paths with no names
    • What is a road? – “Something with a name I can drive a car on”
  • Sydney
    • Full street name
      • Victoria Road
      • Pacific Highway
      • oops, looks like names are being counted twice
    • Tried merging them together
    • Roads don’t 100% match at the ends. Added a function to fuzzy-merge roads that are within 100m of each other
    • Still some weird ones but probably won’t affect top
    • Second attempt
      • Short st, George st, William st, John st, Church st
  • Now with just the “name bit”
    • Tried taking out just the last word; ended up with “the” as most common.
    • Started with “The” = whole name
    • Single word = whole name
    • name – descriptor – suffix
    • lots of weird names
    • name list – Park, Victoria, Railway, William, Short
  • Wouldn’t work in many other countries
  • Now for all over Australia
    • overpass data
    • Downloaded in 50km x 50km squares
  • Lessons
    • Start small
    • Choose something familiar
    • Check your bias (different naming conventions)
    • Constant vigilance
    • Know your problem
  • Common plant names
    • Wattle – 15th – 385
  • Other name
    • “The Esplanade” more common than “The Avenue”
  • Top names
    • 5th – Victoria
    • 4th – Church – 497
    • 3rd – George –  551
    • 2nd – Railway
    • 1st – Park – 693
  • By State
    • WA – Forest
    • SA – Railway
    • Vic – Park
    • Tas – Esplanade
    • NT – Smith/Stuart
    • NSW – Park
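
As referenced in the procedure notes above, here is a rough Python/geopandas sketch of the counting step. The file name, the OSM 'name' column and the crude de-duplication are my assumptions; the talk's fuzzy merge of road segments within about 100m of each other is not reproduced here:

    # Rough sketch of counting street names from an OSM roads export.
    import geopandas as gpd

    roads = gpd.read_file("sydney_roads.geojson")   # assumed export of named ways
    roads = roads[roads["name"].notna()]            # drop unnamed paths

    # Collapse the many OSM ways that make up one street into a single row per
    # full name (crude: this under-counts a name reused in different suburbs).
    streets = roads.dissolve(by="name").reset_index()

    # Reduce "Victoria Road" to its name bit "Victoria"; keep single-word names
    # and "The ..." names whole, as described in the notes.
    def name_bit(full_name):
        words = full_name.split()
        if len(words) == 1 or words[0].lower() == "the":
            return full_name
        return " ".join(words[:-1])

    print(streets["name"].map(name_bit).str.title().value_counts().head(10))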

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Keynote – Hugh Blemings

Wandering through the Commons

Reflections on Free and Open Source Software/Hardware in Australia, New Zealand and beyond

  • Past Linux.conf.au’s reviewed
  • FOSS in Aus and NZ
    • Above average per capita
  • List of Aus / NZ people and their contributions
    • John Lions , Lions book on Unix
    • Pia Andrews/Waugh/Smith – Open Government, GovHack, Linux Australia, Open Data
    • Vik Oliver – 3D Printing
    • Clare Curran – Open Government in NZ
    • plus a bunch of others

Working in Free Software and Open Hardware

  • The basics
    • Be visible in projects of relevance
      • You will be typed into Google, looked at on GitHub
    • Be yourself
      • But be business friendly
    • LinkedIn is a thing, really
    • Need an accurate basic presence
  • Finding a new job
    • Networks
    • Local user groups
    • Conferences
    • The projects you work on
  • Application and negotiation
    • Be professional, courteous
    • Do homework about company and culture
    • Talk to people that work there
    • Spend time on interview prep
      • Know your stuff, if you don’t know, say so
    • Think about Salary expectations and stick to them
      • Val Aurora’s page on this is excellent
    • Ask to keep copyright on your code
      • Should be a no-brainer for a FOSS/OH company
  • In the Job
    • Takes time to get into groove, don’t sweat it
    • Get out every now and then, particularly if working from home
    • Work/life balance
    • Know when to jump
      • Poisonous workplaces
    • An aside to people managers
      • Bring your best or don’t be a people manager
      • Take your reports’ welfare seriously

Looking after You

  • Ours is in the main a sedentary and solitary pursuit
    • Exercise
  • Sitting and standing in front of a desk all day is bad
    • Take breaks
  • Depression is a real thing
  • Eat more vegetables
  • Find friends/colleagues to exercise with

Working in FOSS / OH – Staying Current

  • Look over a colleagues shoulder
  • Do something that is not part of your regular job
    • low level programming
    • Larger systems, OpenStack
  • Stay up to date with security blogs and the like
    • Many of the attack vectors have generic relevance
  • Take the lid off, tinker with hardware
    • Lots of videos online to help or just watch

Make Hay while the Sun Shines

  • Save some money for rainy day
  • Keep networks Open
  • Even when you have a job

You’re fired … Now What? – In a moment

  • Don’t panic
    • Going out in a twitter storm won’t help anyone
  • It’s not personal
    • It is the position that is no longer needed, not you
  • If you think it an unfair dismissal, seek legal advice before signing anything
  • It is normal to feel rubbish
  • Beware of imposter syndrome
  • Try to keep 2-3 opportunities in the pipeline
  • Don’t assume people will remember you
    • It’s not personal, everyone gets busy
    • It’s okay to (politely, naturally) follow up periodically
  • Keep the search a little narrow for the first week or two
    • Then expand widely
  • Balance taking “something/everything” as better than waiting for your dream job

Dream Job

  • Power 9 CPU
    • 14nm process
    • 4GHz, 24 cores
    • 25km of wires
    • 8 billion transistors
    • 3900 official chip pins
    • ~19,000 connections from die to the pin

Conclusions

  • Part of a vibrant FOSS/OH community both here and abroad
  • We have accomplished much
  • The most exciting (in both senses) things lie before us
  • We need all of you to take part at every level of the stack
  • Look forward to working with you…


,

Planet DebianSteinar H. Gunderson: Movit 1.6.0 released

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  - Support for effects that work as compute shaders. Compute shaders are
    generally slower than fragment shaders for the same algorithm,
    but allow some forms of communication between shader invocations
    and have more flexible output, which can enable more efficient algorithms.
    See effect.h for more details. Note that the fastest rendering API on
    EffectChain is now to a texture if possible, not to an FBO. This will
    only matter if the last effect is a compute shader.

  - Movit now includes a compute shader implementation of DeinterlaceEffect,
    which is automatically used instead of the fragment shader implementation
    if your GPU and OpenGL driver supports it (in practice, this means on
    all platforms except on macOS). The compute shader version is typically
    20–80% faster than the fragment shader version, depending on your GPU
    and other factors.

    A compute shader implementation of ResampleEffect was written but
    ultimately failed to be faster, and so is not included.

  - Support for microbenchmarks of effects through the Google microbenchmarking
    framework (optional). Currently, DeinterlaceEffect and ResampleEffect have
    benchmarks; enable them by running the unit test with --benchmark (also try
    --benchmark --help).

  - Effects can now explicitly request _not_ to have mipmaps, which means they
    can do so without needing to request bounce and fiddling with the sampler
    state. Note that this is an API change for effects.

  - Movit now requires C++11, both to build and to #include the header files.
    Support for SDL1 has been dropped; unit tests and the demo program now need
    SDL2.

  - Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Planet DebianShirish Agarwal: The Pune Metro 1st anniversary celebrations

Pune Metro facebook, twitter friends

This will be long. First and foremost, a couple of days ago I got the following direct message on my Twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on Facebook. Some people have asked me privately what I did.

First of all, let me be very clear. I did not enter into any competition or put up any queries with the aim of getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and by choice, and I do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I’m sure most of you will agree with me: it takes more than twice the time if you are taking public transport, at least in India. Part of it is due to drivers not keeping the GPS on, and to people/users not asking or pushing for GPS to be used, with that location-based info used to work out when the next bus is going to come. For instance, my journey to PCMC was roughly 20 km and took about 45 minutes, but waiting took almost twice that, and this was not rush hour, when it could easily have taken double the time. Hence people opt for private vehicles even though they know it’s harmful for the environment as well as for themselves and their loved ones.

PMC (Pune Municipal Corporation) has had a plan for over a decade to use GPS to let the travelling public know roughly when the next bus would arrive, but for a number of reasons (corruption, lack of training, awareness, discipline) it has gone nowhere. All of this hampers citizens’ productivity, people are forced to get private vehicles, and it becomes a zero-sum game. There is much more, but I don’t want to go into it here.

Now people have asked me what sort of suggestions I gave or am giving –

After seeing MahaMetro’s interaction with the press yesterday, it seems the press or media have a very poor understanding of the dynamics and are not really interested in enriching citizens’ understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro not sharing knowledge as much as it could, given the opportunities that digital media/space provide at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don’t think any of the people from the press, especially the English-language press, have seen the DPR, or otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is this: let’s say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties – so how will you build it? Also, you don’t have the money. The DPR is different only in the scale of things, and construction of the routes is done not by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts, and companies/contractors are asked to submit tenders for the packets they are interested in.

The way I see it, the DPR has to figure out the right of way where construction of the spans has to happen, where the stations have to be built, from where electricity and water have to come, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars, etc.

There is a pre-qualification round so that only eligible bidders with a history of doing work at a similar scale take part, followed by bidding on who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for example because the reserve price is too high from the contractors’ point of view, then the reserve price is lowered. The idea is simply to have price discovery by a method that can be seen as just and fair.

The press seemed to be more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat over something which, to my mind, is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can’t comment on when the extension to Nigdi will happen unless the DPR for extension to Nigdi is made, land is found and the various cost heads, expenses and funding is approved in the State and Central Government and funding from multiple places is done.

The other question being raised by the press was the razing of the BRTS in Pune. The press knew it is neither Mr. Dixit’s place nor his responsibility to comment on whatever happens to the BRTS; that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit’s obligations, they are to build the Pune Metro safely and quickly, using good materials, to provide good signage, and to give an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and upgrade. You should use something like WordPress, or anything where you are able to change themes every now and then. Every 3-6 months the theme should be tweaked so the content remains, or at least looks, fresh.

3. There are no time-stamps on any of the videos. They should at the very least have time-stamps so some sort of contextual information is available.

4. There is no way to know if there is news. News should be highlighted and more information shared. For instance, there should have been more info about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how would the Singapore Government help us in achieving a unified multi-modal transport model.

There were/are tons of questions that the press could have asked but didn’t, as above and more below.

5. The DPR was made in November 2015 and it is now 2018. The prices probably need to be adjusted to take into consideration the changes over those 3 years, and they will probably keep changing until 2021.

6. Similarly, there are time-gaps between plans and the execution of those plans, and we Puneites don’t even know what the plan is.

I would urge Pune Metro to have a dynamic plan which shows, with blinking lights, the areas in which work is progressing and which are not currently active. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea which could be done, or may even already be planned, is to have a single photograph taken every day at, say, 1200 hrs at all the sites, at 640×480 resolution. These could be uploaded to the site and published on a separate web-page which, over days and weeks, could in turn be made into a time-lapse video, similar to what was achieved for the first viaduct video shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), with a camera lens, a solar cell and a modem, with instructions to stagger the images sent to the Pune Metro rail portal in case there is already web traffic. A specific port (not port 80) could be used.

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler – something which has already been done once for the viaduct video mentioned above.
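
As a purely hypothetical sketch of that stitching step (the directory layout, file names and ffmpeg options are my assumptions, not anything Pune Metro uses), one photo per day could be turned into a time-lapse like this:

    # Turn one 640x480 photo per day into a time-lapse video with ffmpeg.
    import subprocess
    from pathlib import Path

    frames = sorted(Path("daily_photos").glob("*.jpg"))   # e.g. 2018-01-24.jpg, ...
    list_file = Path("frames.txt")
    list_file.write_text(
        "".join(f"file '{p.as_posix()}'\nduration 0.2\n" for p in frames))

    subprocess.run([
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", str(list_file),
        "-vf", "scale=640:480", "-pix_fmt", "yuv420p",
        "timelapse.mp4",
    ], check=True)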

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust this if there were a web service which tells whether things are going according to schedule or are a bit off. It does overlap a bit with my earlier suggestion, but there are many development projects around the world which show tentative and actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section about traffic diversions, with blinkers or some other indication of the road diversions which are in effect.

10. Another would be to have an RSS feed of all news found by various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and read for themselves.

11. Statistics of jobs created (both direct and indirect) due to Pune Metro works, displayed prominently.

12. Have a glossary of terms, which could easily be put together by having an average 10th-12th standard student go through, say, the DPR and note which terms they have problems with.

The simplest example is the word ‘Reach’, which is used in a different context in Pune Metro than what is usually understood.

13. Are there any Indian SMEs – and if so, how many – that have been entrusted, whether via joint venture or otherwise, with knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Have any performance and load guarantees been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruit. Also, I’m no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, who understand the medium in a far better, more intimate way than the crude manner I have shared above, should there be a requirement.

I am a student who believes democracy needs work, and needs effort to make it work. If citizens themselves will not ask these questions, who will?

Krebs on SecurityChronicle: A Meteor Aimed At Planet Threat Intel?

Alphabet Inc., the parent company of Google, said today it is in the process of rolling out a new service designed to help companies more quickly make sense of and act on the mountains of threat data produced each day by cybersecurity tools.

Countless organizations rely on a hodgepodge of security software, hardware and services to find and detect cybersecurity intrusions before an incursion by malicious software or hackers has the chance to metastasize into a full-blown data breach.

The problem is that the sheer volume of data produced by these tools is staggering and increasing each day, meaning already-stretched IT staff often miss key signs of an intrusion until it’s too late.

Enter “Chronicle,” a nascent platform that graduated from the tech giant’s “X” division, which is a separate entity tasked with tackling hard-to-solve problems with an eye toward leveraging the company’s core strengths: Massive data analytics and storage capabilities, machine learning and custom search capabilities.

“We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” wrote Stephen Gillett, CEO of the new venture.

Few details have been released yet about how exactly Chronicle will work, although the company did say it would draw in part on data from VirusTotal, a free service acquired by Google in 2012 that allows users to scan suspicious files against dozens of commercial antivirus tools simultaneously.

Gillett said his division is already trialing the service with several Fortune 500 firms to test the preview release of Chronicle, but the company declined to name any of those participating.

ANALYSIS

It’s not terribly clear from Gillett’s post or another blog post from Alphabet’s X division by Astro Teller how exactly Chronicle will differentiate itself in such a crowded market for cybersecurity offerings. But it’s worth considering the impact that VirusTotal has had over the years.

Currently, VirusTotal handles approximately one million submissions each day. The results of each submission get shared back with the entire community of antivirus vendors who lend their tools to the service — which allows each vendor to benefit by adding malware signatures for new variants that their tools missed but that a preponderance of other tools flagged as malicious.

Naturally, cybercriminals have responded by creating their own criminal versions of VirusTotal: So-called “no distribute” scanners. These services cater to malware authors, and use the same stable of antivirus tools, except they prevent these tools from phoning home to the antivirus companies about new, unknown variants.

On balance, it’s difficult to know whether the benefit that antivirus companies — and by extension their customers — gain by partnering with VirusTotal outweighs the mayhem enabled by these no-distribute scanners. But it seems clear that VirusTotal has helped antivirus companies and their customers do a better job focusing on threats that really matter, as opposed to chasing after (or cleaning up after) so-called “false positives,” — benign files that erroneously get flagged as malicious.

And this is precisely the signal-to-noise challenge created by the proliferation of security tools used in a typical organization today: How to spend more of your scarce cybersecurity workforce, budget and time identifying and stopping the threats that matter and less time sifting through noisy but otherwise time-wasting alerts triggered by non-threats.

I’m not a big listener of podcasts, but I do find myself increasingly making time to listen to Risky Business, a podcast produced by Australian cybersecurity journalist Patrick Gray. Responding to today’s announcement on Chronicle, Gray said he likewise had few details about it but was looking forward to learning more.

“Google has so much data and so many amazing internal resources that my gut reaction is to think this new company could be a meteor aimed at planet Threat Intel™️,” Gray quipped on Twitter, referring to the burgeoning industry of companies competing to help companies trying to identify new threats and attack trends. “Imagine if other companies spin out their tools…Netflix, Amazon, Facebook etc. That could be a fundamentally reshaped industry.”

Well said. I also look forward to hearing more about how Chronicle works and, more importantly, if it works.

Full disclosure: Since September 2016, KrebsOnSecurity has received protection against massive online attacks from Project Shield, a free anti-distributed denial-of-service (DDoS) offering provided by Jigsaw — another subsidiary of Google’s parent company. Project Shield provides DDoS protection for news, human rights, and elections monitoring Web sites.

Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Planet DebianRenata D'Avila: Ideas for the project architecture and short term goals

There have been many discussions about planning for the FOSS calendar. In this post, I report on some of the ideas.

How I first thought of the FOSS Events calendar

Back in December, when I was just making sense of my surroundings and trying to find a way to start the internship, I drew this diagram to picture in my head how everything would work:

A diagram showing the schema that will be described below. Each item is connected to the next using arrows, except for the relationship between the user interface and the API, where data flows both ways.

  1. There would be a "crawler.py" module, which would access each site on a determined list (it could be Facebook, Meetup or any other site, such as another calendar) that has events information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats.

  5. There would be a user interface to get data from API.py and to display this data. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, that plugin would fall into this category.]
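
To make the flow above concrete, here is a minimal Python sketch of how the modules could hand data to each other. The event fields and the in-memory lists standing in for the dirty_events and events databases are placeholders, not decisions the project has made:

    # Sketch of the crawler -> validator -> parser -> API flow described above.
    import json
    from datetime import datetime

    dirty_events = []   # stands in for the dirty_events database
    events = []         # stands in for the cleaned events database

    def crawl(sources):
        """crawler.py: pull raw event data from each configured site."""
        for fetch in sources:            # each source is just a callable here
            dirty_events.extend(fetch())

    def validate(raw):
        """validator.py: keep only records with the minimum required data."""
        return bool(raw.get("title")) and bool(raw.get("start"))

    def parse():
        """parser.py: clean and normalise dirty records into the events store."""
        for raw in dirty_events:
            if validate(raw):
                events.append({
                    "title": raw["title"].strip(),
                    "start": datetime.fromisoformat(raw["start"]),
                    "url": raw.get("url", ""),
                })

    def api_json():
        """API.py: return the stored events formatted as JSON."""
        return json.dumps(
            [{**e, "start": e["start"].isoformat()} for e in events], indent=2)

    # Example: crawl([lambda: [{"title": "MiniDebConf", "start": "2018-04-14T09:00"}]]),
    # then parse() and print(api_json()).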

The ideas that Daniel put on paper

Then, when I shared it with my mentors, Daniel came up with this:

Another diagram. On the left, the plugin boxes; they connect to an aggregator, which feeds into the storage. The storage then outputs to reports and a data dump, represented on the right.

Daniel proposed that modules or plugins could be developed or improved (some already exist, but they might not support iCalendar URLs) for MoinMoin, Drupal and Wordpress that would allow the event data each of these systems holds to be aggregated. Information from the Meetup and Facebook APIs could be converted to iCalendar to be aggregated as well. This aggregation process could happen through a cron job – and I believe daily is enough, because people don't usually publish an event happening the very next day (they need time for people to acknowledge it). If the time frame ends up not being ideal, this can be reviewed and changed later.

Once all this data is gathered, it would then be stored, inserting it or updating it in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be by writing a new function for it. Just like there are other functions in this plugin to change the display of the calendar, there could be a function that converts the data to the iCalendar format and allows downloading the resulting file.
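
As a standalone sketch of the kind of function I have in mind (the event fields are assumptions, and the real version would take its data from the wiki page through the EventCalendar macro rather than from a list of dicts), generating the iCalendar text itself could look roughly like this:

    # Sketch: turn event data into iCalendar text for download.
    from datetime import datetime

    def events_to_ical(events, calname="FOSS events"):
        def fmt(dt):
            return dt.strftime("%Y%m%dT%H%M%S")
        lines = [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//EventCalendar sketch//EN",
            "X-WR-CALNAME:" + calname,
        ]
        for ev in events:
            lines += [
                "BEGIN:VEVENT",
                "UID:" + ev["id"],
                "DTSTART:" + fmt(ev["start"]),
                "DTEND:" + fmt(ev["end"]),
                "SUMMARY:" + ev["title"],
                "END:VEVENT",
            ]
        lines.append("END:VCALENDAR")
        return "\r\n".join(lines) + "\r\n"

    if __name__ == "__main__":
        print(events_to_ical([{
            "id": "debconf18@example.org",
            "title": "DebConf18",
            "start": datetime(2018, 7, 29, 9, 0),
            "end": datetime(2018, 8, 5, 18, 0),
        }]))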

Krebs on SecurityExpert: IoT Botnets the Work of a ‘Vast Minority’

In December 2017, the U.S. Department of Justice announced indictments and guilty pleas by three men in the United States responsible for creating and using Mirai, a malware strain that enslaves poorly-secured “Internet of Things” or IoT devices like security cameras and digital video recorders for use in large-scale cyberattacks.

The FBI and the DOJ had help in their investigation from many security experts, but this post focuses on one expert whose research into the Dark Web and its various malefactors was especially useful in that case. Allison Nixon is director of security research at Flashpoint, a cyber intelligence firm based in New York City. Nixon spoke with KrebsOnSecurity at length about her perspectives on IoT security and the vital role of law enforcement in this fight.

Brian Krebs (BK): Where are we today with respect to IoT security? Are we better off than we were a year ago, or is the problem only worse?

Allison Nixon (AN): In some aspects we’re better off. The arrests that happened over the last year in the DDoS space, I would call that a good start, but we’re not out of the woods yet and we’re nowhere near the end of anything.

BK: Why not?

AN: Ultimately, what’s going on with these IoT botnets is crime. People are talking about these cybersecurity problems — problems with the devices, etc. — but at the end of the day it’s crime and private citizens don’t have the power to make these bad actors stop.

BK: Certainly security professionals like yourself and others can be diligent about tracking the worst actors and the crime machines they’re using, and in reporting those systems when it’s advantageous to do so?

AN: That’s a fair argument. I can send abuse complaints to servers being used maliciously. And people can write articles that name individuals. However, it’s still a limited kind of impact. I’ve seen people get named in public and instead of stopping, what they do is improve their opsec [operational security measures] and keep doing the same thing but just sneakier. In the private sector, we can frustrate things, but we can’t actually stop them in the permanent, sanctioned way that law enforcement can. We don’t really have that kind of control.

BK: How are we not better off?

AN: I would say that as time progresses, the community that practices DDoS and malicious hacking and these pointless destructive attacks get more technically proficient when they’re executing attacks, and they just become a more difficult adversary.

BK: A more difficult adversary?

AN: Well, if you look at the individuals that were the subject of the announcement this month, and you look in their past, you can see they’ve been active in the hacking community a long time. Litespeed [the nickname used by Josiah White, one of the men who pleaded guilty to authoring Mirai] has been credited with lots of code.  He’s had years to develop and as far as I could tell he didn’t stop doing criminal activity until he got picked up by law enforcement.

BK: It seems to me that the Mirai authors probably would not have been caught had they never released the source code for their malware. They said they were doing so because multiple law enforcement agencies and security researchers were hot on their trail and they didn’t want to be the only ones holding the source code when the cops showed up at their door. But if that was really their goal in releasing it, doing so seems to have had the exact opposite effect. What’s your take on that?

AN: You are absolutely, 100 million percent correct. If they just shut everything down and left, they’d be fine now. The fact that they dumped the source was a tipping point of sorts. The damages they caused at that time were massive, but when they dumped the source code the amount of damage their actions contributed to ballooned [due to the proliferation of copycat Mirai botnets]. The charges against them specified their actions in infecting the machines they controlled, but when it comes to what interested researchers in the private sector, the moment they dumped the source code — that’s the most harmful act they did out of the entire thing.

BK: Do you believe their claimed reason for releasing the code?

AN: I believe it. They claimed they released it because they wanted to hamper investigative efforts to find them. The problem is that not only is it incorrect, it also doesn’t take into account the researchers on the other end of the spectrum who have to pick from many targets to spend their time looking at. Releasing the source code changed that dramatically. It was like catnip to researchers, and was just a new thing for researchers to look at and play with and wonder who wrote it.

If they really wanted to stay off law enforcement’s radar, they would be as low profile as they could and not be interesting. But they did everything wrong: They dumped the source code and attacked a security researcher using tools that are interesting to security researchers. That’s like attacking a dog with a steak. I’m going to wave this big juicy steak at a dog and that will teach him. They made every single mistake in the book.

BK: What do you think it is about these guys that leads them to this kind of behavior? Is it just a kind of inertia that inexorably leads them down a slippery slope if they don’t have some kind of intervention?

AN: These people go down a life path that does not lead them to a legitimate livelihood. They keep doing this and get better at it and they start to do these things that really can threaten the Internet as a whole. In the case of these DDoS botnets, it’s worrying that these individuals are allowed to go this deep before law enforcement catches them.

BK: There was a narrative that got a lot of play recently, and it was spun by a self-described Internet vigilante who calls himself “the Janitor.” He claimed to have been finding zero-day exploits in IoT devices so that he could shut down insecure IoT things that can’t really be secured before or maybe even after they have been compromised by IoT threats like Mirai. The Janitor says he released a bunch of his code because he’s tired of being the unrecognized superhero that he is, and many in the media seem to have eaten this up and taken his manifesto as gospel. What’s your take on the Janitor, and his so-called “bricker bot” project?

AN: I have to think about how to choose my words, because I don’t want to give anyone bad ideas. But one thing to keep in mind is that his method of bricking IoT devices doesn’t work, and it potentially makes the problem worse.

BK: What do you mean exactly?

AN: The reason is sometimes IoT malware like Mirai will try to close the door behind it, by crashing the telnet process that was used to infect the device [after the malware is successfully installed]. This can block other telnet-based malware from getting on the machine. And there’s a lot of this type of King of the Hill stuff going on in the IoT ecosystem right now.

But what [this bricker bot] malware does is, a lot of times, it reboots a machine, and when the device is in that state the vulnerable telnet service goes back up. It used to be that a lot of devices were infected with the very first Mirai, and when the [control center] for that botnet went down they were orphaned. We had a bunch of Mirai infections phoning home to nowhere. So there’s a real risk of taking a machine that was in this weird state and making it vulnerable again.

BK: Hrm. That’s a very different story from the one told by the Bricker bot author. According to him, he spent several years of his life saving the world from certain doom at the hands of IoT devices. He even took credit for foiling the Mirai attacks on Deutsche Telekom. Could this just be a case of researcher exaggerating his accomplishments? Do you think his Bricker bot code ever really spread that far?

AN: I don’t have any evidence that there was mass exploitation by Bricker bot. I know his code was published. But when I talk to anyone running an IoT honeypot [a collection of virtual or vulnerable IoT devices designed to attract and record novel attacks against the devices] they have never seen it. The consensus is that regardless of peoples’ opinion on it we haven’t seen it in our honeypots. And considering the diversity of IoT honeypots out there today, if it was out there in real life we would have seen it by now.

BK: A lot of people believe that we’re focusing on the wrong solutions to IoT security — that having consumers lock down IoT devices security-wise or expecting law enforcement agencies to fix this problem for us are pollyannish ideas that in any case don’t address the root cause: which is that there are a lot of companies producing crap IoT products that have virtually no security. What’s your take?

AN: The way I approach this problem is I see law enforcement as the ultimate end goal for all of these efforts. When I look at the IoT DDoS activity and the actual human beings doing this, the vast majority of Mirai attacks, attack infrastructure, malware variants and new exploits are coming from a vast minority of people doing this. That said, the way I perceive the underground ecosystem is probably different than the way most people perceive it.

BK: What’s the popular perception, do you think?

AN: It’s that, “Oh hey, one guy got arrested, great, but another guy will just take his place.” People compare it to a drug dealer on the street corner, but I don’t think that’s accurate in this case. The difference is when you’re looking at advanced criminal hacking campaigns, there’s not usually a replacement person waiting in the wings. These are incredibly deep skills developed over years. The people doing innovations in DDoS attacks and those who are driving the field forward are actually very few. So when you can ID them and attach behavior to the perpetrator, you realize there’s only a dozen people I need to care about and the world suddenly becomes a lot smaller.

BK: So do you think the efforts to force manufacturers to harden their products are a waste of time?

AN: I want to make it clear that all these different ways to tackle the problem…I don’t want to say one is more important than the other. I just happened to be working on one component of it. There’s definitely a lot of disagreement on this. I totally recognize this as a legitimate approach. A lot of people think the way forward is to focus on making sure the devices are secure. And there are efforts ongoing to help device manufacturers create more secure devices that are more resistant to these efforts.

And a lot is changing, although slowly. Do you remember way back when you bought a Wi-Fi router and it was open by default? Because the end user was obligated to change the default password, we had open Wi-Fi networks everywhere. As years passed, many manufacturers started making them more secure. For example, many of these devices now have customers refer to a sticker on the machine that has a unique Wi-Fi password. That type of shift may be an example of what we can see in the future of IoT security.

BK: In the wake of the huge attacks from Mirai in 2016 and 2017, several lawmakers have proposed solutions. What do you think of the idea that it doesn’t matter what laws we pass in the United States that might require more security by IoT makers, that those makers are just going to keep on ignoring best practices when it comes to security?

AN: It’s easy to get cynical about this and a lot of people definitely feel like these companies don’t sell directly to the U.S. and therefore don’t care about such efforts. Maybe in the short term that might be true, but in the long term I think it ends up biting them if they continue to not care.

Ultimately, these things just catch up with you if you have a reputation for making a poor product. What if you had a reputation for making a device that if you put it on the Internet it would reboot every five minutes because it’s getting attacked? Even if we did enact security requirements for IoT that manufacturers not in the U.S. wouldn’t have to follow, it would still be in their best interests to care, because they are going to care sooner or later.

BK: I was on a Justice Department conference call with other journalists on the day they announced the Mirai author arrests and guilty pleas, and someone asked why this case was prosecuted out of Alaska. The answer that came back was that a great many of the machines infected with Mirai were in Alaska. But it seems more likely that it was because there was an FBI agent there who decided this was an important case but who actually had a very difficult time finding enough infected systems to reach the threshold needed to prosecute the case. What’s your read on that?

AN: I think that this case is probably going to set precedent in terms of the procedures and processes used to go after cybercrime. I’m sure you finished reading The Wired article about the Alaska investigation into Mirai: It goes into detail about some of the difficult things that the Alaska FBI field office had to do to satisfy the legal requirements to take the case. Just to prove they had jurisdiction, they had to find a certain number of infected machines in Alaska.

Those were not easy to find, and in fact the FBI traveled far and wide in order to find these machines in Alaska. There are all kinds of barriers big and small that slow down the legal process for prosecuting cases like this, some of which are legitimate and some that I think are going to end up being streamlined after a case like this. And every time a successful case like this goes through [to a guilty plea], it makes it more possible for future cases to succeed.

This one group [that was the subject of the Mirai investigation] was the worst of the worst in this problem area. And right now it’s a huge victory for law enforcement to take down one group that is the worst of the worst in one problem area. Hopefully, it will lead to the takedown of many groups causing damage and harming people.

But the concept that in order for cybercriminals to get law enforcement attention they need to make international headlines and cause massive damage needs to change. Most cybercriminals probably think that what they’re doing nobody is going to notice, and in a sense they’re correct because there is so much obvious criminal activity blatantly connected to specific individuals. And that needs to change.

BK: Is there anything we didn’t talk about related to IoT security, the law enforcement investigations into Mirai, or anything else you’d like to add?

AN: I want to extend my gratitude to the people in the security industry and network operator community who recognized the gravity of this threat early on. There are a lot of people who were not named [in the stories and law enforcement press releases about the Mirai arrests], and I want to say thank you for all the help. This couldn’t have happened without you.

Worse Than FailureSponsor Post: Make Your Apps Better with Raygun

I once inherited an application which had a bug in it. Okay, I’ve inherited a lot of applications like that. In this case, though, I didn’t know that there was a bug, until months later, when I sat next to a user and was shocked to discover that they had evolved a complex work-around to bypass the bug which took about twice as long, but actually worked.

“Why didn’t you open a ticket? This shouldn’t be like this.”

“Enh… it’s fine. And I hate dealing with tickets.”

In their defense, our ticketing system at that office was a godawful nightmare, and nobody liked dealing with it.

The fact is, awful ticket tracking aside, 99% of users don’t report problems in software. Adding logging can only help so much: eventually you have a giant haystack filled with needles you don’t even know are there. You have no way to see what your users are experiencing out in the wild.

But what if you could? What if you could build, test and deploy software with a real-time feedback loop on any problems the users were experiencing?

Our new sponsor, Raygun, gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool.

You're probably using software and services today that rely on Raygun to identify when users have a poor experience: Domino's Pizza, Coca-Cola, Microsoft and Unity all use it, along with many others.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Worse Than FailureAll Saints' Day

Cathedral Antwerp July 2015-1

Oh, PHP. It's the butt of any number of jokes in the programming community. Those who do PHP often lie and pretend they don't, just to avoid the social stigma. Today's submitter not only works in PHP, but they also freelance: the bottom of the bottom of the development hierarchy.

Last year, Ilya was working on a Joomla upgrade as well as adjusting several components on a big, obscure website. As he was poking around in the custom code, he found today's submission. You see, the website is in Italian. At the top of the page, it shows not only the date, but also the saint of the day. This is a Catholic thing: every day of the year has a patron saint, and in certain cultures, you might even be named for the saint whose day you were born on. A full list can be found on this Italian Wikipedia page.

Every day, the website was supposed to display text like "18 luglio: santi Sinforosa e sette compagni" (July 18: Sinforosa and the Seven Companions). But the code that generated this string had broken. It wasn't Ilya's task to fix it, but he chose to do so anyway, because why not?

His first suspect for where this text came from was this mess of Javascript embedded in the head:

     function getDataGiorno(){
     data = new Date();
     ora =data.getHours();
     minuti=data.getMinutes();
     secondi=data.getSeconds();
     giorno = data.getDay();
     mese = data.getMonth();
     date= data.getDate();
     year= data.getYear();
     if(minuti< 10)minuti="0"+minuti;
     if(secondi< 10)secondi="0"+secondi;
     if(year<1900)year=year+1900;
     if(ora<10)ora="0"+ora;
     if(giorno == 0) giorno = " Domenica ";
     if(giorno == 1) giorno = " Lunedì ";
     if(giorno == 2) giorno = " Martedì ";
     if(giorno == 3) giorno = " Mercoledì ";
     if(giorno == 4) giorno = " Giovedì ";
     if(giorno == 5) giorno = " Venerdì ";
     if(giorno == 6) giorno = " Sabato ";
     if(mese == 0) mese = "gennaio ";
     if(mese ==1) mese = "febbraio ";
     if(mese ==2) mese = "marzo ";
     if(mese ==3) mese = "aprile ";
     if(mese ==4) mese = "maggio ";
     if(mese ==5) mese = "giugno ";
     if(mese ==6) mese = "luglio ";
     if(mese ==7) mese = "agosto ";
     if(mese ==8) mese = "settembre ";
     if(mese ==9) mese = "ottobre ";
     if(mese ==10) mese = "novembre ";
     if(mese ==11) mese = "dicembre";
     var dt=date+" "+mese+" "+year;
     var gm =date+"_"+mese;

     return gm.replace(/^\s+|\s+$/g,""); ;
     }

     function getXMLHttp() {
     var xmlhttp = null;
     if (window.ActiveXObject) {
       if (navigator.userAgent.toLowerCase().indexOf("msie 5") != -1) {
         xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
       } else {
           xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
       }
     }
     if (!xmlhttp && typeof(XMLHttpRequest) != 'undefined') {
       xmlhttp = new XMLHttpRequest()
     }
     return xmlhttp
     }

     function elaboraRisposta() {
      var dt=getDataGiorno();
      var data = dt.replace('_',' ');
      if (dt.indexOf('1_')==0){
          dt.replace('1_','%C2%BA');
      }
       // alert("*"+dt+"*");
     var temp = new Array();
     temp = objHTTP.responseText.split(dt);
     //alert(temp[1]);

     var temp1=new Array();
     temp1=temp[1].split(":");
     temp=temp1[1].split("");


      if (objHTTP.readyState == 4) {
      santi=temp[0].split(",");
      //var app = new Array();
      //app=santi[0].split(";");
      santo=santi[0];
      //alert(santo);

        // document.write(data+" - "+santo.replace(/\/wiki\//g,"http://it.wikipedia.org/wiki/"));
        document.write(data+" - "+santo);
      }else {

      }

     }
     function loadDati() {
      objHTTP = getXMLHttp();
      objHTTP.open("GET", "calendario.html" , false);

      objHTTP.onreadystatechange = function() {elaboraRisposta()}


     objHTTP.send(null)
     }

If you've never seen Joomla before, do note that most templates use jQuery. There's no need to use ActiveXObject here.

"calendario.html" contained very familiar text: a saved copy of the Wikipedia page linked above. This ought to be splitting the text with the date, avoiding parsing HTML with regex by using String.prototype.split(), and then parsing the HTML to get the saint for that date to inject into the HTML.

But what if a new saint gets canonized, and the calendar changes? By caching a local copy, you ensure that the calendar will get out of date unless meticulously maintained. Therefore, the code to call this Javascript was commented out entirely in the including page:

<div class="santoForm">
<!--  <?php echo "<script>loadDati()</script>" ;  ?> -->
<?php ...

Instead, it had been replaced with this:

       setlocale(LC_TIME, 'ita', 'it_IT.utf8');
       $gg = ltrim(strftime("%d"), '0');

       if ($gg=='1'){
        $dt=ltrim(strftime("1º %B"), '0');
       }else{
         $dt=ltrim(strftime("%d %B"), '0');
       }
       //$dt='4 febbraio';
     $html = file_get_contents('http://it.wikipedia.org/wiki/Calendario_dei_santi');
     $doc = new DOMDocument();
     $doc->loadHTML($html);
     $elements = $doc->getElementsByTagName('li');
     foreach ($elements as $element) {
        if (strpos($element->nodeValue,utf8_encode($dt))!== false){
         list($santo,$after)= split(';',$element->nodeValue);
         break ;
        }else {}
     }
       list($santo,$after)= split(',',utf8_decode($santo));
       if (strlen ( $santo) > 55) {
         $santo=substr($santo, 0, 55)."...";
       }

This migrates the logic to the backend—the One True Place for all such logic—and uses standard library routines, just as it should. Of course, this being PHP, it breaks horribly if you look at it cross-eyed, or—like Ilya—don't have the Italian locale installed on your development machine. And of course, it'll also break if the live Wikipedia page about the saints gets reformatted. But what's the likelihood of that? Plus, it's not cached in the least, letting every visitor see updates in real time. After all, next time they canonize a saint, everyone will rush right to this site to verify that the day changed. That's how the Internet works, right?

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

CryptogramDetecting Drone Surveillance with Traffic Analysis

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject -- a window, say -- someone might want to guard from potential surveillance. Then they remotely intercept a drone's radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone's encrypted video.

The details have to do with the way drone video is compressed:

The researchers' technique takes advantage of an efficiency feature streaming video has used for years, known as "delta frames." Instead of encoding video as a series of raw images, it's compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who's intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.

Research paper and video.

Planet DebianDaniel Pocock: apt-get install more contributors

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer and Synaptic?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 3 – Booting

Securing the Linux boot process Matthew Garrett

  • Without boot security there is no other security
  • MBR Attacks – previously common, still work sometimes
  • Bootloader attacks – Seen in the wild
  • Malicious initrd attacks
    • RAM disk, does stuff like decrypt hard drive
    • Attack captures disk passphrase when typed in
  • How do we fix these?
    • UEFI Secure boot
    • Microsoft required in machines shipped after mid-2012
    • sign objects, firmware trusts some certs, boots things correctly signed
    • Problem solved! Nope
    • initrds are not signed
  • initrds
    • contain local changes
    • do a lot of security stuff
  • TPMs
    • devices on system motherboards
    • slow but inexpensive
    • Not under control of the CPU
    • Set of registers “platform configuration registers”, list of hashes of objects booted in boot process. Measurements
    • PCR can enforce things, stop boots if stuff doesn’t match
    • But stuff changes all the time, eg updating firmware. Can brick machine
  • Microsoft to the rescue
    • Tie Secure boot into measured boot
    • Measure signing keys rather than the actual files themselves
    • But initrds are not signed
  • Systemd to the rescue
    • systemd boot stub (not the systemd boot loader)
    • Embed initrd and the kernel into a single image with a single signature
    • But initrds contain local information
    • End users should not be signing stuff
  • Kernel can be handed multiple initramfs images (via cpio)
    • each unpacked in turn
    • Each will over-write the previous one
    • configuration can be over-written by the signed image, perhaps safely, so that if config is changed, stuff fails
    • unpack config first, code second
  • Kernel command line is also security sensitive
    • eg turn off iommu and dump RAM to extract keys
    • Have a secure command line turning on all security features, then append what the user sends
  • Proof of device state
    • Can show you a number after boot based on the TPM. Can compare to a 2FA device to make sure it is securely booted. Safe to type in passwords
  • Secure Provision of secrets
    • Know a remote machine has booted safely and has not been subverted before sending it secret stuff.

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 2

Dealing with Contributor Overload Holden Karau

  • Developer Advocate at Google
  • Apache Spark, Contributor to BEAM

Some people from big projects, some from projects hoping to get big

  • Remember it’s okay to not fix it all
  • The fun of a small project
    • Simple communication
    • Aligned incentives
    • Easy to tell who knows what
    • Tight community
  • The fun of a large project
    • More people to do the work
    • More impact and people thanking you
    • Lots of ideas and experiences
    • If $$ then fun conferences
    • Get paid to work on it.
  • Is my project on Fire? or just lots of people on it.
    • Measurable
      • User questions spike
      • issue spike
    • Less measurable
      • Non-explicit stuff not being passed on
  • Classic Pipeline
    • Users -> contributors -> committers -> PMC
    • Each stage takes times
    • Very leaky pipeline, perhaps it leaks too much
  • With hyper-growth project can quickly go south
    • Committer:user ratio can’t get too far out.
  • Even without hyper-growth: sadness
    • Same thing happens, but slower
  • Overload – Mitigation
    • You don’t have to answer everyone, this can be hard
    • Stackoverflow
    • Are your answers easily searchable
    • Knowledge base – “do you mean”
    • Take time and look for patterns in questions
    • Find people who like writing and get them to write a book
      • Don’t go for core committers, they will have no time for anything else
  • Issue overload
    • Try and get rid of duplicate tickets
    • Autoclose tickets – mixed results
  • How to deal with a spike
    • Raise the bar
    • Make it easier
    • Get Perl to solve the problem
  • Raising the bar
    • Reject trivial changes – reduces the onramp
    • Add weird system – more posts on how to contribute
  • What can Perl solve
    • Style guide
    • bot bot bots
    • make it faster to merge
    • Improve PR + reviewer notice
    • Can increase productivity
  • Add more committers
    • Takes time and effort
    • People can be shy
    • Make a guide for new folks to follow
    • Have a safe space for people to ask questions
  • Reduce overhead for contributing well
    • Have doc on how to contribute next to the code, not elsewhere that people have to search for.

The Open Sourcing of Infrastructure Elizabeth K. Joseph

The recent history of infrastructure

  • 1998
    • To make a server, use Solaris or NT. Buy off the shelf
    • Linux seen as Cheap Unix
    • Lots of FUD

Got a Junior Sysadmin Job

  • 2004
    • Had to tell people the basics “What is free software?”  , “Using Open Source Web Applications to Produce Business Results”
    • Turning point LAMP stack
    • Flood of changes in how customers interacted with software over the years
      • Reluctance to be locked-in by a vendor
      • Concerns of security
      • Ability to fix bugs ourselves
      • Innovation stifled when software developed in isolation

Last 10 years

  • Changes in how people interacted with software
    • Downtime un-acceptable
    • Reliance on scaling and automation
    • Servers as Pets -> cattle
    • Large focus on data

Open Source is now Ubiquitous

  • Even Microsoft is using it a lot and interacting with the community

Operations tools were not as Open Sourced

  • Configuration Management
    • puppet modules, chef playbooks
  • Open application definitions – juju charms, DC/OS Universe Catalog
  • Full disk images
    • Dockerhub

The Cloud

  • Cloud is the new proprietary
  • EC2-only infrastructure
  • Questions you should ask beforehand
    • Is your service adhering to open standards or am I locked in?
    • Recourse if the company goes out of business
    • Does vendor have a history of communicating about downtime and security problems?
    • Does vendor respond to bugs and feature requests?
    • Will the vendor use data in a way I’m not comfortable with?
    • Initial costs may be low, but do you have a plan to handle long term, growing costs
  • Alternatives
    • Openstack, Kubernetes, Docker Swarm, DC/OS with Apache Mesos

Hybrid Cloud

  • Tooling can be platform agnostic
  • Hard but can be done

Share

Planet DebianFrançois Marier: LXC setup on Debian stretch

Here's how to set up LXC-based "chroots" on Debian stretch. While I wrote about this on Debian jessie, I had to make some networking changes for stretch, and so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

That configuration requires that the veth kernel module be loaded. If you have any kinds of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Restart the network bridge:

    systemctl restart lxc-net.service
    
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://httpredir.debian.org/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can login as root:

sudo lxc-attach -n sid64 passwd

Then login as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois /var/lib/lxc/sid64/rootfs/home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shutdown AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start it up again later once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 1 – k8s @ home and bad buses

How to run Kubernetes on your spare hardware at home, and save the world Angus Lees

  • Mainframe ->
  • PC ->
  • Rackmount PC
    • But the rackmount PC, even with built-in redundancy, will still fail. Or the location will go offline, or your data spreads across multiple machines
  • Since you need to have distributed/redundancy anyway. New model (2005). Grid computing. Clever software, dumb hardware. Loosely coupled servers
    • Libraries -> RPC / Microservices
    • Threadpool -> hadoop
    • SQL -> key/store
    • NFS -> Object store
    • In-place upgrades -> “Immutable” image-based build from scratch
  • Computers in clouds
    • No cases. No redundant Power, journaling on filesystems turned off, etc
  • Everything is in clouds – Secondary effects
    • Corporate driven
    • Apache license over GPL
    • Centralised services rather than federated protocols
    • Profit-driven rather than scratching itches
  • Summary
    • Problem
      • Distributed Systems hard to configure
      • Solutions scale down poorly
      • Most homes don’t have racks of servers
    • Implication
      • Home Free Software “stuck” at single-machine architecture
  • Kubernetes (lots of stuff, but I use it already so just doing unique bits)
    • “Unix Process as a service”
    • Inverts the stack. Data is important then app. Kernel and Hardware unimportant.
    • Easy upgrades, everything is an upgrade
    • Declarative API , command line interface
  • “We’ve conducted this experiment for decades now, and I have news for you, Hardware fails”

Hardware at Home

  • Raid used to be “enterprise” now normal for home
  • Elastic compute for home too
  • Kubernetes for Home
    • Budget $100
      • ARM master nodes
      • Mixed architecture
    • Assume single layer-2 home ethernet
    • Worker nodes – old $500 laptops
      • x86-64
      • CoreOS
      • Broken screens, dead batteries
    • 3 * $30 Banana pis
      • Raspberry Pi2
      • armv7a
      • containOS
    • Persistentvolumes
      • NFS mount from RAID server
    • Service – keepalived-vip
    • Ingress
      • keepalived and nginx-ingress , letsEncrypt
      • Wildcard DNS
    • Status
      • Works!
      • Printing works
      • Install: PXE boot and run coreos-install
    • Status – ungood
      • Banana PIs a bit too slow.
    • github.com/anguslees/k8s-home

Is the 370 the worst bus route in Sydney? Katie Bell

  • The 370 bus
    • Goes to UNSW and Sydney University. Goes around the city
  • If bus runs every 15 minutes, you should not be able to see 3 at once
  • Newspaper articles and Facebook group about how bad it is.
  • Two Questions
    • Bus privatisation better or worse
    • Is the 370 really the worst
  • Data provided
    • Lots of stuff but nothing on reliability
    • But they do have realtime data eg for the Tripetime app (done via a 3rd party)
    • They have a API and Key with standard format via GTFS
  • But they only publish “realtime” data, not the old data
    • So collected the realtime data, once a minute for 4 months
    • 557 GB
  • Format
    • zipfile of csv files
    • IDs sometimes ephemeral
    • Had to match timetable data and realtime data
    • Data had to be tidied up – lots
  • Processing realtime data
    • Download 1 minute
    • Parse
    • Match each of around ~7000 trips in timetable (across all of NSW)
    • Write ~20000 realtime updates to the DB
    • Running 5 EC2 instances at peak
    • Writing up to 40MB/s to the DB
  • Is the 370 the worst?
    • Define “worst”
    • Found NSW definition of what an on-time bus is.
    • No more than 5:59 late or 1:59 early. Measured start/middle/end
    • Victoria definition stricter
    • She defined:
      • Early: more than 2 min early
      • On time: 2 min early – 5 min late
      • Late: more than 5 min late
      • Very late: more than 20 min late
    • Across all trips
      • 3.7 million trips
      • On time 31%
      • More than 20m late 2.86%
    • Best routes
      • Nightime buses
      • Outside of Sydney
      • Shorter routes
      • 86% – 97% or better
    • Worst
      • Less than 5% on time
      • Longer routes
      • 370 is the 22nd worst
        • 8.79% on time
    • Worst routes ( percent > 20 min late)
      • 23% of 370 trips (6th worst)
      • Lots of Wollongong
    • Worst agencies
      • No obvious difference between agencies and private companies
    • Conclusion
      • Privatisation could go either way
      • 370 is close to the worst (277 could be worse) in Sydney
    • bus-shaming.com
    • github.com/katharosada/bus-shaming

Questions

  • Used Spot instances to keep cost down
  • $200 month on AWS
  • Buses better/worse according to time? Not checked yet
  • Wanted to calculate the “wait time” , not done yet.
  • Another feed of bus locations and some other data out there too.
  • Lots of other questions

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Keynote – Karen Sandler

Executive director of Software Freedom Conservancy

Previously spoke at LCA 2012 about closed-source software on her heart implant. Since then she has pivoted her career more towards open-source advocacy.

  • DMCA exemption for medical device research
  • When you ask your doctor about safety of devices you sound like a conspiracy theorist
  • Various problems have been highlighted, some progress
  • Some companies addressing them

Initially published paper highlighting problem without saying she had the device

  • Got pushback from groups who thought she was scaremongering
  • Companies thinking about liability issues
  • After told story in 2012 things improved

Had to get new device recently.

  • Needed this disabled since her job pisses off hackers sometimes
  • All manufacturers said they could not disable wireless access
  • Finally found a single model that could be disabled made by a European manufacturer

 

Note: This is a quick summary; lots more was covered but is hard to capture here. Video should be good. Her slides were broken through much of the talk but she still delivered a great talk.

Share

,

Google AdsenseLet machine learning create your In-feed ads


Last year we launched AdSense Native ads, a new family of ads created to match the look and feel of your site. If your site has an editorial feed (a list of articles or news) or a listings feed (a list of products or services), then Native In-feed ads are a great option to give your users a better experience.

Now we've brought the power of machine learning to In-feed ads, saving you time. If you're not quite sure what fonts, colors, and styles will work best for your site, you can let Google's machine learning help you decide. 

How it works: 

  1. Create a new Native In-Feed ad and select "Let Google suggest a style." 
  2. Enter the URL of a page with a feed you’d like to monetize. AdSense will scan your page to find the best placement. 
  3. Select which feed element you’d like your In-feed ad to match.
  4. Your ad is automatically created – simply place the piece of code into your feed, and you’re done! 

By the way, this method is optional, so if you prefer, you can create your ads manually. 

Posted by: 

Faris Zerdoudi, AdSense Tech Lead 
Violetta Kalathaki, AdSense Product Manager 


Planet DebianJonathan McDowell: Going to FOSDEM 2018

Laura comments that she has no idea who is going to FOSDEM. I’m slightly embarrassed to admit I’ve only been once before, way back in 2005. A mixture of good excuses and disorganisation about arranging to go has meant I haven’t been back since. So a few months ago I made the decision to attend and sorted out the appropriate travel and hotel bookings and I’m pleased to say I’m attending FOSDEM 2018. I get in late Friday evening and fly out on Sunday evening, so I’ll miss the Friday beering but otherwise be around for the whole event. Hope to catch up with a bunch of people there!

Sociological ImagesPod Panic & Social Problems

My gut reaction was that nobody is actually eating the freaking Tide Pods.

Despite the explosion of jokes—yes, mostly just jokes—about eating detergent packets, sociologists have long known about media-fueled cultural panics about problems that aren’t actually problems. Joel Best’s groundbreaking research on these cases is a classic example. Check out these short video interviews with Best on kids not really being poisoned by Halloween candy and the panic over “sex bracelets.”


In a tainted Halloween candy study, Best and Horiuchi followed up on media accounts to confirm cases of actual poisoning or serious injury, and they found many cases couldn’t be confirmed or were greatly exaggerated. So, I followed the data on detergent digestion.

It turns out, there is a small trend. According to a report from the American Association of Poison Control Centers,

…in 2016 and 2017, poison control centers handled thirty-nine and fifty-three cases of intentional exposures, respectively, among thirteen to nineteen year olds. In the first fifteen days of 2018 alone, centers have already handled thirty-nine such intentional cases among the same age demographic.

That said, this trend is only relative to previous years and cannot predict future incidents. The life cycle of internet jokes is fairly short, rotating quickly with an eye toward the attention economy. It wouldn’t be too surprising if people moved on from the pods long before the panic dies out.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianJulien Danjou: Scaling a polling Python application with parallelism

A few weeks ago, Alon contacted me and asked me the following:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

Alon is using such an application to gather information on the hosts it connects to, though that's not important in this case.

In a series of blog posts, I'd like to help Alon solve this problem! We're gonna write an application that can manage millions of hosts.

Well, if you have enough hardware, obviously.

The job

Writing a Python application that connects to a host by ssh can be done using, for example, Paramiko. That will not be the focus of this blog post since it is pretty straightforward to do.

To keep this exercise simple, we'll just use a ping function that looks like this:

import subprocess


def ping(hostname):
    p = subprocess.Popen(["ping", "-c", "3", "-w", "1", hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0


The function ping returns True if the host is reachable and alive, or False if an error occurs (bad hostname, network unreachable, ping timeout, etc.). We're also not trying to make ping fast by specifying a lower timeout or a smaller number of packets. The goal is to scale this task while knowing it takes time to execute.

So ping is going to be the job to be executed by our application. It'll replace ssh in this example, but you'll see it'll be easy to replace it with any other job you might have.

We're going to use this job to accomplish a bigger mission: determine which hosts in my home network are up:

for i in range(255):
    ip = "192.168.2.%d" % i
    if ping(ip):
        print("%s is alive" % ip)


Running this program alone and pinging all 255 IP addresses takes more than 10 minutes.

It is pretty slow because each time we ping a host, we wait for the ping to succeed or time out before starting the next ping. So if you need 3 seconds to ping each host on average, then to ping 255 nodes you'll need 3 seconds × 255 = 765 seconds, and that's more than 12 minutes.

The solution

If 255 hosts need 12 minutes to be pinged, you can imagine how long it's going to be when we're going to test which hosts are alive on the IPv4 Internet – 4 294 967 296 addresses to ping!

Since those ping (or ssh) jobs are not CPU intensive, we can consider that one multi-processor host is going to be powerful enough – at least for a beginning.

The real issue here currently is that those tasks are I/O intensive and executing them serially is very long.

So let's run them in parallel!

To do this, we're going to use threads. Threads are not efficient in Python when your tasks are CPU intensive, but in case of blocking I/O, they are good enough.

Using concurrent.futures

With concurrent.futures, it's easy to manage a pool of threads and schedule the execution of tasks. Here's how we're going to do it:

import functools
from concurrent import futures
import subprocess


def ping(hostname):
    p = subprocess.Popen(["ping", "-q", "-c", "3", "-W", "1",
                          hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0


with futures.ThreadPoolExecutor(max_workers=4) as executor:
    futs = [
        (host, executor.submit(functools.partial(ping, host)))
        for host in ("192.168.2.%d" % i for i in range(255))
    ]

    for ip, f in futs:
        if f.result():
            print("%s is alive" % ip)


The ThreadPoolExecutor is an engine, called executor, that allows us to submit tasks to it. Each task submitted is put into an internal queue using the executor.submit method. This method takes a function to execute as argument.

Then, the executor pulls jobs out of its queue and executes them. In order to execute them, it starts a thread that is going to be responsible for the execution. The maximum number of threads to start is controlled by the max_workers parameter.

executor.submit returns a Future object, that holds the future result of the submitted task. Future objects expose methods to know if the task is finished or not; here we just use Future.result() to get the result. This method will block until the result is ready.
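
The example above collects results in submission order, since it iterates over futs and blocks on each result() in turn. If you'd rather handle each host as soon as its ping finishes, concurrent.futures also provides as_completed. Here is a minimal sketch of that variant (not from the original example; it assumes the same ping function defined above):

from concurrent import futures

# Minimal sketch: assumes ping() from the example above is already defined.
with futures.ThreadPoolExecutor(max_workers=4) as executor:
    # Map each Future back to the host it pings.
    futs = {executor.submit(ping, "192.168.2.%d" % i): "192.168.2.%d" % i
            for i in range(255)}
    # as_completed yields futures in the order they finish, not submission order.
    for f in futures.as_completed(futs):
        if f.result():
            print("%s is alive" % futs[f])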

There's no magic recipe to find how many max workers you should use. It really depends on the nature of the tasks that are submitted. In this case, using a value of 4 brings down the execution time to 3 minutes – roughly 12 minutes divided by 4, which makes sense. Setting the max_workers to 255 (i.e. the number of tasks submitted) will make all the pings start at the same time, producing a CPU usage spike, but bringing down the total execution time to less than 5 seconds!

Obviously, you wouldn't be able to start 4 billion threads in parallel, but if your system is big and fast enough, and your task using more I/O than CPU, you can use a pretty high value in this case. The memory should also be taken into account – in this case, it's very low since the ping task is not using a lot of memory.
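
If in doubt, the simplest approach is to measure: run the same batch of pings with a few candidate pool sizes and compare the wall-clock times. A rough sketch of such a comparison (again not from the original example, assuming the ping function above; the candidate sizes are arbitrary):

import time
from concurrent import futures

# Rough benchmark sketch: assumes ping() from the example above is defined.
hosts = ["192.168.2.%d" % i for i in range(255)]

for workers in (4, 16, 64):  # arbitrary candidate pool sizes
    start = time.perf_counter()
    with futures.ThreadPoolExecutor(max_workers=workers) as executor:
        alive = sum(executor.map(ping, hosts))
    elapsed = time.perf_counter() - start
    print("max_workers=%d: %d hosts alive in %.1f seconds" % (workers, alive, elapsed))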

Just a first step

As already said, this ping job does not use a lot of CPU time or I/O bandwidth, and neither would Alon's original ssh case. However, if that were the case, this method would hit its limits pretty quickly. Threads are not always the best option to maximize your throughput, especially with Python.

These are just the first steps of the distribution and scalability mechanism that you can implement using Python. There are a few other options available on top of this mechanism – I've covered those in my book Scaling Python, if you're interested in learning more!

Until then, stay tuned for the next article of this series!

Planet DebianJonathan Dowland: Imaging DVD-Rs: Overview and Step 1

Example of a degraded DVD-R

Moiré-like degradation on a commercial CD-ROM

From the late 1990s until relatively recently, I used to archive digital material onto CD-Rs, and later DVD-Rs. I've got about half a dozen spindles of hundreds of home-made discs with all sorts of stuff on them. A project to import them to a more future-proof storage location is long overdue. If you are in a similar position, consider a project like this as soon as possible: based on my experiences it might already be too late for some of them. The adjacent pictures were created with the ddrescueview software and show both a degraded home-made DVD-R and a degraded commercially pressed CD-ROM. I came across both as I embarked on this project.

The process can be divided into roughly five stages:

  1. Gather the discs & preparation
  2. Initial import of the discs
  3. Figuring out disc contents
  4. Retrying damaged/degraded discs
  5. Organising the extracted files

If you are importing a lot of discs, the stages can run like a pipeline, where you perform steps 3 onwards for some images whilst you are beginning stage 2 for others.

I'll be writing in more depth about each step separately. To start, here's the preparatory step.

Preparation (gather the discs)

Fetch all the discs you want to read together into one place.

I had some in jewel cases and others on the spindles that the blank discs come on. I had some at my house, some in boxes at my parents house and more at work. I decided to consolidate most of the discs down onto the spindles and throw away most of the jewel cases. If you suspect a particular disc as having particularly valuable data on it, you may wish to leave it in a jewel case. You might also want to hang onto one or two empty jewel cases, if there's a chance you'll happen upon a disc you want to give to someone else.

discs in progress
a pile of CDs and DVDs

I dedicated an initially-empty spindle as the "done" spindle, opting to keep the imported discs, at least until the import process was complete. I also kept a second "needs attention" spindle for discs that couldn't be read successfully straight away. I labelled both using a label-maker.

Don't trust disc labels: If in doubt, put the disc in your "to image" pile. Don't throw a disc out on the basis of the label alone. I had a bad habit of topping up a disc which was mostly for one thing with other data if there was space left over. Mistakes can also be made, and I had plenty of unlabelled discs anyway.

You're going to need a computer with sufficient storage space upon which to store the disc images, metadata and/or the data within them, once you start organising it. You're also going to need an optical drive to read them. If you haven't yet got a system in place for reliably storing your data and managing backups, it would be worth sorting that out first before embarking on a project like this.

I attached a USB drive to my NAS and did the importing and storing directly onto it.

Finally, this is going to take time. In the best case, discs read quickly and reliably, and the time is spent simply inserting them and ejecting them. In the worst case, you might have troublesome discs that you really want to attempt to read everything from, which can take a great deal of (unattended) time.


Next up is the initial import stage. I'll try to get my notes on that transformed into the next blog post soon.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main February 2018 Meeting: Linux.conf.au report

Feb 6 2018 18:30
Feb 6 2018 20:30
Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, February 6, 2018
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

  • Russell Coker and others, LCA conference report

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 6, 2018 - 18:30

CryptogramNew Malware Hijacks Cryptocurrency Mining

This is a clever attack.

After gaining control of the coin-mining software, the malware replaces the wallet address the computer owner uses to collect newly minted currency with an address controlled by the attacker. From then on, the attacker receives all coins generated, and owners are none the wiser unless they take time to manually inspect their software configuration.

So far it hasn't been very profitable, but it -- or some later version -- eventually will be.

Worse Than FailureCoded Smorgasbord: Archive This

Michael W came into the office to a hair-on-fire freakout: the midnight jobs failed. The entire company ran on batch processes to integrate data across a dozen ERPs, mainframes, CRMs, PDQs, OMGWTFBBQs, etc.: each business unit ran its own choice of enterprise software, but then needed to share data. If they couldn’t share data, business ground to a halt.

Business had ground to a halt, and it was because the archiver job had failed to copy some files. Michael owned the archiver program, not by choice, but because he got the short end of that particular stick.

The original developer liked logging. Pretty much every method looked something like this:

public int execute(Map arg0, PrintWriter arg1) throws Exception {
    Logger=new Logger(Properties.getString("LOGGER_NAME"));
    Log=new Logger(arg1);
    .
    .
    .
catch (Exception e) {
    e.printStackTrace();
    Logger.error("Monitor: Incorrect arguments");
    Log.printError("Monitor: Incorrect arguments");
    arg1.write("In Correct Argument Passed to Method.Please Check the Arguments passed \n \r");
    System.out.println("Monitor: Incorrect arguments");
}

Sometimes, to make the logging even more thorough, the catch block might look more like this:

catch(Exception e){
    e.printStackTrace();
    Logger.error("An exception happened during SFTP movement/import. " + (String)e.getMessage());
}

Java added Generics in 2004. This code was written in 2014. Does it use generics? Of course not. Every Hashtable is stringly-typed:

Hashtable attributes;
.
.
.
if (((String) attributes.get(key)).compareTo("1") == 0 | ((String) attributes.get(key)).compareTo("0") == 0) { … }

And since everything is stringly-typed, you have to worry about case-sensitive comparisons, but don’t worry, the previous developer makes sure everything’s case-insensitive, even when comparing numbers:

if (flag.equalsIgnoreCase("1") ) { … }

And don’t forget to handle Booleans…

public boolean convertToBoolean(String data) {
    if (data.compareToIgnoreCase("1") == 0)
        return true;
    else
        return false;
}

And empty strings…

if(!TO.equalsIgnoreCase("") && TO !=null) { … }

Actually, since types are so confusing, let’s make sure we’re casting to know-safe types.

catch (Exception e) {
    Logger.error((Object)this, e.getStackTraceAsString(), null, null);
}

Yes, they really are casting this to Object.

Since everything is stringly typed, we need this code, which checks to see if a String parameter is really sure that it’s a string…

protected void moveFile(String strSourceFolder, Object strSourceObject,
                     String strDestFolder) {
    if (strSourceObject.getClass().getName().compareToIgnoreCase("java.lang.String") == 0) { … }
    …
}

Now, that all was enough to get Michael’s blood pressure up, but none of that had anything to do with his actual problem. Why did the copy fail? The logs were useless, as they were spammed with messages with no particular organization. The code was bad, sure, so it wasn’t surprising that it crashed. For a little while, Michael thought it might be the getFiles method, which was supposed to identify which files needed to be copied. It did a recursive directory search (with no depth checking, so one symlink could send it into an infinite loop), and it didn’t actually filter files that it didn’t care about. It just made an ArrayList of every file in the directory structure and then decided which ones to copy.

He spent some time really investigating the copy method, to see if that would help him understand what went wrong:

sourceFileLength = sourceFile.length();
newPath = sourceFile.getCanonicalPath();
newPath = newPath.replace(".lock", "");
newFile = new File(newPath);
sourceFile.renameTo(newFile);                    
destFileLength = newFile.length();
while(sourceFileLength!=destFileLength)
{
    //Copy In Progress
}
//Remy: I didn't elide any code from the inside of that while loop- that is exactly how it's written, as an empty loop.

Hey, out of curiosity, what does the JavaDoc have to say about renameTo?

Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.

It only throws exceptions if you don’t supply a destination, or if you don’t have permissions to the files. Otherwise, it just returns false on a failure.

So… if the renameTo operation fails, the archiver program will drop into an infinite loop. Unlogged. Undetected. That might seem like the root cause of the failure, but it wasn’t.
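Checking the result, or better yet letting java.nio.file.Files.move throw on failure, would at least have made the problem visible and killed the busy-wait. Here is a rough sketch, assuming the intent really was just to strip the .lock suffix; the class and method names are invented for illustration:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

class UnlockSketch {
    // Rename foo.lock to foo; failure surfaces as an exception that can be
    // logged, instead of silently returning false and spinning forever.
    static Path unlock(Path lockedFile) throws IOException {
        Path target = Paths.get(lockedFile.toString().replace(".lock", ""));
        return Files.move(lockedFile, target, StandardCopyOption.REPLACE_EXISTING);
    }
}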

As it turned out, the root cause was that someone in ops hit “Ok” on a security update, which triggered a reboot, disrupting all the scheduled jobs.

Michael still wanted to fix the archiver program, but there was another problem with that. He owned the InventoryArchiver.jar. There was also OrdersArchiver.jar, and HRArchiver.jar, and so on. They had all been “written” by the same developer. They all did basically the same job. So they were all mostly copy-and-paste jobs with different hard-coded strings to specify where they ran. But they weren’t exactly copy-and-paste jobs, so each one had to be analyzed, line by line, to see where the logic differences might possibly crop up.


Planet DebianLouis-Philippe Véronneau: Long Live the Memory of Ursula K. Le Guin

Today, one of my favorite authors passed away.

I stumbled upon Le Guin's work about 10 years ago when my uncle gave me a copy of The Left Hand of Darkness and since then, I've managed to lose and buy this novel three or four times. I'm normally very careful with how I manage my book collection and the only way I can explain how I lost this book so many times is that I must have been too eager to share it with my entourage.

My weathered copy of The Dispossessed

Very tasteful eulogies have sprung up all day long and express far better than I can how much of a genius Ursula K. Le Guin was.

So here is my humble homage to her. In 1987, the CBC radio show Vanishing Point adapted what is to me Le Guin's best novel, The Dispossessed, into a series of 30-minute episodes.

The result is far less interesting than the actual novel, but if you are the kind of person who enjoys audiobooks, it might just be what convinces you to pick up the book.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #143

Here's what happened in the Reproducible Builds effort between Sunday January 14 and Saturday January 20 2018:

Upcoming events

Packages reviewed and fixed, and bugs filed

During reproducibility testing, 83 FTBFS bugs have been detected and reported by Adrian Bunk.

Reviews of unreproducible packages

56 package reviews have been added, 44 have been updated and 19 have been removed in this week, adding to our knowledge about identified issues.

diffoscope development

Furthermore, Juliana Oliveira has been working in a separate branch on parallelizing diffoscope.

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianLaura Arjona Reina: It’s 2018, where’s my traditional New Year Plans post?

I closed my eyes, opened them again, a new year began, and we’re even almost finishing January. Time flies.

In this article I'll post some updates about my life with computers, software and free software communities. It's more a "what I've been doing" than a "new year plans" post… it seems that I'm learning not to make so many plans (life comes to break them anyway!).

At home

My home server is still running Debian Jessie. I’m happy that it just works and my services are up, but I’m sad that I couldn’t find time for an upgrade to Debian stable (which is now Debian 9 Stretch) and maybe reinstall it with another config. I have lots of photos and videos to upload in my GNU MediaGoblin instances, but also couldn’t find time to do it (nor to print some of them, which was a plan for 2017, and the files still sleep in external harddrives or DVDs). So, this is a TODO item that crossed the year (yay! now I have almost 12 months ahead to try to complete it!). I’ll try to get this done before summer. I am considering installing my own pump.io instance but I’m not sure it’s good to place it in the same machine as the other services. We’ll see.

I bought a new laptop (well, second hand, but in a very good condition), a Lenovo X230, and this is now my main computer. It’s an i5 with 8 GB RAM. Wow, modern computer at home!
I’m very very happy with it, with its screen, keyboard, and everything. It’s running a clean install of Debian 9 stable with KDE Plasma Desktop and works great. It is not heavy at all so I carry it to work and use it in the public transport (when I can sit) for my contributions to free software.

My phone (Galaxy S III with Lineage OS 14 which is Android 7) fell down and the touchscreen broke (I can see the image but it is unresponsive to touch). When booted normally, the phone is recognized by the PC as storage, and thus I could recover most of the data on it, but it's not recognized by adb (as when USB debugging is disabled). It is recognized by adb when booted into Recovery (TWRP), though. I tried to enable USB debugging in several ways from adb while in Recovery, but couldn't. I could switch off the wifi, though, so when I boot the phone it does not receive new messages, etc. I bought an OTG cable but I have no wireless mouse at home and couldn't make it work with a normal USB mouse. I've given up for now until I find a wireless mouse or have more time, and temporarily returned to using my old Galaxy Ace (with CyanogenMod 7 which is Android 2.3.7). I've looked at new phones but I don't like that all of them have integrated batteries, the screens are too big, all of them are very expensive (I know they are hi-tech machines, but I don't want to carry such valuable stuff all the time in my pocket) and other things. I still need to find time to go shopping with the list of phones where I can install Lineage OS (I already visited some stores but didn't get convinced by the price, or they had no suitable models).

My glasses broke (in a different incident than the phone) and I used old ones for two weeks, because in the middle of the new ones preparation I had some family issues to care about. So putting time in reading or writing in front of the computer has been a bit uncomfortable and I tried to avoid it in the last weeks. Now I have new glasses and I can see very well 🙂 so I’m returning to my computer TODO.

I’ve given up the battle against iThings at home (I lost). I don’t touch them but other members of the family use them. I’m considering contributing to Debian info about testing things or maintaining some wiki pages about accessing iThings from Debian etc, but will leave that for summer, maybe later. Now I just try not to get depressed about this.

At work

We still have servers running Debian Wheezy, which is in LTS support until May. I'm confident that we'll upgrade before Wheezy reaches end of life, but frankly, looking at my work plan, I'm not sure when. Every month seems packed with other stuff. I've taken some weeks of leave to attend to my family and I have no clear idea about when and how to do things. We'll see.

I gave a course about free software (in Spanish) for University staff last October. It was 20 hours, and 20 attendants, mostly administrative staff, librarians, and some IT assistants. It went pretty well, we talked about the definition of free software, history, free culture, licenses, free software tools for the office, for Android, and free software as a service (“cloud” stuff). They liked it very much. Many of them didn’t know that our Uni uses free software for our webmail (RoundCube), Cloud services (OwnCloud), and other important areas. I requested promotional material from the FSFE and I gave away many stickers. I also gave away all the Debian stickers that I had, and some other free software stickers. I’m not sure when and how I will get new Debian stickers, not sure if somebody from Madrid is going to FOSDEM. I’m considering printing them myself but I don’t know a good printer (for stickers) here. I’ll ask and try with a small investment, and see how it works out.

Debian

I think I have too many things in my plate and would like to close some stuff and focus on other, or maybe do other things.

I feel comfortable doing publicity work, but I would be happier if the team gets bigger and we have more contributors. I'm happy that we managed to publish a Debian Project News issue in DebConf17, a new one in September, and a new one in November, but since then I couldn't find time to put into it. I'll try to make a new issue happen before February ends, though. Meanwhile, the team has managed to handle the different announcements (point releases and others) and we try to keep the community informed via micronews (mostly) and the blog bits.debian.org.

I’m keeping an eye on DebConf18 organization and I hope I can engage with publicity work about it, but I feel that we will need a local team member that leads the what-to-publish/when-to-publish and probably translations too.

About Spanish translations, I'm very happy that the translations for the Debian website have new contributors and reviewers who are doing really good work. In the last months I've been a bit behind, just trying to review and keep my files up to date, but I hope I can set up a routine in the following weeks to get more involved again, and also try to translate new files too.

For some time now, the Debian website work has been the one that keeps my motivation in Debian up. It's like a paradox because the Debian website is too big, complicated, old in some sense, and we have so much stuff that needs to be done, and so many people complaining or giving ideas (without patches) that one would get overwhelmed, depressed and sometimes would like just to resign from this team. But after all these years, it is only now that I feel comfortable with the codebase and experienced enough to try things, review bugs, and try to help with the things needed. So I'm happy to put time in the website team, updating or improving the website, even when I make mistakes, or triage bugs. Also, working in the website is very rewarding because there is always some small thing that I can do to fix something, and thus, "get something done" even when my time is limited. The bad news is that there are also some big tasks that require a lot of time and motivation, and I get them postponed and postponed… 😦 At least, I try to file bugs for all the stuff that I would like to put time on, and maybe slowly, but thanks to all the team members and other contributors, we are advancing: we have a more updated /partners section (still needs work), a new /derivatives section, and we are working on the migration from CVS to Git, the reorganization of the download pages, and other stuff.

Sometimes I'd like to do other/new things in Debian. Learn to package (and thus package spigot and gnusrss, used in Publicity, or weewx, which we use at work, and also help maintain or adopt some small things), or join the Documentation Team, or put more work in the Outreach Team (relaunch the Welcome Team), or put more work in the Internationalization Team. Or maybe other stuff. But before that, I feel that I would need to finish some pending tasks in my current teams, and also find more people for them, too.

Other free software communities

I am still active in the pump.io community, although I don't post very often from my social network account. I'll try to open Dianara more often, and use Puma on my new phone (maybe I should adopt/fork Puma…). I am present in the IRC channel (#pump.io on Freenode) and try to organize and attend the meetings. I have a big TODO which is to advance our application to join Software Freedom Conservancy (another item that crossed the TODO from 2017 to 2018) but I'll really try to get this done before January ends.

I keep on testing F-Droid and free software apps for Android (now again in Android 2.x, I get F-Droid crashes all the time “OutofMemory” :D). I keep on reading the IRC channels and mailing list (also the mailing list for Replicant. If I get the broken phone to work with the OTG I will install Replicant on it and will keep it for tests). I keep on translating Android apps when I have some time to kill.

I have no idea who is going to FOSDEM and if I should talk to them prior to their travel (e.g. ask to bring Debian stickers for me if somebody from Madrid goes, or promote if there is any F-Droid or Pump.io or GNU MediaGoblin IRC meeting or talk or whatever) but I really got busy in December-January with life and family stuff, so I just left FOSDEM apart in my mind and will try to join and see the streaming the weekend that the conference is happening, or maybe later.

I think that’s all, or at least this blogpost became very long and I don’t find anything else to write, for now, to make it longer. In any case, it’s hard for me these days to make plans more than one-two weeks ahead. Hopefully I’ll write in my blog more often during this year.

Comments?

You can comment on this post using this pump.io thread.

Planet DebianBenjamin Mako Hill: Introducing Computational Methods to Social Media Scientists

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit.

We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research have been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods to analyze the field of social media research.

A “hairball” diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier’s Scopus API.  We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data.

We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent.

More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process—from data collection to data processing to data analysis—can often be made accessible to others. This has both scientific benefits and pedagogical benefits.

To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse.

Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself.

If you  use our chapter for teaching about computational methods—or if you find bugs or errors in our work—please let us know! We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes!


The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 2 – Keynote – Matthew Todd

Collaborating with Everybody: Open Source Drug Discovery

  • Term used is a bit undefined. Open Source, Free Drugs?
  • First Open Source Project – Praziquantel
    • Molecule has 2 mirror image forms. One does the job, other tastes awful. Pills were previously a mix
    • Project to just have pill with the single form
      • Created discussion
      • Online Lab Notebook
      • 75% of contributions were from private sector (especially Syncom)
      • Ended up finding an approach that worked, different from what was originally proposed, based on feedback.
      • Similar method found by private company that was also doing the work
  • Conventional Drug discovery
    • Find drug that kills something bad – Hit
    • Test it and see if it is suitable – Lead
    • 13,500 molecules in the public domain that kill the malaria parasite
  • 6 Laws of Open Science
    • All data is open and all ideas are shared
    • Anyone can take part at any level of the project
  • Openness increasingly seen as a key
  • Open Source Malaria
    • 4 campaigns
    • Work on a molecule, park it when doesn’t seem promising
    • But all data is still public
  • What it actually is
    • Electronic lab book (80% of scientists still use paper)
    • Using Labtrove, changing to labarchives
    • Everything you do goes up every day
    • Todo list
      • Tried stuff, ended up using issue list on github
      • Not using most other github stuff
    • Data on a Google Sheet
    • Light Website, twitter feed
  • Lab vs Code
  • Have a promising molecule – works well in mice
    • Would probably be a patentable state
    • Not sure yet exactly how it works
  • Competition – Predictive model
    • Lots of solutions submitted, not good enough to use
    • Hopeful a model will be created
  • Tried a known-working molecule from elsewhere, but couldn’t get it to work
    • This is out in the open. Lots of discussion
  • School group able to recreate Daraprim, a high-priced US drug
  • Public Domain science is now accepted for publications
  • Need to make computers understand molecule diagrams and convert them to a representative format which can then be searched.
  • Missing
    • Automated links to databases in tickets
    • Basic web page stuff, auto-porting of data, newsletter, become non-profit, stickers
    • Stuff is not folded back into the Wiki
  • OS Mycetoma – New Project
    • Fungus with no treatment
    • Working on possible molecule to treat
  • Some ideas on how to get products created this way to market – eg “data exclusivity”

 


,

Planet DebianBits from Debian: Mentors and co-mentors for Debian's Google Summer of Code 2018

GSoC logo

Debian is applying as a mentoring organization for the Google Summer of Code 2018, an internship program open to university students aged 18 and up.

Debian already has a wide range of projects listed but it is not too late to add more or to improve the existing proposals. Google will start reviewing the ideas page over the next two weeks and students will start looking at it in mid-February.

Please join us and help extend Debian! You can consider listing a potential project for interns or listing your name as a possible co-mentor for one of the existing projects on Debian's Google Summer of Code wiki page.

At this stage, mentors are not obliged to commit to accepting an intern but it is important for potential mentors to be listed to get the process started. You will have the opportunity to review student applications in March and April and give the administrators a definite decision if you wish to proceed in early April.

Mentors, co-mentors and other volunteers can follow an intern through the entire process or simply volunteer for one phase of the program, such as helping recruit students in a local university or helping test the work completed by a student at the end of the summer.

Participating in GSoC has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreachy mailing list.

Planet DebianLars Wirzenius: Ick: a continuous integration system

TL;DR: Ick is a continuous integration or CI system. See http://ick.liw.fi/ for more information.

More verbose version follows.

First public version released

The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code and .deb packages and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not be a lot of effort to set up, require a lot of hardware just for the CI, need frequent attention for it to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control.

(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.

  • A build may be triggered by a variety of events. Time is an obvious event, as is source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

  • Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

  • Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

  • Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller.

  • Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)

  • Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.

  • Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."

Please give feedback

If you try ick, or even if you've just read this far, please share your thoughts on it. See the contact page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.

CryptogramSkygofree: New Government Malware for Android

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That's not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn't respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

Cory DoctorowMy keynote from ConveyUX 2017: “I Can’t Let You Do That, Dave.”

“The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

—Cory Doctorow

ConveyUX is pleased to feature author and activist Cory Doctorow to close out our 2017 event. Cory’s body of work includes fascinating science fiction and engaging non-fiction about the relationship between society and technology. His most recent book is Information Doesn’t Want to be Free: Laws for the Internet Age. Cory will delve into some of the issues expressed in that book and talk about issues that affect all of us now and in the future. Cory will be on hand for Q&A and a post-session book signing.

Planet DebianRenata D'Avila: Improving communication

After my last post, a lot of things happened, but what I'm going to talk about now is the thing that I believe had the most impact in improving my experience with the Outreachy internship: the changes that were made in communication, specially between me and my mentors.

When I struggled with the tasks, with moving forward, it was somewhat a wish of mine to change the ways I communicate with my mentors. (Alright, Renata, so why didn't you start by just doing that? Well, I wasn't sure where to begin.)

I didn't know how to propose something like that to my mentors, I mean... maybe that was how Outreachy was supposed to be and I just might have set different expectations? The first step to figure this out was reaching out to Anna, an Outreachy intern with Wikimedia who I'd been talking to since the interns announcement had been made.

I asked her about how she interacted with her mentors and how often, so I knew what I could ask for. She told me about her weekly meetings with her mentors and how she could chat directly with them when she ran into issues. And, indeed, things like that were what I wanted to happen.

Before I could reach out and discuss this with my mentors, though, Daniel himself read last week's post and brought up the idea of us speaking on the phone for the first time. That was indeed a good experience and I told him I would like to repeat or establish some sort of schedule to communicate with each other.

Yes, well, a schedule would be the best improvement, I think. It's not just about the means by which we communicate (phone call or IRC, for instance), but about knowing that, at some point, either once per week or bi-weekly, there would be someone to talk to at a set time so I could untie any knots that were created during my internship (if that makes sense). I know I could just send an email at any time to my mentors (and sometimes I do) and they would reply, but that's not quite the point.

So, to make this short: I started to talk to one of my mentors daily and it's been really helpful. We are working on a schedule for bi-weekly calls. And we always have e-mails. I'm glad to say that now I talk not just with mentors, but also with fellow Brazilian Outreachers and former participants and everyone is willing to help out.

For all the ways to reach me, you can look up my Debian wiki profile.

Planet DebianThomas Lange: FAI.me build service now supports backports

The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.

Currently, the FAI.me service offers images built with Debian stable, stable with backports, and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.8: Strictly maintenance

Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently-used) subscription mode which Whit cooked up on the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)

  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232)

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257)

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramDark Caracal: Global Espionage Malware from Lebanon

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that's been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There's a lot in the full report. It's worth reading.

Three news articles.

Worse Than FailureAlien Code Reuse

“Probably the best thing to do is try and reorganize the project some,” Tim, “Alien”’s new boss, said. “It’s a bit of a mess, so a little refactoring will help you understand how the code all fits together.”

“Alien” grabbed the code from git, and started walking through the code. As promised, it was a bit of a mess, but partially that mess came from their business needs. There was a bunch of common functionality in a Common module, but for each region they did business in- Asia, North America, Europe, etc.- there was a region specific deployable, each in its own module. Each region had its own build target that would include the Common module as part of the build process.

The region-specific modules were vaguely organized into sub-modules, and that’s where “Alien” settled in to start reorganizing. Since Asia was the largest, most complicated module, they started there, on a sub-module called InventoryManagement. They moved some files around, set up the module and sub-modules in Maven, and then rebuilt.

The Common library failed to build. This gave “Alien” some pause, as they hadn’t touched anything pertaining to the Common project. Specifically, Common failed to build because it was looking for some files in the Asia.InventoryManagement sub-module. Cue the dive into the error trace and the vagaries of the build process. Was there a dependency between Common and Asia that had gone unnoticed? No. Was there a build-order issue? No. Was Maven just being… well, Maven? Yes, but that wasn’t the problem.

After hunting around through all the obvious places, “Alien” eventually ran an ls -al.

~/messy-app/base/Common/src/com/mycompany > ls -al
lrwxrwxrwx 1 alien  alien    39 Jan  4 19:10 InventoryManagement -> ../../../../../Asia/InventoryManagement/src/com/mycompany/IM/
drwxr-x--- 3 alien  alien  4096 Jan  4 19:10 core/

Yes, that is a symbolic link. A long-ago predecessor discovered that the Asia.InventoryManagement sub-module contained some code that was useful across all modules. Actually moving that code into Common would have involved refactoring Asia, which was the largest, most complicated module. Presumably, that sounded like work, so instead they just added a sym-link. The files actually lived in Asia, but were compiled into Common.

“Alien” writes, “This is the first time in my over–20-year working life I see people reuse source code like this.”

They fixed this, and then went hunting, only to find a dozen more examples of this kind of code “reuse”.


Planet Linux AustraliaJames Morris: LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Linux.conf.au Kernel Miniconf in Sydney — the slides from the talk are here: http://namei.org/presentations/selinux_namespacing_lca2018.pdf

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via https://github.com/stephensmalley/selinux-kernel/tree/selinuxns.  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up! See:

Planet DebianDaniel Pocock: Keeping an Irish home warm and free in winter

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

bare home

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

Planet DebianNorbert Preining: Continuous integration testing of TeX Live sources

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). It integrates several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX specific projects (LuaTeX, pdfTeX etc), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted to have continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or the one I know) route via Github and one of the CI testing providers didn’t come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, Github for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

  • git-svn I use git-svn to check out only the source part of the (otherwise far too big) subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part.
  • Github The git-svn checkout is pushed to the project TeX-Live/texlive-source on Github.
  • Travis-CI The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!)

Although this sounds easy, there are a few stumbling blocks: First of all, the .travis.yml file is not contained in the main subversion repository. So adding it to the master tree that is managed via git-svn does not work, because the history is rewritten (git svn rebase). My solution was to create a separate branch travis-ci which adds only the .travis.yml file and merges master.

Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on Github, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain

language: c

branches:
  except:
  - master

before_script:
  - find . -name \*.info -exec touch '{}' \;

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev

script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into travis-ci branch, and pushes everything to Github. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e

TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"

quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)

    if ! $GIT "$@" >$stdout 2>$stderr; then
	echo "STDOUT of git command:"
	cat $stdout
	echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi

    rm -f $stdout $stderr
}

cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here because we only built the 
# last commit, which would stop building
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we can do CI testing of our changes in the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too.

Enjoy.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only works with javascript really
    • hacks to work with other languages but not great
  • Isomorphic javascript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have python on the server you need python on the client
    • brython, skulp, pypy.js
    • <script type="text/python">
    • Note: Not python being compiled into javascript. Python is run in the browser
    • Need to download full python interpreter though (500k-15M)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than cpython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70y of experience with assembly languages
    • Close to Cpython speed
    • But
      • Not quite on browsers
      • No garbage collection
      • Cannot manipulate DOM
      • But both coming soon
  • Example: http://bit.ly/covered-in-bees
  • But “possible isn’t enough”
  • pybee.org
  • pybee.org/bee/join

Using “old skool” Free tools to easily publish API documentation – Alec Clew

  • https://github.com/alecthegeek/doc-api-old-skool
  • You API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have english as first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentence simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, base/sed/python/etc

 

Lightning Talks

  • Nic – Alt attribute
    • need to be added to images
    • Don’t have alts when images as links
    • http://bit.ly/Nic-slides
  • Vaibhav Sager – Travis-CI
    • Builds codes
    • Can build websites
    • Uses to build Resume
    • Build presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for small case study
    • Implemented simple xmlrpc client in 5 languages
    • Python and Go were straightforward, each had one simple trick (40-50 lines)
    • C was 100 lines. A lot harder. Conversions, etc all manual
    • PHP wasn’t too hard. easier in modern vs older PHP
  • Daurn
    • Lua
    • Fengari.io – Lua in the browser
  • Alistair
    • How not to docker ( don’t trust the Internet)
    • Don’t run privileged
    • Don’t expose your docker socket
    • Don’t use host network mode
    • Don’t where your code is FROM
    • Make sure your kernel on your host is secure
  • Daniel
    • Put proxy in front of the docker socket
    • You can use it to limit what no-priv users with socket access to docker port can do

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, but unable to add features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added by just putting “+whatever” on the command
    • Great for searching
    • Can put all people or all types of jobs together
  • Meta Tags
    • Automatic date related (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • has an auto timestamp
    • show by default, or just show a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show task until a date (approx)
    • Hide a task for an amount of time
    • Scheduled tasks’ urgency boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • good for scripting, create a whole bunch of related tasks for a project
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with github, trello, gmail, jira, trac, bugzilla and lots of things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder it has a steep learning curve.

Love thy future self: making your systems ops-friendly Matt Palmer

  • Instrumentation (a small code sketch of these counters follows at the end of these notes)
  • Instrumenting incoming requests
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by request/error)
    • How long it took (broken down by success/errors)
    • How many right now
  • Get number of in-progress requests, average time etc
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of request sent
    • how many responses we’ve received (broken down by success/err)
    • How long it took to get the response (broken down by request/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • Logs tend to cost more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at webscale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation

 


Planet DebianShirish Agarwal: PrimeZ270-p, Intel i7400 review and Debian – 1

This is going to be a biggish one as well.

This is a continuation from my last blog post .

Before diving into installation, I had been reading Matthew Garrett’s work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org, hence it is easy to get some idea of what needs to be done, although I have told him (I think I even shared it here) that he should somehow make his site more easily navigable. Trying to find posts on either ‘GPT’ or ‘UEFI’ and to have those posts sorted date-wise, ascending or descending, is not possible; at least I couldn’t find a way to do it, as he doesn’t organize them date-wise or something.

The closest I could come to it is using ‘$keyword’ site:https://mjg59.dreamwidth.org/ via a search-engine and going through the entries shared therein. This doesn’t mean I don’t value his contribution. It is, in fact, the opposite. AFAIK he was one of the first people who drew the community’s attention when UEFI came in and only Microsoft Windows could be booted on them, nothing else.

I may be wrong but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process.

While I’m sure Matthew’s understanding may have evolved significantly from what he had shared before, it was two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on it.

I went over to a friend’s house who had Windows 7 running, used diskpart and did the change to GPT using the Windows TechNet article.

I had to go the GPT way as I understood that MS-Windows takes all four primary partitions for itself, leaving nothing for any other operating system to use.

I did the conversion to GPT and tried to install MS-Windows 10, as my current motherboard and all future motherboards from Intel Gen7/Gen8 onwards do not support anything less than Windows 10. I did see an unofficial patch floating around on github somewhere but have now lost the reference to it. I had read some of the bug-reports of the repo, which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let’s say interesting.

Now a friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant: it had been a long time since I had worked with MS-Windows and I didn’t know if I could do it or not, and the other concern was a suspicion that I might like it too much. While I did review it, I found –

a. It is one heck of a piece of bloatware – I had thought MS-Windows would have learned by now, but no, they still have to learn that adware and bloatware aren’t solutions. I still can’t get my head wrapped around how a 4.1 GB MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way).

Just to share as an example I still had to get something like Revo Uninstaller as MS-Windows even till date hasn’t learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way.

Edit/Update – It still doesn’t have Fall Creators Update which is still supposed to be another 4 GB+ iso which god only knows how much space that will take.

b. It’s still not gold – With all the hoopla and ads around MS-Windows 10 that I had been hearing and seeing, I was under the impression that MS-Windows 10 had turned gold, i.e. that it had had a proper release, the way Debian will have ‘buster’ sometime around next year, probably around or after DebConf 2019 is held. Instead, the Windows 10 release Microsoft was talking about would only come around July 2018, so it’s still a few months off.

c. I had read an insightful article a few years ago by a junior Microsoft employee sharing/emphasizing why MS cannot do GNU/Linux-style volunteer/bazaar development. To put it in not so many words, it came down to the cultural differences in the way the two communities operate. In GNU/Linux, one more patch or one more pull request is encouraged; it may be integrated in that point release, and if it can’t be, it will be in the next point release (unless it changes something much more core/fundamental which needs more in-depth review). MS-Windows, on the other hand, actively discourages that sort of behavior as it means more time for integration and testing, and from the sound of it MS still doesn’t do Continuous Integration (CI), regression testing etc. as is becoming more and more common in many GNU/Linux projects.

I wish I could have shared the article but I don’t have the link anymore. @Lazyweb, if you would be so kind as to help find that article. The developer had shared some sort of ssh credentials or something to prove who he was, which he later removed, probably because the consequences to him of sharing that insight were not worth it, although the writing seemed to be valid.

There were many more quibbles, but I have shared the main ones above. For example, copying files from hdd to usb disks doesn’t tell you how much time it will take, while in Debian I’ve come to take having a time estimate for any operation as a given.

Before starting on the main issue, some info beforehand, although I don’t know how relevant that info might be –

Prime Z270-P uses EFI 2.60 by American Megatrends –

/home/shirish> sudo dmesg | grep -i efi
[sudo] password for shirish:
[ 0.000000] efi: EFI v2.60 by American Megatrends

I can share more info. if needed later.

Now, as I understood/interpreted the info found on the web and from experience, Microsoft makes quite a few more partitions than necessary to get MS-Windows installed.

This is how it stacks up/shows up –

> sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: xxxxxxxxxxxxxxxxxxxxxxxxxxx

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data
/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem
/dev/sda6 3718232064 5280731135 1562499072 745.1G Linux filesystem
/dev/sda7 5280731136 7761199103 2480467968 1.2T Linux filesystem
/dev/sda8 7761199104 7814035455 52836352 25.2G Linux swap

I had allotted 2 GB for /boot in the MS-Windows installer, as I had thought it would take only some of that space and leave the rest for Debian GNU/Linux’s /boot to put its kernel entries, memory-checking tools and whatever else I wanted to have under /boot, but for some reason I have not yet understood, that didn’t work out as I expected.

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data

As seen above, the first four partitions are taken by MS-Windows itself. I just wish I had understood how to use GPT disklabels properly so I could figure things out better, but for reasons not fully understood the EFI partition (which I suspect is where /boot ended up) is a lowly 100 MB even though I asked for 2 GB. Is that UEFI’s doing, Microsoft’s doing or some default setting, I don’t know. Having the EFI partition this small hampers the way I want to do things, as will become clear in a short while.

After I installed MS-Windows, I installed Debian GNU/Linux using the net install method.

The following is what I had put down on a piece of paper as the partitions for GNU/Linux –

/boot – 512 MB (should be enough to accommodate a couple of kernel versions, memory-checking and any other tools I might need in the future).

/ – 700 GB – well, admittedly that looks a bit insane, but I do like to play with new programs/binaries as and when possible and don’t want to run out of space when I forget to clean up.

[off-topic, wishlist] One tool I would like to have (and I don’t know if it exists) is the ability to know when I installed a package, how many times and how frequently I have used it, and the ability to add small notes or a description to the package. Many times I have seen that the package description is either too vague or doesn’t focus on the practical usefulness of the package to me.

An easy example to share what I mean would be the apt package –

aptitude show apt
Package: apt
Version: 1.6~alpha6
Essential: yes
State: installed
Automatically installed: no
Priority: required
Section: admin
Maintainer: APT Development Team
Architecture: amd64
Uncompressed Size: 3,840 k
Depends: adduser, gpgv | gpgv2 | gpgv1, debian-archive-keyring, libapt-pkg5.0 (>= 1.6~alpha6), libc6 (>= 2.15), libgcc1 (>= 1:3.0), libgnutls30 (>= 3.5.6), libseccomp2 (>=1.0.1), libstdc++6 (>= 5.2)
Recommends: ca-certificates
Suggests: apt-doc, aptitude | synaptic | wajig, dpkg-dev (>= 1.17.2), gnupg | gnupg2 | gnupg1, powermgmt-base, python-apt
Breaks: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~), aptitude (< 0.8.10)
Replaces: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~)
Provides: apt-transport-https (= 1.6~alpha6)
Description: commandline package manager
This package provides commandline tools for searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library.

These include:
* apt-get for retrieval of packages and information about them from authenticated sources and for installation, upgrade and removal of packages together with their dependencies
* apt-cache for querying available information about installed as well as installable packages
* apt-cdrom to use removable media as a source for packages
* apt-config as an interface to the configuration settings
* apt-key as an interface to manage authentication keys

Now while I love all the various tools that the apt package has, I do have special fondness for $apt-cache rdepends $package

as it gives another overview of a package, library or shared library that I may be interested in, and of which other packages are in its orbit.
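For instance, to see which packages pull in apt itself, something like this works (illustrative; any package name can be substituted):

apt-cache rdepends apt

The output is simply a ‘Reverse Depends:’ list of package names, one per line.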

Over a period of time it becomes easy to forget packages that you don’t use day-to-day, hence having a tool like that, where you can put personal notes about packages, would be a god-send. Another use could be reminders of tickets posted upstream or something along those lines. I don’t know of any tool/package which does something on those lines. [/off-topic, wishlist]

/home – 1.2 TB

swap – 25.2 GB

I admit I went a bit overboard on swap space, but as and when I get more memory I should at least have swap at a 1:1 ratio, right? I am not sure if the old rules still apply or not.

Then I used the Debian buster alpha 2 netinstall ISO https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/debian-buster-DI-alpha2-amd64-netinst.iso and put it on a USB stick. I used sha1sum to verify that the downloaded ISO matched the checksums published at https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/SHA1SUMS

After that, a simple dd with the appropriate if= and of= arguments was enough to copy the netinstall image to the USB stick.
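Something along these lines (a sketch rather than the exact commands I ran; /dev/sdX stands for whatever device the stick shows up as, so check with lsblk first, since dd will happily overwrite the wrong disk):

sha1sum debian-buster-DI-alpha2-amd64-netinst.iso    # compare against the SHA1SUMS file
sudo dd if=debian-buster-DI-alpha2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress
sync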

I did have some issues with the installation which I’ll share in the next post, but the most critical issue was that I had to make a /boot again, and even though I made /boot a separate partition and gave 1 GB to it during the partitioning step, I got only around 100 MB and I have no idea why it is like that.

/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem

> df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 88M 68M 14M 84% /boot

home/shirish> ls -lh /boot
total 55M
-rw-r--r-- 1 root root 193K Dec 22 19:42 config-4.14.0-2-amd64
-rw-r--r-- 1 root root 193K Jan 15 01:15 config-4.14.0-3-amd64
drwx------ 3 root root 1.0K Jan 1 1970 efi
drwxr-xr-x 5 root root 1.0K Jan 20 10:40 grub
-rw-r--r-- 1 root root 19M Jan 17 10:40 initrd.img-4.14.0-2-amd64
-rw-r--r-- 1 root root 21M Jan 20 10:40 initrd.img-4.14.0-3-amd64
drwx------ 2 root root 12K Jan 1 17:49 lost+found
-rw-r--r-- 1 root root 2.9M Dec 22 19:42 System.map-4.14.0-2-amd64
-rw-r--r-- 1 root root 2.9M Jan 15 01:15 System.map-4.14.0-3-amd64
-rw-r--r-- 1 root root 4.4M Dec 22 19:42 vmlinuz-4.14.0-2-amd64
-rw-r--r-- 1 root root 4.7M Jan 15 01:15 vmlinuz-4.14.0-3-amd64

root@debian:/boot/efi/EFI# ls -lh
total 3.0K
drwx------ 2 root root 1.0K Dec 31 21:38 Boot
drwx------ 2 root root 1.0K Dec 31 19:23 debian
drwx------ 4 root root 1.0K Dec 31 21:32 Microsoft

I would be the first to say I don’t really understand this EFI business.

The only thing I do understand is that it’s good that, even without an OS, it becomes easier to see all the components and whether something you change or add would or would not work, which was not the case with BIOS. In the BIOS days, getting info on components was iffy at best.

There have been other issues with EFI which I may take up in another blog post, but for now I would be happy if somebody can share –

how to get a big /boot so it is not such a small partition for the Debian boot files. I don’t see any value in having a bigger /boot for MS-Windows unless there is a way to also get a grub2 pointer/header added to the MS-Windows bootloader. I will share the reasons for this in the next blog post.

I am open to reinstalling both MS-Windows and Debian from scratch, although that would happen when debian-buster-alpha3 arrives. Any answer to the above would give me something to try, and I will share if I get the desired result.

Looking forward to answers.

Planet DebianLouis-Philippe Véronneau: French Gender-Neutral Translation for Roundcube

Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminologies.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catch-up with how the netlabel code wants things to be done.
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when it goes outside of what it should be doing
  • PTAGS
    • General purpose process tags
    • For application use (the app can decide what it wants based on tags, not something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe file names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between smack and Apparmor

Container

  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them

Hardening

  • Printing pointers (eg in syslog)
  • Usercopy

 


,

Planet DebianDirk Eddelbuettel: #15: Tidyverse and data.table, sitting side by side ... (Part 1)

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways.

One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural datascience 1 lesson at the Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.

Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:


## Getting the polls

library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls

library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls

polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))

library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg =  rollapply(Clinton.Margin,width=14,
                                    FUN=function(x){mean(x, na.rm=TRUE)},
                                    by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average)+
  geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
  geom_point(aes(x=end_date,y=Clinton.Margin))

It uses five packages to i) read some data off them interwebs, ii) filter / subset / modify it, iii) do a right (outer) join with itself, iv) average the per-day polls and then create rolling averages over 14 days, before v) plotting. Several standard verbs are used: filter(), mutate(), right_join(), group_by(), and summarise(). One non-verb function is rollapply(), which comes from zoo, a popular package for time-series data.

Complete Code using Approach "DT"

As I will show below, we can do the same with fewer packages as data.table covers the reading, slicing/dicing and time conversion. We still need zoo for its rollapply() and of course the same plotting code:


## Getting the polls

library(data.table)
pollsDT <- fread("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv")

## Wrangling the polls

pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
pollsDT[, end_date := as.IDate(end_date)]
pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
                                              max(pollsDT[,end_date]), by="days")), on="end_date"]

## Average the polls

library(zoo)
pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), by=end_date]
pollsDT[, Clinton.Margin := Clinton-Trump]
pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right")]

library(ggplot2)
ggplot(pollsDT) +
    geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
    geom_point(aes(x=end_date,y=Clinton.Margin))

This uses several of the components of data.table which are often called [i, j, by=...]. Rows are selected (i), columns are either modified (via := assignment) or summarised (via =), and grouping is undertaken by by=.... The outer join is done by having a data.table object indexed by another, and is pretty standard too. That allows us to do all transformations in three lines. We then create per-day averages by grouping by day, compute the margin and construct its rolling average as before. The resulting chart is, unsurprisingly, the same.

Benchmark Reading

We can look at how the two approaches do on getting data read into our session. For simplicity, we will read a local file to keep the (fixed) download aspect out of it:

R> url <- "http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"
R> file <- "/tmp/poll-responses-clean.tsv"
R> download.file(url, destfile=file, quiet=TRUE)
R> res <- microbenchmark(tidy=suppressMessages(readr::read_tsv(file)),
+                       dt=data.table::fread(file, showProgress=FALSE))
R> res
Unit: milliseconds
 expr     min      lq    mean  median      uq      max neval
 tidy 6.67777 6.83458 7.13434 6.98484 7.25831  9.27452   100
   dt 1.98890 2.04457 2.37916 2.08261 2.14040 28.86885   100
R> 

That is a clear relative difference, though the absolute amount of time is not that relevant for such a small (demo) dataset.

Benchmark Processing

We can also look at the processing part:

R> rdin <- suppressMessages(readr::read_tsv(file))
R> dtin <- data.table::fread(file, showProgress=FALSE)
R> 
R> library(dplyr)
R> library(lubridate)
R> library(zoo)
R> 
R> transformTV <- function(polls_2016=rdin) {
+     polls_2016 <- polls_2016 %>%
+         filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
+     polls_2016 <- polls_2016 %>%
+         mutate(end_date = ymd(end_date))
+     polls_2016 <- polls_2016 %>%
+         right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date), 
+                                                   max(polls_2016$end_date), by="days")))
+     polls_2016 <- polls_2016 %>%
+         group_by(end_date) %>%
+         summarise(Clinton = mean(Clinton),
+                   Trump = mean(Trump))
+ 
+     rolling_average <- polls_2016 %>%
+         mutate(Clinton.Margin = Clinton-Trump,
+                Clinton.Avg =  rollapply(Clinton.Margin,width=14,
+                                         FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                         by=1, partial=TRUE, fill=NA, align="right"))
+ }
R> 
R> transformDT <- function(dtin) {
+     pollsDT <- copy(dtin) ## extra work to protect from reference semantics for benchmark
+     pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
+     pollsDT[, end_date := as.IDate(end_date)]
+     pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]), 
+                                                   max(pollsDT[,end_date]), by="days")), on="end_date"]
+     pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), 
+                        by=end_date][, Clinton.Margin := Clinton-Trump]
+     pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                        by=1, partial=TRUE, fill=NA, align="right")]
+ }
R> 
R> res <- microbenchmark(tidy=suppressMessages(transformTV(rdin)),
+                       dt=transformDT(dtin))
R> res
Unit: milliseconds
 expr      min       lq     mean   median       uq      max neval
 tidy 12.54723 13.18643 15.29676 13.73418 14.71008 104.5754   100
   dt  7.66842  8.02404  8.60915  8.29984  8.72071  17.7818   100
R> 

Not quite a factor of two on the small data set, but again a clear advantage. data.table has a reputation for doing really well for large datasets; here we see that it is also faster for small datasets.

Side-by-side

Stripping out the reading, as well as the plotting (both of which are about the same), we can compare the essential data operations.

Summary

We found a simple task solved using code and packages from an increasingly popular sub-culture within R, and contrasted it with a second approach. We find the second approach to i) have fewer dependencies, ii) use less code, and iii) run faster.

Now, undoubtedly the former approach will have its staunch defenders (and that is all good and well, after all choice is good and even thirty years later some still debate vi versus emacs endlessly) but I thought it instructive to at least be able to make an informed comparison.

Acknowledgements

My thanks to G. Elliot Morris for a fine example, and of course a fine blog and (if somewhat hyperactive) Twitter account.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaBen Martin: 4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.


The shelves are plywood which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!


,

Planet DebianRuss Allbery: New year haul

Some newly acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.

Planet DebianShirish Agarwal: PC desktop build, Intel, spectre issues etc.

This is and would be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a Kobian/Mercury motherboard. The motherboards were actually called Mercury, a brand which was later sold to Kobian, which kept the brand name. The motherboards and the CPU/processor used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed decent for a country that had just come out of the non-alignment movement and had also chosen to come out of isolationist tendencies (technological and otherwise). Most middle-class income families got their first taste of computers after y2k. There were quite a few y2k incomes, which prompted the Government to lower duties further.

One of the highlights during 1991, when satellite TV came, was CNN (probably CNN International) showing the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or of what was happening in other parts of the world.

Computer systems at that time were considered a luxury item and duties were sky-high (between 1992 and 2001). The launch of Mars Pathfinder and its subsequent successful landing on the Martian surface also catapulted people’s imagination about PCs and micro-processors.

I can still recall the excitement among young people of my age at first seeing the liftoff from Cape Canaveral and then later the processed images from Spirit’s cameras showing a desolate desert-type land. We also witnessed the beginnings of the ‘International Space Station‘ (ISS).

Me and a few of my friends had drunk a lot of the Carl Sagan and other sci-fi kool-aids/stories. Star Trek, the movies and the universal values held/shared by them were a major influence on all our lives.

People came to know about citizen-based/distributed science projects, the y2k fear appeared to be unfounded; all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as to take computers off the restricted list, which led to competition and finally the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many a middle-class man and woman in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III on an i810 chipset (integrated graphics) with 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for: some light gaming, some webmail, watching movies etc., running on a Mercury board. I don’t remember the code-name, partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows ’98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened, but I think somewhere within a year or two of that time-frame Mercury India filed for bankruptcy and the name and manufacturing were sold to Kobian. After Kobian took over ownership, it said it would honor neither the 3/5 year warranty nor even repairs on the motherboards Mercury had sold; this created a lot of bad will against the company and relegated it to the bottom of the pile for both experienced and new system-builders. Also, Mercury motherboards weren’t reputed/known to have a long life, although the one I had gave me quite a decent run.

The next machine I purchased (around 2009/2010) was a Pentium Dual-core, an LGA Willamette which had out-of-order execution; the Meltdown bug which is making news nowadays has history going this far back. I think I bought it in 45nm, which was a huge jump from the previous version, although still in the mATX package. Again the board was from Mercury (Intel 845 chipset, DDR2 2 GB RAM, and SATA came to stay).

So Meltdown has been in existence for 10-12 odd years and is in everything which uses either Intel or ARM processors.

As you can probably make out, most systems came to us stretched out 2-3 years later than when they were launched in the American and/or European markets. Also, business or tourism travel was neither as easy, smooth nor transparent as it is today. All of which added to the delay in getting new products in India.

Sadly, the Indian market is similar to other countries in that Intel is used in more than 90% of machines. I know of a few institutions (though they are pretty rare) who insisted on and got AMD solutions.

That was the time when Gigabyte came onto the scene, which formed the basis of my Wolfdale-3M 45nm system; it was in the same price range as the earlier models and offered a weeny tiny bit of additional graphics performance. To the best of my knowledge, it was perhaps the first budget motherboard to offer solid-state capacitors. The mobo-processor bundle used to be in the range of INR 7-8k excluding RAM, cabinet etc. I had a Philips 17″ CRT display which ran a good decade or so, so I just had to get the new cabinet, motherboard, CPU and RAM and was good to go.

A few months later, at a hardware exhibition held in the city, I was invited to an Asus party; Asus was just putting a toe-hold in the Indian market. I went to the do and enjoyed myself. They had a small competition where they asked some questions and asked if people had queries. To my surprise, I found that most people who were there were hardware vendors and for one reason or another they chose to remain silent. Hence I got an AMD Asus board. This is separate from the Gigabyte motherboard which I also won the same year in another competition, in the same time-frame. Both were mid-range motherboards (ATX build).

As I had just bought a Gigabyte (mATX) motherboard and had made the build, I had to give both the motherboards away, one to a friend and one to my uncle, and both were pleased with the AMD-based mobos, which they somehow paired with AMD processors. At that time AMD had one-upped Intel in both graphics and even bare computing, especially at the middle level, and they were striving to push into new markets.

Apart from the initial system bought, most of my systems when being changed were within an INR 20-25k/- budget, including any and all accessories I bought later.

The only really expensive parts I purchased have been an external hdd (1 TB WD Passport) and then a Viewsonic 17″ LCD, which together set me back around INR 10k/-, but both have given me adequate performance (both have outlived their warranty years), with the monitor being used almost 24×7 over 6 years or so, of course on GNU/Linux, specifically Debian. Both have been extremely good value for the money.

As I had been exposed to both of those motherboards, I had been following them and other motherboards as well. What was and has been interesting to observe is what Asus did later: it focused more on the high-end gaming market, while Gigabyte continued to dilute its energy across both mid- and high-end motherboards.

Cut to 2017, and I had seen quite a few reports –

http://www.pcstats.com/NewsView.cfm?NewsID=131618

http://www.digitimes.com/news/a20170904PD207.html

http://www.guru3d.com/news-story/asus-has-the-largest-high-end-intel-motherboard-share.html

All of which point to the fact that Asus has cornered a large percentage of the market, and specifically the gaming market. There are no formal numbers, as both Asus and Gigabyte choose to release only APAC numbers rather than a country-wide split, which would have made for some interesting reading.

Just so that people do not presume anything, there are about 4-5 motherboard vendors in the Indian market. There is Asus at the top (I believe), followed by Gigabyte, with Intel at a distant 3rd place (because it’s too expensive). There are also pockets of ASRock and MSI, and I know of people who follow them religiously, although their mobos are supposed to be somewhat more expensive than the two above. Asus and Gigabyte do try to fight it out with each other, but each has its core competency, I believe, with Asus being used by heavy gamers and overclockers more than Gigabyte.

Anyway, come October 2017, my main desktop died and I was left, as they say, up the creek without a paddle. I didn’t even have Net access for about 3 weeks due to BSNL’s or PMC’s foolishness, and then later small riots broke out due to the Koregaon Bhima conflict.

This led to a situation where I had to buy/build a system with oldish/half knowledge. I was open to having an AMD system, but both Datacare and even Rashi Peripherals, Pune, both of whom used to deal in AMD systems, shared that they had stopped dealing in AMD stuff some time back. While Datacare had AMD mobos, getting processors was an issue. Both vendors are near my home, so if I buy from them getting support becomes a non-issue. I could have gone out of my way to get an AMD processor, but getting support could have been an issue as I would have had to travel and I do not know those vendors well enough. Hence I fell back to the Intel platform.

I asked around quite a few PC retailers and distributors and found that the Asus Prime Z270-P was the only mid-range motherboard available at that time. I did come to know a bit later of other motherboards in the Z270 series, but most vendors didn’t/don’t stock them as there are capital, interest and stock costs.

History – Historically, there has also been a huge time lag between worldwide announcements of motherboards, processors etc., announcements of sale in India, and actually getting hands on the newest motherboards and processors, as seen above. This has led to quite a bit of frustration for many users. I have known of many a soul visiting Lamington Road, Mumbai to get the latest motherboard or processor. Even to date this system flourishes, as Mumbai has an international airport and there is always demand and people willing to pay a premium for the newest processor/motherboard even before any reviews are in.

I was highly surprised to learn recently that Prime Z370-P motherboards are already selling (just 3 months late) along with the Intel 8th generation processors, although these are still coming in as samples rather than the torrent some of the other motherboard combos might be.

In the end I bought an Intel i7400 chip and an Asus Prime Z270-P motherboard with 8 GB of 2400 MHz Corsair RAM, a 4 TB WD Green (5400 rpm) HDD, and a Circle 545 cabinet (with its almost criminal 400 Watt SMPS). I later came to know that it isn’t really even 400 Watts, but around 20-25% less. The whole package cost me north of INR 50k/-, and I still need to spend on a better SMPS (probably a Corsair or Cooler Master 80+ 600/650 Watt unit) and a few accessories to complete the system.

I will be changing the PSU most probably next week.

Circle SMPS Picture

Asus motherboard, i7400 and RAM

Disclosure – The neatness you see is not me. I was unsure if I would be able to put the heatsink on the CPU properly, as that is the most sensitive part while building a system. A bent pin on the CPU could play havoc as well as void the warranty on the CPU or motherboard or both. The new thing for me were the knobs that can be seen on the heatsink fan, something I hadn’t seen before. The vendor fixed the processor onto the mobo for me as well as tying up the remaining power cables without being asked, for which I am and was grateful, and I would definitely provide him with more business as and when I need components.

Future – While it’s OK for now, I’m still using a pretty old 2-speaker setup which I hope to upgrade to a 2.1/3.1 speaker setup, and to have a full 64 GB of 2400 MHz Kingston Razor/G.Skill/Corsair memory and an M.2 512 GB SSD.

If I do get the Taiwan DebConf bursary, I do hope to buy some or all of the above plus a Samsung or some other Android/Replicant/Librem smartphone. I have also been looking for a vastly simplified smartphone for my mum, with big letters and everything, but I have failed to find one in the Indian market. Of course this all depends on whether I get the bursary, and even after the bursary on whether global warranty and currency exchange work out in my favor vis-a-vis what I would have to pay in India.

Apart from the above, Taiwan is supposed to be a pretty good source for graphic novels, manga comics and lots of RPG games at very cheap prices, with covers, hand-drawn material etc. All of this is based upon a few friends’ anecdotal experiences, so I don’t know if all of that would still hold true if I manage to be there.

There are also quite a few chip foundries there, and maybe during DebConf a visit to one of them could be possible. It would be rewarding if the visit were to a 45nm-or-lower chip foundry, as India is still stuck at the 65nm range to date.

In the next post I will share my experience with the board, the CPU, the expectations I had from the Intel chip, and the somewhat disappointing experience of using Debian on the new board; not necessarily Debian’s fault, but rather the free software ecosystem being at fault here.

Feel free to point out any mistakes you find, grammatical or otherwise. The blog post has been in the works for over a couple of weeks, so it is possible for mistakes to have crept in.

Planet DebianDirk Eddelbuettel: Rcpp 0.12.15: Numerous tweaks and enhancements

The fifteenth release in the 0.12.* series of Rcpp landed on CRAN today after just a few days of gestation in incoming/.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, and the 0.12.14 release in November 2017, making it the nineteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1288 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a pretty large number of pull requests by a wide variety of authors. Most of these pull requests are very focused on a particular issue at hand. One was larger and ambitious with some forward-looking code for R 3.5.0; however this backfired a little on Windows and is currently "parked" behind a #define. Full details are below.

Changes in Rcpp version 0.12.15 (2018-01-16)

  • Changes in Rcpp API:

    • Calls from exception handling to Rf_warning() now correctly set an initial format string (Dirk in #777 fixing #776).

    • The 'new' Date and Datetime vectors now have is_na methods too. (Dirk in #783 fixing #781).

    • Protect more temporary SEXP objects produced by wrap (Kevin in #784).

    • Use public R APIs for new_env (Kevin in #785).

    • Evaluation of R code is now safer when compiled against R 3.5 (you also need to explicitly define RCPP_PROTECTED_EVAL before including Rcpp.h). Longjumps of all kinds (condition catching, returns, restarts, debugger exit) are appropriately detected and handled, e.g. the C++ stack unwinds correctly (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • The new function Rcpp_fast_eval() can be used for performance-sensitive evaluation of R code. Unlike Rcpp_eval(), it does not try to catch errors with tryEval in order to avoid the catching overhead. While this is safe thanks to the stack unwinding protection, this also means that R errors are not transformed to an Rcpp::exception. If you are relying on error rethrowing, you have to use the slower Rcpp_eval(). On old R versions Rcpp_fast_eval() falls back to Rcpp_eval() so it is safe to use against any versions of R (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • Overly-clever checks for NA have been removed (Kevin in #790).

    • The included tinyformat has been updated to the current version, Rcpp-specific changes are now more isolated (Kirill in #791).

    • Overly picky fall-through warnings by gcc-7 regarding switch statements are now pre-empted (Kirill in #792).

    • Permit compilation on ANDROID (Kenny Bell in #796).

    • Improve support for NVCC, the CUDA compiler (Iñaki Ucar in #798 addressing #797).

    • Speed up tests for NA and NaN (Kirill and Dirk in #799 and #800).

    • Rearrange stack unwind test code, keep test disabled for now (Lionel in #801).

    • Further condition away protect unwind behind #define (Dirk in #802).

  • Changes in Rcpp Attributes:

    • Addressed a missing Rcpp namespace prefix when generating a C++ interface (James Balamuta in #779).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ now shows Rcpp::Rcpp.plugin.maker() and not the outdated ::: use applicable to non-exported functions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: TLCockpit v0.8

Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays updating and polishing it, and also debugging problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d for debugging to tlcockpit, activating debugging. There is also -dd for more verbose debugging.
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors, see the following screenshot:
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load only the initial minimal information via info --data and load additional data on demand when details for a package are shown (see the example after this list). This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somehow a stranger) to below the package listing, and now also includes the currently running command, see screenshot after the next item.
  • nice spinner: Only an eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. See the attached screenshot, which also shows the new location of the status indicator and the additional information provided.
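To give an idea of what such an info --data call looks like, here is an illustrative invocation (the field list here is my own example, not necessarily the exact one TLCockpit requests):

tlmgr info --data name,category,shortdesc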

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.

Don MartiMore brand safety bullshit

There's enough bullshit on the Internet already, but I'm afraid I'm going to quote some more. This time from Ilyse Liffreing at IBM.

The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.

Bullshit.

One important part of this decision is black and white.

Either you give money to Nazis.

Or you don't give money to Nazis.

If Nazis are better at "programmatic" than the resting-and-vesting chill bros at the programmatic ad firms (and, face it, Nazis kick ass at programmatic), then the choice to spend ad money in a we're-kind-of-not-sure-if-this-goes-to-Nazis-or-not way is a choice that puts your brand on the wrong side of a black and white line.

There are plenty of Nazi-free places for brands to run ads. They might not be the cheapest. But I know which side of the line I buy from.

,

CryptogramFriday Squid Blogging: Te Papa Colossal Squid Exhibition Is Being Renovated

The New Zealand home of the colossal squid exhibit is being renovated.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Breaches Don't Affect Stock Price

Interesting research: "Long-term market implications of data breaches, not," by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies' stock, with a focus on the results relative to the performance of the firms' peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to + 0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies' betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • For the differences in the breached companies' beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre- breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn't going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Worse Than FailureError'd: Alphabetical Soup

"I appreciate that TIAA doesn't want to fully recognize that the country once known as Burma now calls itself Myanmar, but I don't think that this is the way to handle it," Bruce R. writes.

 

"MSI Installed an update - but I wonder what else it decided to update in the process? The status bar just kept going and going..." writes Jon T.

 

Paul J. wrote, "Apparently my occupation could be 'All Other Persons' on this credit card application!"

 

Geoff wrote, "So I need to commit the changes I didn't make, and my options are 'don't commit' or 'don't commit'?"

 

David writes, "This was after a 15 minute period where I watched a timer spin frantically."

 

"It's as if DealeXtreme says 'three stars, I think you meant to say FIVE stars'," writes Henry N.

 

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet DebianEddy Petrișor: Suppressing color output of the Google Repo tool

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides the fact it has a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find out what is the option I want is to look in the code.
To avoid repeatedly looking over the code to dig this up, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason, auto does not do the right thing (tm) in Windows and garbage is shown with auto.
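If the stray color codes come from git itself rather than from repo, git's own color output can be switched off globally as well (standard git configuration, mentioned here only as a pointer):

git config --global color.ui false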

,

Sociological ImagesBros and Beer Snobs

The rise of craft beer in the United States gives us more options than ever at happy hour. Choices in beer are closely tied to social class, and the market often veers into the world of pointlessly gendered products. Classic work in sociology has long studied how people use different cultural tastes to signal social status, but where once very particular tastes showed membership in the upper class—like a preference for fine wine and classical music—a world with more options offers status to people who consume a little bit of everything.

Photo Credit: Brian Gonzalez (Flickr CC)

But who gets to be an omnivore in the beer world? New research published in Social Currents by Helana Darwin shows how the new culture of craft beer still leans on old assumptions about gender and social status. In 2014, Darwin collected posts using gendered language from fifty beer blogs. She then visited four craft beer bars around New York City, surveying 93 patrons about the kinds of beer they would expect men and women to consume. Together, the results confirmed that customers tend to define “feminine” beer as light and fruity and “masculine” beer as strong, heavy, and darker.

Two interesting findings about what people do with these assumptions stand out. First, patrons admired women who drank masculine beer, but looked down on those who stuck to the feminine choices. Men, however, could have it both ways. Patrons described their choice to drink feminine beer as open-mindedness—the mark of a beer geek who could enjoy everything. Gender determined who got “credit” for having a broad range of taste.

Second, just like other exclusive markers of social status, the India Pale Ale held a hallowed place in craft brew culture to signify a select group of drinkers. Just like fancy wine, Darwin writes,

IPA constitutes an elite preference precisely because it is an acquired taste…inaccessible to those who lack the time, money, and desire to cultivate an appreciation for the taste.

Sociology can get a bad rap for being a buzzkill, and, if you’re going to partake, you should drink whatever you like. But this research provides an important look at how we build big assumptions about people into judgments about the smallest choices.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianMike Gabriel: Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

More a reminder for myself, than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is 0.40.0 or newer.

Reasoning: if you build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, this option only got added in meson 0.40.0. So you need to make sure that the meson version from stretch-backports gets pulled in for your build, too. The build will fail when using the meson version found in Debian stretch.
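For context, the packaging side is usually just the standard dh sequencer with the meson buildsystem; a minimal debian/rules looks roughly like this (a generic sketch, not copied from any particular package), and it is this dh_auto_configure step that ends up passing --wrap-mode under compat level 11:

#!/usr/bin/make -f

%:
	dh $@ --buildsystem=meson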

TEDNew clues about the most mysterious star in the universe, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

New clues about the most mysterious star in the universe. KIC 8462852 (often called “Tabby’s star,” after the astronomer Tabetha Boyajian, who led the first study of the star) intermittently dims as much as 22% and then brightens again, for a reason no one has yet quite figured out. This bizarre occurrence led astronomers to propose over a dozen theories for why the star might be dimming, including the fringe theory that it was caused by an alien civilization using the planet’s energy. Now, new data shows that the dimming isn’t fully opaque; certain colors of light are blocked more than others. This suggests that what’s causing the star to dim is dust. After all, if an opaque object — like a planet or alien megastructure — was passing in front of the star, all of the light would be blocked equally. Tabby’s star is due to become visible again in late February or early March of 2018. (Watch Boyajian’s TED Talk)

TED’s new video series celebrates the genius design of everyday objects. What do the hoodie, the London Tube Map, the hyperlink, and the button have in common? They’re everyday objects, often overlooked, that have profoundly influenced the world around us. Each 3- to 4- minute episode of TED’s original video series Small Thing Big Idea celebrates one of these objects, with a well-known name in design explaining what exactly makes it so great. First up is Michael Bierut on the London Tube Map. (Watch the first episode here and tune in weekly on Tuesday for more.)

The science of black holes. In the new PBS special Black Hole Apocalypse, astrophysicist Janna Levin explores the science of black holes, what they are, why they are so powerful and destructive, and what they might tell us about the very origin of our existence. Dubbing them the world’s greatest mystery, Levin and her fellow scientists, including astronomer Andrea Ghez and experimental physicist Rainer Weiss, embark on a journey to portray the magnitude and importance of these voids that were long left unexplored and unexplained. (Watch Levin’s TED Talk, Ghez’s TED Talk, and read Weiss’ Ideas piece.)

An organized crime thriller with non-fiction roots. McMafia, a television show starring James Norton, premiered in the UK in early January. The show is a fictionalized account of Misha Glenny’s 2008 non-fiction book of the same name. The show focuses on Alex Goldman, the son of an exiled Mafia boss who wants to put his family’s history behind him. Unfortunately, a murder foils his plans and to protect his family, he must face up to various international crime syndicates. (Watch Glenny’s TED Talk)

Inside the African-American anti-abortion movement. In her new documentary for PBS’ Frontline, Yoruba Richen examines the complexities of the abortion debate as it relates to US’ racial history. Richen speaks with African-American members of both the pro-life and the anti-abortion movements, as her short doc follows a group of anti-abortion activists as they work in the black community. (Watch Richen’s TED Talk.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Worse Than FailureCodeSOD: The Least of the Max

Adding assertions and sanity checks to your code is important, especially when you’re working in a loosely-typed language like JavaScript. Never assume the input parameters are correct, assert what they must be. Done correctly, they not only make your code safer, but also easier to understand.

Matthias’s co-worker… doesn’t exactly do that.

      function checkPriceRangeTo(x, min, max) {
        if (max == 0) {
          max = valuesPriceRange.max
        }
        min = Math.min(min, max);
        max = Math.max(min, max);
        x = parseInt(x)
        if (x == 0) {
          x = 50000
        }

        //console.log(x, 'min:', min, 'max:', max);
        return x >= min && x <= max
      }

This code isn’t bad, per se. I knew a kid, Marcus, in middle school who wore the same green sweatshirt every day and carried a musty 19th-century science textbook that discussed phlogiston in his backpack. Over lunch, he was happy to strike up a conversation with you about the superiority of phlogiston theory over relativity. He wasn’t bad, but he was annoying and not half as smart as he thought he was.

This code is the same. Sure, x might not be a numeric value, so let’s parseInt first… which might return NaN. But we don’t check for NaN, we check for 0. If x is 0, then make it 50,000. Why? No idea.

The real treat, though, is the flipping of min/max. If the calling code gets them backwards (min=6, max=1) then instead of swapping them, which is obviously the intent, this code makes them both equal to the lower of the two: min becomes Math.min(6, 1) = 1, and then max becomes Math.max(1, 1) = 1.

In the end, Matthias has one advantage in dealing with this pest, that I didn’t have in dealing with Marcus. He could actually make it go away. I just had to wait until the next year, when we didn’t have lunch at the same time.


Planet DebianJoey Hess: cubietruck temperature sensor

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like: "pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)"

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins.

You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful.

You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

    onewire_device {
       compatible = "w1-gpio";
       gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
       pinctrl-names = "default";
       pinctrl-0 = <&my_w1_pin>;
    };

In the `&pio` section, add this:

    my_w1_pin: my_w1_pin@0 {
         allwinner,pins = "PG8";
         allwinner,function = "gpio_in";
    };

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet, counted from zero (so PB17, for example, would be "pio 1 17"). I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
    ./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
    dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.
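For reference, the install step can look roughly like this when run on the cubietruck itself (the dtbs directory may not exist yet; adjust the paths if you built the dtb on another machine):

    mkdir -p /etc/flash-kernel/dtbs
    cp sun7i-a20-cubietruck.dtb /etc/flash-kernel/dtbs/
    flash-kernel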

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do, just hang them all off the same wire.. er wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> cat *-*/w1_slave
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375

So, it's 15.37 Celsius in my house. I need to go feed the fire, this took too long to get set up.
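If you'd rather read the sensors from a script than with cat, here's a small Python sketch that works with the w1_slave format shown above; the t= value is in thousandths of a degree Celsius.

    import glob

    def read_ds18b20():
        """Yield (sensor id, temperature in Celsius) for each 1-wire sensor."""
        for path in glob.glob('/sys/bus/w1/devices/28-*/w1_slave'):
            with open(path) as f:
                crc_line, data_line = f.read().splitlines()
            if not crc_line.endswith('YES'):
                continue  # skip readings with a failed CRC check
            millidegrees = int(data_line.rsplit('t=', 1)[1])
            yield path.split('/')[-2], millidegrees / 1000.0

    for sensor, celsius in read_ds18b20():
        print(sensor, celsius)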

future work

Are you done at this point? I fear not entirely, because what happens when there's a kernel upgrade? If the device tree has changed in some way in the new kernel, you might need to update the modified device tree file. Or it might not boot properly or not work in some way.

With Raspbian, you don't need to modify the device tree. Instead it has support for device tree overlay files, which add some entries to the main device tree. The distribution includes a bunch of useful overlays, including one that enables GPIO pins. The Raspberry Pi's bootloader takes care of merging the main device tree and the selected overlays.

There are u-boot patches to do such merging, or the merging could be done before reboot (by flash-kernel perhaps), but apparently Debian's device tree files are built without phandle based referencing needed for that to work. (See http://elektranox.org/2017/05/0020-dt-overlays/)

There's also a kernel patch to let overlays be loaded on the fly using configfs. It seems to have been around for several years without being merged, for whatever reason, but would avoid this problem nicely if it ever did get merged.

,

Planet DebianThorsten Alteholz: First steps with arm64

As it was Christmas time recently, I wanted to treat myself to something special. So I ordered a Macchiatobin from SolidRun. Unfortunately their delivery times are no exaggeration, and I had to wait about two months for my device. I couldn’t celebrate Christmas with it, but fortunately the New Year.

Anyway, first I tried to use the included U-Boot to start the Debian installer from a USB stick. Oh boy, that was a bad idea and in retrospect just a waste of time. But there is debian-arm@l.d.o, and Steve McIntyre was kind enough to help me out of my vale of tears.

First I put the EDK2 flash image from Leif on an SD card and set the jumper on the board to boot from it (for SD card boot, the rightmost jumper has to be set!), and off we go. Afterwards I put the debian-testing-arm64-netinst.iso on a USB stick and tried to boot it. Unfortunately I was hit by #887110 and had to use a mini installer from here. Installation went smoothly, and as a last step I had to start the rescue mode and install grub to the removable media path. It is a separate menu entry in the installer, so no need to enter cryptic commands :-).
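(For the record, if you ever have to redo that step outside the installer, it boils down to something like the following; the ESP mount point and the debconf route are assumptions based on a standard grub-efi-arm64 setup, so treat this as a pointer rather than gospel.)

    # either answer "yes" to the removable media path question:
    dpkg-reconfigure grub-efi-arm64
    # or install directly to the removable media path:
    grub-install --target=arm64-efi --efi-directory=/boot/efi --removable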

Voila, rebooted and my Macchiatobin is up and running.

Planet DebianMatthew Garrett: Privacy expectations and the connected home

Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up being logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There's some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer)


Valerie AuroraGetting free of toxic tech culture

This post was co-authored by Valerie Aurora and Susan Wu, and cross-posted on both our blogs.

Marginalized people leave tech jobs in droves, yet we rarely write or talk publicly about the emotional and mental process of deciding to leave tech. It feels almost traitorous to publicly discuss leaving tech when you’re a member of a marginalized group – much less actually go through with it.

There are many reasons we feel this way, but a major reason is that the “diversity problem in tech” is often framed as being caused by marginalized people not “wanting” to be in tech enough: not taking the right classes as teenagers, not working hard enough in university, not “leaning in” hard enough at our tech jobs. In this model, it is the moral responsibility of marginalized people to tolerate unfair pay, underpromotion, harassment, and assault in order to serve as role models and mentors to the next generation of marginalized people entering tech. With this framing, if marginalized people end up leaving tech to protect ourselves, it’s our duty to at least keep quiet about it, and not scare off other marginalized people by sharing our bad experiences.

A printer converted to a planter (photo CC BY-SA Ben Stanfield, https://flic.kr/p/2CjHL)

Under that model, this post is doubly taboo: it’s a description of how we (Susan and Valerie) went through the process of leaving toxic tech culture, as a guide to other marginalized people looking for a way out. We say “toxic tech culture” because we want to distinguish between leaving tech entirely, and leaving areas of tech which are abusive and harmful. Toxic tech culture comes in many forms: the part of Silicon Valley VC hypergrowth culture that deifies founders as “white, male, nerds who’ve dropped out of Harvard or Stanford,” the open source software ecosystem that so often exploits and drives away its best contributors, and the scam-riddled cryptocurrency community, to name just three.

What is toxic tech culture? Toxic tech cultures are those that demean and devalue you as holistic, multifaceted human beings. Toxic tech cultures are those that prioritize profits and growth over human and societal well being. Toxic tech cultures are those that treat you as replaceable cogs within a system of constant churn and burnout.

But within tech there are exceptions to the rule: technology teams, organizations, and communities where marginalized people can feel a degree of safety, belonging, and purpose. You may be thinking about leaving all of tech, or leaving a particular toxic tech culture for a different, better tech culture; either way, we hope this post will be useful to you.

A little about us: Valerie spent more than ten years working as a software engineer, specializing in file systems, Linux, and operating systems. Susan grew up on the Internet, and spent 25 years as a software developer, a community builder, an investor, and a VC-backed Silicon Valley founder. We were both overachievers who advanced quickly in our fields – until we could no longer tolerate the way we were treated, or be complicit in a system that did not match our values. Valerie quit her job as a programmer to co-found a tech-related non-profit for women, and now teaches ally skills to tech workers. Susan relocated to France and Australia, co-founded Project Include, a nonprofit dedicated to improving diversity and inclusion in tech, and is now launching a new education system. We are both still involved in tech to various degrees, but on our own terms, and we are much happier now.

We disagree that marginalized people should stay silent about how and why they left toxic tech culture. When, for example, more than 50% of women in tech leave after 12 years, there is an undeniable need for sharing experience and hard-learned lessons. Staying silent about the unfairness that 37% of underrepresented people of color cite as a reason they left tech helps no one.

We reject the idea that it is the “responsibility” of marginalized people to stay in toxic tech culture despite abuse and discrimination, solely to improve the diversity of tech. Marginalized people have already had to overcompensate for systemic sexist, ableist, and racist biases in order to earn their roles in tech. We believe people with power and privilege are responsible for changing toxic tech culture to be more inclusive and fair to marginalized people. If you want more diversity in tech, don’t ask marginalized people to be silent, to endure often grievous discrimination, or to take on additional unpaid, unrecognized labor – ask the privileged to take action.

For many marginalized people, our experience of being in tech includes traumatic experience(s) which we may not yet have fully come to terms with and that influenced our decisions to leave. Sometimes we don’t make a direct connection between the traumatic experiences and our decision to leave. We just find that we are “bored” and are no longer excited about our work, or start avoiding situations that used to be rewarding, like conferences, speaking, and social events. Often we don’t realize traumatic events are even traumatic until months or years later. If you’ve experienced trauma, processing the trauma is necessary, whether or not you decide to leave toxic tech culture.

This post doesn’t assume that you are sure that you want to leave your current area of tech, or tech as a whole. Even now, we ourselves aren’t “sure” we want to permanently leave the toxic tech cultures we were part of – maybe things will get better enough that we will be willing to return. You can take the steps described in this post and stay in your current area of tech for as long as you want – you’ll just be more centered, grounded, and happy.

The steps we took are described in roughly the order we took them, but they all overlapped and intermixed with each other. Don’t feel like you need to do things in a particular order or way; this is just to give you some ideas on what you could do to work through your feelings about leaving tech and any related trauma.

Step 1: Deprogram yourself from the cult of tech

The first step is to start deprogramming yourself from the cult of tech. Being part of toxic tech culture has a lot in common with being part of a cult. How often have you heard a Silicon Valley CEO talk about how his (it’s almost always a he) startup is going to change the world? The refrain of how a startup CEO is going to save humanity is so common that it’s actually uncommon for a CEO to not use saviour language when describing their startup. Cult leaders do the same thing: they create a unique philosophy, imbued with some sort of special message that they alone can see or hear, convince people that only they have the answers for what ails humanity, and use that influence to control the people around them.

Consider this list of how to identify a cult, and how closely this list mirrors patterns we can observe in Silicon Valley tech:

  • “Be wary of any leader who proclaims him or herself as having special powers or special insight.” How often have you heard a Silicon Valley founder or CEO proclaimed as some sort of genius, and they alone can figure out how to invent XYZ? Nearly every day, there’s some deific tribute to Elon Musk or Mark Zuckerberg in the media.
  • “The group is closed, so in other words, although there may be outside followers, there’s usually an inner circle that follows the leader without question, and that maintains a tremendous amount of secrecy.” The Information just published a database summarizing how secretive, how protective, how insular the boards are for the top 30 private companies in tech. Here’s what they report: “Despite their enormous size and influence, the biggest privately held technology companies eschew some basic corporate governance standards, blocking outside voices, limiting decision making to small groups of mostly white men and holding back on public disclosures, an in-depth analysis by The Information shows.”
  • “A very important aspect of cult is the idea that if you leave the cult, horrible things will happen to you.” There’s an insidious reason why your unicorn startup provides you with a free cafeteria, gym, yoga rooms, and all night snack bars: they never want you to leave. And if you do leave the building, you can stay engaged with Slack, IM, SMS, and every other possible communications tool so that you can never disconnect. They then layer over this with purported positive cultural messaging around how lucky, how fortunate you are to have landed this job — you were the special one selected out of thousands of candidates. Nobody else has it as good as we do here. Nobody else is as smart, as capable, as special as our team. Nobody else is building the best, most impactful solutions to solve humanity’s problems. If you fall off this treadmill, you will become irrelevant, you’ll be an outsider, a consumer instead of a builder, you’ll never be first on the list for the Singularity, when it happens. You’ll be at the shit end of the income inequality distribution funnel.

Given how similar toxic tech culture (and especially Silicon Valley tech culture) is to cult culture, leaving tech often requires something like cult-deprogramming techniques. We found the following steps especially useful for deprogramming ourselves from the cult of tech: recognizing our unconscious beliefs, experimenting with our identity, avoiding people who don’t support us, and making friendships that aren’t dependent on tech.

Recognize your unconscious beliefs

One cult-like aspect of toxic tech culture is a strong moral us-vs-them dichotomy: either you’re “in tech,” and you’re important and smart and hardworking and valuable, or you are not “in tech” because you are ignorant and untalented and lazy and irrelevant. (What are the boundaries of “in tech?” Well, the more privileged you are, the more likely people will define you as “in tech” – so be generous to yourself if you are part of a marginalized group. Or read more about the fractal nature of the gender binary and how it shows up in tech.)

We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. Even now, as Susan is launching a new education startup in Australia, she’s trying to be careful not to assume that just because people aren’t doing things in the “Silicon Valley, lean startup, agile way,” they’re automatically doing it wrong. In reality, the best way to do things is probably not based on any particular dogma, but is one that reflects a healthy balance of diverse perspectives and styles.

The first step to ridding yourself of the harmful belief that only people who are “in tech” or doing things in a “startup style” are good or smart or valuable is surfacing the unconscious belief to the conscious level, so you can respond to it. Recognize and name that belief when it comes up: when you think about leaving your job and feel fear, when you meet a new person and immediately lose interest when you learn their job is not “technical,” when you notice yourself trying to decide if someone is “technical enough.” Say to yourself, “I am experiencing the belief that only people I consider technical are valuable. This isn’t true. I believe everyone is valuable regardless of their job or level of technical knowledge.”

Experiment with your self-identity

The next step is to experiment with your own self-identity. Begin thinking of yourself as having different non-tech jobs or self-descriptions, and see what thoughts come up. React to those thoughts as though you were reacting to a friend you care about who was saying those things about them. Try to find positive things to think and say about your theoretical new job and new life. Think about people you know with that job and ask yourself if you would say negative things about their job to them. Some painful thoughts and experiences will come up during this time; aim to recognize them consciously and process them, rather than trying to stuff them down or make them go away.

When you live in Silicon Valley, it’s easy for your work life to consume 95% of your waking hours — this is how startups are designed, after all, with their endless perks and pressures to socialize within the tribe. Oftentimes, promotions go hand in hand with socializing successfully within the startup scene. What can you do to carve out several hours a week just for yourself, and an alternate identity that isn’t defined by success within toxic tech culture? How do you make space for self care? For example, Susan began to take online writing courses, and found that the outlet of interacting with poets and fiction writers helped ground her.

If necessary, change the branding of your personal life. Stop wearing tech t-shirts and get shirts that reflect some other part of your self. Get a different print for your office wall. Move the tech books into one out-of-the-way shelf and donate any you don’t use right now (especially the ones that you have been planning to read but never got around to). Donate most of your conference schwag and stop accepting new schwag. Pack away the shelf of tech-themed tchotchkes or even (gasp) throw them away. Valerie went to a “burn party” on Ocean Beach, where everyone brought symbols of old jobs that they were happy to be free of and symbolically burned them in a beach bonfire. You might consider a similar ritual.

De-emphasize tech in your self-presentation. Change any usernames that reference your tech interests. Rewrite any online bios or descriptions to emphasize non-tech parts of your life. Start introducing yourself by talking about your non-tech hobbies and interests rather than your job. You might even try introducing yourself to new people as someone whose primary job isn’t tech. Valerie, who had been writing professionally for several years, started introducing herself as a writer at tech events in San Francisco. People who would have talked to her had she introduced herself as a Linux kernel developer would immediately turn away without a second word. Counterintuitively, this made her more determined to leave her job, when she saw how inconsiderate her colleagues were when she did not make use of her technical privilege.

Avoid unsupportive people

Identify any people in your life who are consistently unsupportive of you, or only supportive when you perform to their satisfaction, and reduce your emotional and financial dependence on them. If you have friends or idols who are unhelpfully critical or judgemental, take steps to see or hear from them less often. Don’t seek out their opinion and don’t stoke your admiration for them. This will be difficult the closer and more dependent you are on the person; if your spouse or manager is one of these people, you have our sympathy. For more on this dynamic and how to end it, see this series of posts about narcissism, co-narcissism, and tech.

Depressingly often, we especially seek the approval of people who give approval sparingly (think about the popularity of Dr. House, who is a total jerk). If you find yourself yearning for the approval of someone in tech who has been described as an “asshole,” this is a great time to stop. Some helpful tips to stop seeking the approval of an asshole: make a list of cruel things they’ve done, make a list of times they were wrong, stop reading their writing or listening to their talks, filter them out of your daily reading, talk to people who don’t know who that person is or care what they think, listen to people who have been hurt by them, and spend more time with people who are kind and nurturing.

At the same time, seek out and spend more time with people who are generally supportive of you, especially people who encourage experimentation and personal change. You may already have many of these people in your life, but don’t spend much time thinking about them because you can depend on their friendship and support. Reach out to them and renew your relationship.

Make friendships that don’t depend on tech

If your current social circle consists entirely of people who are fully bought into toxic tech culture, you may not have anyone in your life willing to support a career change. To help solve this, make friendships that aren’t dependent on your identity as a person in tech. The goal is to have a lot of friendships that aren’t dependent on your being in tech, so that if you decide to leave, you won’t lose all your friends at the same time as your job. Being friends with people who aren’t in tech will help you get an outside perspective on the kind of tech culture you are part of. It also helps you envision a future for yourself that doesn’t depend on being in toxic tech culture. You can still have lots of friends in tech, you are just aiming for diversity in your friendships.

One way to make this easier is to focus on your existing friendships that are “near tech,” such as people working in adjacent fields that sometimes attend tech conferences, but aren’t “in tech” themselves. Try also getting a new hobby, being more open to invitations to social events, and contacting old friends you’ve fallen out of touch with. Spend less time attending tech-related events, especially if you currently travel to a lot of tech conferences. It’s hard to start and maintain new local friendships when you’re constantly out of town or working overtime to prepare a talk for a conference. If you have a set of conferences you attend every year, it will feel scary the first time you miss one of them, but you’ll notice how much more time you have to spend with your local social circle.

Making friends outside of your familiar context (tech co-workers, tech conferences, online tech forums) is challenging for most people. If you learned how to socialize entirely in tech culture, you may also need to learn new norms and conventions (such as how to have a conversation that isn’t about competing to show who knows more about a subject). Both Valerie and Susan experienced this when we started trying to make friends outside of toxic tech culture: all we knew how to talk about was startups, technology, video games, science fiction, scientific research, and (ugh) libertarian economic philosophy. We discovered people outside toxic tech culture wanted to talk about a wider range of topics, and often in a less confrontational way. And after a lifetime of socialization to distrust and discount everyone who wasn’t a man, we learned to seek out and value friendships with women and non-binary people.

If making new friends sounds intimidating, we recommend checking out Captain Awkward’s practical advice on making friends. Making new friends takes work and willingness to be rejected, but you’ll thank yourself for it later on.

Step 2: Make room for a career change

If you are already in a place where you have the freedom to make a big career change, congratulations! But if changing careers seems impossibly hard right now, that’s okay too. You can make room for a career change while still working in tech. Even if you end up deciding to stay in your current job, you will likely appreciate the freedom and flexibility that you’ve opened up for yourself.

Find a career counselor

The most useful action you can take is to find a career counselor who is right for you, and be honest with them about your fears, goals, and desires. Finding a career counselor is a lot like finding a dentist or a therapist: ask your friends for recommendations, read online reviews, look for directories or lists, and make an appointment for a free first meeting. If your first meeting doesn’t click, go ahead and try another career counselor until you find someone you can work with. A good career counselor will get a comprehensive view of your entire life (including family and friends) and your goals (not just job-related goals), and give you concrete steps to take to bring you closer to your goals.

Sometimes a career counselor’s job is explaining to you how the job you want but thought was impossible to get is actually possible. Valerie started seeing a career counselor about two years before she quit her last job as a software engineer and co-founded a non-profit. It took her about five years to get everything she listed as part of what she thought was an unattainable dream job (except for the “view of the water from her office,” which she is still working on). All the rest of this section is a high-level generic version of the advice a good career counselor will give you.

Improve your financial situation

Many tech jobs pay relatively well, but many people in tech would still have a hard time switching careers tomorrow because they don’t have enough money saved or couldn’t take a pay cut (hello, overheated rental markets and supporting your extended family). Don’t assume you’ll have to take a pay cut if you leave tech or your particular part of toxic tech culture, but it gives you more flexibility if you don’t have to immediately start making the same amount of money in a different job.

Look for ways to change your lifestyle or your expectations in ways that let you save money or lower your bills. Status symbols and class markers will probably loom large here and it’s worth thinking about which things are most valuable to you and which ones you can let go. You might find it is a relief to no longer have an expensive car with all its attendant maintenance and worries and fear, but that you really value the weekly exercise class that makes you feel happier and more energetic the rest of the week. Making these changes will often be painful in the short term but pay off in the long term. Valerie ended up temporarily moving out of the San Francisco Bay Area to a cheaper area near her family, which let her save up money and spend less while she was planning a career change. She moved back to the Bay Area when she was established in her new career, into a smaller, cheaper apartment she could afford on her new salary. Today she is making more money than she ever did as a programmer.

Take stock of your transferrable skills

Figure out what you actually like to do and how much of that is transferrable to other fields or jobs. One way to do this is to look back at, say, the top seven projects you most enjoyed doing in your life, either for your job or as a volunteer. What skills were useful to you in getting those projects done? What parts of doing that project did you enjoy the most? For example, being able to quickly read and understand a lot of information is a transferrable skill that many people enjoy using. The ability to persuade people is another such skill, useful for selling gym memberships, convincing people to recycle more, teaching, getting funding, and many other jobs. Once you have an idea of what it is that you enjoy doing and that is transferrable to other jobs, you can figure out what jobs you might enjoy and would be reasonably good at from the beginning.

Think carefully before signing up for new education

This is not necessarily the time to start taking career-related classes or going back to university in a serious way! If you start taking classes without first figuring out what you enjoy, what your skills are, and what your goals are, you are likely to be wasting your time and money and making it more difficult to find your new career. We highly recommend working with a career counselor before spending serious money or time on new training or classes. However, it makes sense to take low-cost, low-time commitment classes to explore what you enjoy doing, open your mind to new possibilities, or meet new people. This might look like a pottery class at the local community college, learning to 3D print objects at the local hackerspace, or taking an online course in African history.

Recognise there are many different paths in tech

The good news about software finally eating the world is that there are now many ways in which you can work in and around technology, without having to be part of toxic tech culture. Every industry needs tech expertise, and nearly every country around the world is trying to cultivate its own startup ecosystem. Many of these are much saner, kinder places to work than the toxic tech culture you may currently be part of, and a few of these involve industries that are more inclusive and welcoming of marginalized groups. Some of our friends have left the tech industry to work in innovation or technology related jobs in government, education, advocacy, policy, and arts. Though there are no great industries, and no ideal safe places for marginalized groups nearly anywhere in the world, there are varying degrees of toxicity and you can seek out areas with less toxicity. Try not to be swayed by the narrative that the only tech worth doing is the tech that’s written about in the media or receiving significant VC funding.

Step 3: Take care of yourself

Since being part of toxic tech culture is harmful to you as a person, simply focusing on taking care of yourself will help you put tech culture in its proper perspective, leaving you the freedom to be part of tech or not as you choose.

Prioritize self-care

Self-care means doing things that are kind or nurturing for yourself, whatever that looks like for you. Being in toxic tech culture means that many things take priority over self-care: fixing that last bug instead of taking a walk, going to an evening work-related meetup instead of staying home and getting to sleep on time, flying to yet another tech conference instead of spending time with family and friends. For Susan, prioritizing self-care looked like taking a road trip up the Pacific Coast Highway for the weekend instead of going to an industry fundraiser, or eating lunch by herself with a book instead of meeting up with another VC. One of the few constants in life is that you will always be stuck with your own self – so take care of it!

Learn to say no and enforce boundaries

We found that we were saying yes to too many things. The tech industry depends on extracting free or low-cost labor from many people in different ways: everything from salaried employees working 60-hour weeks to writing and giving talks in your “free time” – all of which are considered required for your career to advance. Marginalized people in tech are often expected to work an additional second (third?) shift of diversity-related work for free: giving recruiting advice, mentoring other marginalized people, or providing free counseling to more privileged people.

FOMO (fear of missing out) plays an important role too. It’s hard to cut down on free work when you are wondering, what if this is the conference where you’ll meet the person who will get you that venture capital job you’ve always wanted? What if serving on this conference program committee will get you that promotion? What if going to lunch with this powerful person so they can “pick your brain” for free will get you a new job? Early in your tech career, these kinds of investments often pay off but later on they have diminishing returns. The first time you attend a conference in your field, you will probably meet dozens of people who are helpful to your career. The twentieth conference – not so much.

For Valerie, switching from a salaried job to hourly consulting taught her the value of her time and just how many hours she was spending on unpaid work for the Linux and file systems communities. She taped a note reading “JUST SAY NO” to the wall behind her computer, and then sent a bunch of emails quitting various unpaid responsibilities she had accumulated. A few months later, she found she had made too many commitments again, and had to send another round of emails backing out of commitments. It was painful and embarrassing, but not being constantly frazzled and stressed out was worth it.

When you start saying no to unpaid work, some people will be upset and push back. After all, they are used to getting free work from you which gives them some personal advantage, and many people won’t be happy with this. They may try to make you feel guilty, shame you, or threaten you. Learning to enforce boundaries in the face of opposition is an important part of this step. If this is hard for you, try reading books, practicing with a friend, or working with a therapist. If you are worried about making mistakes when going against external pressure, keep in mind that simply exercising some control over your life choices and career path will often increase your personal happiness, regardless of the outcome.

Care for your mental health

Let’s be brutally honest: toxic tech culture is highly abusive, and there’s an excellent chance you are suffering from depression, trauma, chronic stress, or other serious psychological difficulties. The solution that works for many people is to work with a good therapist or counselor. A good licensed therapist is literally an expert in helping people work through these problems. Even if you don’t think your issues reach the level of seriousness that requires a therapist, a good therapist can help you with processing guilt, fear, anxiety, or other emotions that come up around the idea of leaving toxic tech culture.

Whether or not you work with a therapist, you can make use of many other forms of mental health care: meditation, support groups, mindfulness apps, walking, self-help books, spending time in nature, various spiritual practices, doing exercises in workbooks, doing something creative, getting alone time, and many more. Try a bunch of different things and pick what works for you – everyone is different. For Susan, practicing yoga four times a week, meditating, and working in her vegetable garden instead of reading Hacker News gave her much needed perspective and space.

Finding a therapist can be intimidating for many people, which is why Valerie wrote “HOWTO therapy: what psychotherapy is, how to find a therapist, and when to fire your therapist.” It has some tips on getting low-cost or free therapy if that’s what you need. You can also read Tiffany Howard‘s list of free and low-cost mental health resources which covers a wide range of different options, including apps, peer support groups, and low-cost therapy.

Process your grief

Even if you are certain you want to leave toxic tech culture, actually leaving is a loss – if nothing else, a loss of what you thought your career and future would look like. Grief is an appropriate response to any major life change, even if it is for the better. Give yourself permission to grieve and be sad, for whatever it is that you are sad about. A few of the things we grieved for: the meritocracy we thought we were participating in, our vision for where our careers would be in five years, the good times we had with friends at conferences, a sense of being part of something exciting and world-changing, all the good people who left before us, our relationships with people we thought would support us but didn’t, and the people we were leaving behind to suffer without us.

Step 4: Give yourself time

If you do decide to leave toxic tech culture, give yourself a few years to do it, and many more years to process your feelings about it. Valerie decided to stop being a programmer two years before she actually quit her programming job, and then she worked as a file systems consultant on and off for five years after that. Seven years later, she finally feels mostly at peace about being driven out of her chosen career (though she still occasionally has nightmares about being at a Linux conference). Susan’s process of extricating herself from the most toxic parts of tech culture and reinvesting in her own identity and well being has taken many years as well. Her partner (who knows nothing about technology) and her two kids help her feel much more balanced. Because Susan grew up on the Internet and has been building in tech for 25 years, she feels like she’ll probably always be doing something in tech, or tech-related, but wants to use her knowledge and skills to do this on her own terms, and to use her hard won know-how to benefit other marginalized folks to successfully reshape the industry.

An invitation to share your story

We hope this post was helpful to other people thinking about leaving toxic tech culture. There is so much more to say on this topic, and so many more points of view we want to hear about. If you feel safe doing so, we would love to read your story of leaving toxic tech culture. And wherever you are in your journey, we see you and support you, even if you don’t feel safe sharing your story or thoughts.

Planet DebianRenata D'Avila: Not being perfect

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask what could I learn from this and how could I keep going and working on this project?

Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for github-icalendar, and because the mentor Daniel had suggested something like that, I thought it would be a good way of getting myself familiarized with how macro development is done for the MoinMoin wiki.
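Roughly, the kind of export I have in mind would look like this sketch, which uses the third-party icalendar package; the event fields and values here are made up for illustration, not the macro's real data model:

    from datetime import date
    from icalendar import Calendar, Event

    def events_to_ical(events):
        """Turn a list of event dicts into an iCalendar byte string."""
        cal = Calendar()
        cal.add('prodid', '-//FOSS events//example//')
        cal.add('version', '2.0')
        for ev in events:
            ical_event = Event()
            ical_event.add('summary', ev['title'])
            ical_event.add('dtstart', ev['start'])
            ical_event.add('dtend', ev['end'])
            cal.add_component(ical_event)
        return cal.to_ical()

    # made-up example event, just to show the output format
    print(events_to_ical([{'title': 'Example meetup',
                           'start': date(2018, 2, 1),
                           'end': date(2018, 2, 2)}]))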

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized, and maybe make it accessible via an API and/or export it as JSON. Then, either MoinMoin or any other FOSS community project could choose how to display and make use of the events.
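As a sketch of that idea (all the field names below are my own assumptions, not an agreed-upon schema), the standalone tool could model events as plain dictionaries and hand them out as JSON, leaving the rendering to whoever consumes it:

    import json

    # hypothetical minimal event model; the real schema is still to be decided
    events = [
        {
            'title': 'Example FOSS meetup',
            'start': '2018-02-01T19:00:00',
            'end': '2018-02-01T21:00:00',
            'location': 'Somewhere',
            'source': 'https://wiki.example.org/Events/ExampleMeetup',
        },
    ]

    print(json.dumps(events, indent=2))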

What did go wrong?

But the thing is... even if I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros, that come with MoinMoin, work perfectly. But macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide, but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), it wasn't enough.
  • I made sure to add the .css to all styles, both for common.css and screen.css, still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>

<<EventCalendar(category=CategoryEventCalendar)>>

<<EventCalendar(,category=CategoryEventCalendar)>>

<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

Those, just like EventCalendar, didn't work.

Going through these macros also made me realize how awfully documented most of them usually are, in particular about the installation and making it work with the whole system, even if the code is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution, that it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure it out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed and re-read it again to make sure I hadn't missed anything this time. I restarted the server and... nope, macros still don't work.

So, misconfigured UNIX filesystem? I wasn't quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?
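What I ended up running was roughly the following (how the locale actually reaches the Apache/mod_wsgi environment varies between setups, so take this as a pointer rather than the exact fix):

    # pick en_US.UTF-8 in the list and make it the default locale
    sudo dpkg-reconfigure locales
    # or just set it for the current shell session
    export LC_ALL=en_US.UTF-8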

Well, these errors really did go away... but even after restarting the apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories where I was putting it. I didn't realize that, all this time, the wiki wasn't actually running from the yourwiki/data/plugins/macro directory of the extracted source code. It is installed under /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner, after all, I was the one who installed it, but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!
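For anyone hitting the same wall, "copied the files there, set the appropriate owner" was roughly this (the www-data owner is an assumption based on a typical Apache/mod_wsgi setup; use whatever user your wiki actually runs as):

    sudo cp EventCalendar.py /usr/local/share/moin/data/plugin/macro/
    sudo chown www-data:www-data /usr/local/share/moin/data/plugin/macro/EventCalendar.py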

Mozilla Firefox screenshot showing MoinMoin wiki with the EventCalendar plugin working and displaying a calendar for January 2018

Planet DebianRenata D'Avila: Not being perfect

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that it means that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask what could I learn from this and how could I keep going and working on this project?

Coincidence or not, when I was wondering that I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that was very much me, because even though I had written down pretty much every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong candidate.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start. I wanted to add a piece of code to it that would allow to export the events data to the iCalendar format. Because this is sort of what I did in my contribution for the github-icalendar) and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized to how macro development is done for MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you can trust enough the people editing the wiki (and, therefore, creating the events), but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized and maybe making it assessible by an API and/or exporting as JSON? Then, either MoinMoin or any other FOSS community project could chose how to display and make use of them.

What did go wrong?

But the thing is... even though I studied the code, I couldn't see it running on my MoinMoin instance. I tried and tried but, generally speaking, I got stuck trying to get macros to work. Standard macros, the ones that come with MoinMoin, work perfectly. But I couldn't find a way to make macros from the MacroMarket work.

For the EventCalendar macro, I tried my best to follow the instructions in the Installation Guide, but I simply couldn't get it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), but that wasn't enough either.
  • I made sure to add the .css to all styles, both common.css and screen.css; it still didn't work.
  • I thought that maybe it was the arguments to the macro, so I tried adding it to the wiki page in the following ways:
<<EventCalendar>>

<<EventCalendar(category=CategoryEventCalendar)>>

<<EventCalendar(,category=CategoryEventCalendar)>>

<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how badly documented most of them usually are, in particular regarding installation and making them work with the whole system, even when the code itself is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings." Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution - it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure it out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed that and re-read the file to make sure I hadn't missed anything this time. I restarted the server and... nope, the macros still didn't work.

So, a misconfigured UNIX filesystem? I wasn't quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?
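On a Debian or Ubuntu box that roughly comes down to something like this (commands shown for illustration; on Debian you may first need to enable the locale in /etc/locale.gen):

sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8
# restart Apache so mod_wsgi picks up the new environment
sudo service apache2 restart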

Well, these errors really did go away... but even after restarting the apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify for this post which macro names had worked and which ones hadn't, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories I was putting it in. I didn't realize that, all this time, the wiki wasn't actually installed in the yourwiki/data/plugins/macro directory where the extracted source code is. It is installed in /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner - after all, I was the one who installed it - but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!
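In concrete terms, the fix boils down to something like this (the owner shown is a guess - on Debian, MoinMoin under Apache usually runs as www-data):

sudo cp EventCalendar.py /usr/local/share/moin/data/plugin/macro/
sudo chown www-data:www-data /usr/local/share/moin/data/plugin/macro/EventCalendar.py
sudo service apache2 restart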

Mozilla Firefox screenshot showing MoinMoin wiki with the EventCalendar plugin working and displaying a calendar for January 2018

Krebs on SecuritySome Basic Rules for Securing Your IoT Stuff

Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs.

Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

-Rule #1: Avoid connecting your devices directly to the Internet — either without a firewall or in front of it, by poking holes in your firewall so you can access them remotely. Putting your devices in front of your firewall is generally a bad idea because many IoT products were simply not designed with security in mind and making these things accessible over the public Internet could invite attackers into your network. If you have a router, chances are it also comes with a built-in firewall. Keep your IoT devices behind the firewall as best you can.

-Rule #2: If you can, change the thing’s default credentials to a complex password that only you will know and can remember. And if you do happen to forget the password, it’s not the end of the world: Most devices have a recessed reset switch that can be used to restore the thing to its factory-default settings (and credentials). Here’s some advice on picking better ones.

I say “if you can,” at the beginning of Rule #2 because very often IoT devices — particularly security cameras and DVRs — are so poorly designed from a security perspective that even changing the default password to the thing’s built-in Web interface does nothing to prevent the things from being reachable and vulnerable once connected to the Internet.

Also, many of these devices are found to have hidden, undocumented “backdoor” accounts that attackers can use to remotely control the devices. That’s why Rule #1 is so important.

-Rule #3: Update the firmware. Hardware vendors sometimes make available security updates for the software that powers their consumer devices (known as “firmware”). It’s a good idea to visit the vendor’s Web site and check for any firmware updates before putting your IoT things to use, and to check back periodically for any new updates.

-Rule #4: Check the defaults, and make sure features you may not want or need — like UPnP (Universal Plug and Play, which can easily poke holes in your firewall without you knowing it) — are disabled.

Want to know if something has poked a hole in your router’s firewall? Censys has a decent scanner that may give you clues about any cracks in your firewall. Browse to whatismyipaddress.com, then cut and paste the resulting address into the text box at Censys.io, select “IPv4 hosts” from the drop-down menu, and hit “search.”

If that sounds too complicated (or if your ISP’s addresses are on Censys’s blacklist) check out Steve Gibson‘s Shield’s Up page, which features a point-and-click tool that can give you information about which network doorways or “ports” may be open or exposed on your network. A quick Internet search on exposed port number(s) can often yield useful results indicating which of your devices may have poked a hole.

If you run antivirus software on your computer, consider upgrading to a “network security” or “Internet security” version of these products, which ship with more full-featured software firewalls that can make it easier to block traffic going into and out of specific ports.

Alternatively, Glasswire is a useful tool that offers a full-featured firewall as well as the ability to tell which of your applications and devices are using the most bandwidth on your network. Glasswire recently came in handy to help me determine which application was using gigabytes worth of bandwidth each day (it turned out to be a version of Amazon Music’s software client that had a glitchy updater).

-Rule #5: Avoid IoT devices that advertise Peer-to-Peer (P2P) capabilities built-in. P2P IoT devices are notoriously difficult to secure, and research has repeatedly shown that they can be reachable even through a firewall remotely over the Internet because they’re configured to continuously find ways to connect to a global, shared network so that people can access them remotely. For examples of this, see previous stories here, including This is Why People Fear the Internet of Things, and Researchers Find Fresh Fodder for IoT Attack Cannons.

-Rule #6: Consider the cost. Bear in mind that when it comes to IoT devices, cheaper usually is not better. There is no direct correlation between price and security, but history has shown that devices toward the lower end of the price range for their class tend to have the most vulnerabilities and backdoors, with the least amount of vendor upkeep or support.

In the wake of last month’s guilty pleas by several individuals who created Mirai — one of the biggest IoT malware threats ever — the U.S. Justice Department released a series of tips on securing IoT devices.

One final note: I realize that the people who probably need to be reading these tips the most likely won’t ever know they need to care enough to act on them. But at least by taking proactive steps, you can reduce the likelihood that your IoT things will contribute to the global IoT security problem.

Planet DebianJonathan Dowland: Announcing "Just TODO It"

just TODO it UI

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host (projects.o-hand.com) has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub: https://github.com/jmtd/todo

CryptogramArticle from a Former Chinese PLA General on Cyber Sovereignty

Interesting article by Major General Hao Yeli, Chinese People's Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government's proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

Worse Than FailureIn $BANK We Trust

During the few months after getting my BS and before starting my MS, I worked for a bank that held lots of securities - and gold - in trust for others. There was a massive vault with multiple layers of steel doors, iron door grates, security access cards, armed guards, and signature comparisons (live vs pre-registered). It was a bit unnerving to get in there, so deep below ground, but once in, it looked very much like the Fort Knox vault scene in Goldfinger.

Someone planning things on a whiteboard

At that point, PCs weren't yet available to the masses and I had very little exposure to mainframes. I had been hired as an assistant to one of their drones who had been assigned to find all of the paper-driven-changes that had gone awry and get their books up to date.

To this end, I spent about a month talking to everyone involved in taking a customer order to take or transfer ownership of something, and processing the ledger entries to reflect the transaction. From this, I drew a simple flow chart, listing each task, the person(s) responsible, and the possible decision tree at each point.

Then I went back to each person and asked them to list all the things that could and did go wrong with transaction processing at their junction in the flow.

What had been essentially straight-line processing with a few small decision branches turned out to be enough to fill a 30-foot-long by 8-foot-high wall of undesirable branches. This became absolutely unmanageable on physical paper, and I didn't know of any charting programs on the mainframe at that time, so I wrote the whole thing up with an index card at each junction. The "good" path was in green marker, and everything else was yellow (one level of "wrong") or red (wtf-level of "wrong").

By the time it was fully documented, the wall-o-index-cards had become a running joke. I invited the people (who had given me all of the information) in to view their problems in the larger context, and verify that the problems were accurately documented.

Then management was called in to view the true scope of their problems. The reason that the books were so snafu'd was that there were simply too many manual tasks that were being done incorrectly, cascading to deeply nested levels of errors.

Once we knew where to look, it became much easier to track transactions backward through the diagram to the last known valid junction and push them forward until they were both correct and current. A rather large contingent of analysts were then put onto this task to fix all of the transactions for all of the customers of the bank.

It was about the time that I was to leave and go back to school that they were talking about taking the sub-processes off the mainframe and distributing detailed step-by-step instructions for people to follow manually at each junction to ensure that the work flow proceeded properly. Obviously, more manual steps would reduce the chance for errors to creep in!

A few years later when I got my MS, I ran into one of the people that was still working there and discovered that the more-manual procedures had not only not cured the problem, but that entirely new avenues of problems had cropped up as a result.


Google AdsenseReceiving your payment via EFT (Electronic Funds Transfer)


Electronic Funds Transfer (EFT) is our fastest, most secure, and environmentally friendly payment method. It is available across most countries and you can check if this payment method is available to you here.

To use this payment method we first need to verify your bank account to ensure that you will receive your payment. This involves entering specific bank account information and receiving a small test deposit.

Some of our publishers found this process confusing and we want to guide you through it. Our latest video will guide you through adding EFT as a payment method, from start to finish.

If you didn’t receive your test deposit, you can watch this video to understand why. If you have more questions, visit our Help Center.

Posted by: The AdSense Support Team

Planet DebianDirk Eddelbuettel: RcppMsgPack 0.2.1

An update of RcppMsgPack got onto CRAN today. It contains a number of enhancements Travers had been working on, as well as one thing CRAN asked us to do in making a suggested package optional.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack as well as clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.

Changes in version 0.2.1 (2018-01-15)

  • Some corrections and update to DESCRIPTION, README.md, msgpack.org.md and vignette (#6).

  • Update to c_pack.cpp and tests (#7).

  • More efficient packing of vectors (#8).

  • Support for timestamps and NAs (#9).

  • Conditional use of microbenchmark in tests/ as required for Suggests: package [CRAN request] (#10).

  • Minor polish to tests relaxing comparison of timestamp, and avoiding a few g++ warnings (#12 addressing #11).

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

LongNowStewart Brand Gives In-Depth and Personal Interview to Tim Ferriss

Tim Ferriss, who wrote The Four Hour Work Week and gave a Long Now talk on accelerated learning in 02011, recently interviewed Long Now co-founder Stewart Brand on his podcast, “The Tim Ferriss Show”. The interview is wide-ranging, in-depth, and among the most personal Brand has given to date. Over the course of nearly three hours, Brand touches on everything from the Whole Earth Catalog, why he gave up skydiving, how he deals with depression, his early experiences with psychedelics, the influence of Marshall McLuhan and Buckminster Fuller on his thinking, his recent CrossFit regimen, and the ongoing debate between artificial intelligence and intelligence augmentation. He also discusses the ideas and projects of The Long Now Foundation.

Brand frames The Long Now Foundation as a way to augment social intelligence:

The idea of the Long Now Foundation is to give encouragement and permission to society that is rewarded for thinking very, very rapidly, in business terms and, indeed, in scientific terms, of rapid turnaround, and getting inside the adversaries’ loop, move fast and break things, [to think long term]. Long term thinking might be proposing that some things you don’t want to break. They might involve moving slow, and steadily.

The Pace Layer diagram.

He introduces the pace layer diagram as a tool to approach global scale challenges:

What we’re proposing is there are a lot of problems, a lot of issues and a lot of quite wonderful things in that category of being big and slow moving and so I wound up with Brian Eno developing a pace layer diagram of civilization where there’s the fast moving parts like fashion and commerce, and then it goes slower when you get to infrastructure and then things move really slow in how governance changes, and then you go down to culture and language and religion move really slowly and then nature, the tectonic forces in climate change and so on move really big and slow. And what’s interesting about that is that the fast parts get all the attention, but the slow parts have all the power. And if you want to really deal with the powerful forces in the world, bear relation to seeing what can be done with appreciating and maybe helping adjust the big slow things.

Stewart Brand and ecosystem ecologist Elena Bennett during the Q&A of her November 02017 SALT Talk. Photo: Gary Wilson.

Ferriss admits that in the last few months he’s been pulled out of the current of long-term thinking by the “rip tide of noise,” and asks Brand for a “homework list” of SALT talks that can help provide him with perspective. Brand recommends Jared Diamond’s 02005 talk on How Societies Fail (And Sometimes Succeed), Matt Ridley’s 02011 talk on Deep Optimism, and Ian Morris’ 02011 talk on Why The West Rules (For Now).

Brand also discusses Revive & Restore’s efforts to bring back the Woolly Mammoth, and addresses the fear many have of meddling with complex systems through de-extinction.

Long-term thinking has figured prominently in Tim Ferriss’ podcast in recent months. In addition to his interview with Brand, Ferriss has also interviewed Long Now board member Kevin Kelly and Long Now speaker Tim O’Reilly.

Listen to the podcast in full here.

TEDTED debuts “Small Thing Big Idea” original video series on Facebook Watch

Today we’re debuting a new original video series on Facebook Watch called Small Thing Big Idea: Designs That Changed the World.

Each 3- to 4-minute weekly episode takes a brief but delightful look at the lasting genius of one everyday object – a pencil, for example, or a hoodie – and explains how it is so perfectly designed that it’s actually changed the world around it.

The series features some of design’s biggest names, including fashion designer Isaac Mizrahi, museum curator Paola Antonelli, and graphic designer Michael Bierut sharing their infectious obsession with good design.

To watch the first episode of Small Thing Big Idea (about the little-celebrated brilliance of subway maps!), tune in here, and check back every Tuesday for new episodes.

Cory DoctorowThe Man Who Sold the Moon, Part 02


Here’s part two of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

Planet DebianJamie McClelland: Procrastinating by tweaking my desktop with devilspie2

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor - where I like to keep all my instant or near-instant communications displayed at all times while I switch between workspaces on my smaller laptop screen as I move from email (workspace one), to shell (workspace two), to web (workspace three), etc.

When I'm not at the office, I only have my laptop screen - which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window, but I lived with it. But when gajim switched to gtk3 and my openbox window decorations disappeared, I couldn't even pin the window manually.)

Now I have the following simpler setup.

Manage hot plugging of my monitor.

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status 
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$ 

Create a udev rule that runs every time the monitor is plugged in or unplugged, placing the monitor to the right of my laptop LCD when it is present.

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules 
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$ 

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust 
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And,it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1 
monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")  

XAUTHORITY=/home/jamie/.Xauthority
if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in   
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi  
0 jamie@turkey:~$
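If you try something similar and the rule doesn't seem to fire, udev may need to be told about the new rules file explicitly; the exact behaviour depends on the udev version, but roughly:

# reload the rules, then watch events while plugging the monitor in and out
sudo udevadm control --reload
sudo udevadm monitor --environment --udev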

Move windows into place.

So far, this handles ensuring the monitor is activated and placed in the right position. But, nothing has changed in my workspace.

Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end


==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it if we have the monitor.
if string.find(name, "Gajim: conversations.im") then
  if monitor then
    set_window_geometry(1931,31,590,1025);
    pin_window();
  else
    set_window_workspace(4);
    set_window_geometry(676,31,676,725);
    unpin_window();
  end
end

==> /home/jamie/.config/devilspie2/grunt.lua <==
-- grunt is the window I use to connect via irc. I typically connect to
-- grunt via a terminal called spade, which is opened using a-terminal-yoohoo
-- so that bell actions cause a notification. The window is called spade if I
-- just opened it but usually changes names to grunt after I connect via autossh
-- to grunt. 
--
-- If no monitor, put spade in workspace 2, if monitor, then pin it to all
-- workspaces and maximize it vertically.

if instance == "urxvt" then
  -- When we launch, the terminal is called spade, after we connect it
  -- seems to get changed to jamie@grunt or something like that.
  if name == "spade" or string.find(name, "grunt:") then
    if monitor then
      set_window_geometry(1365,10,570,1025);
      set_window_workspace(3);
      -- maximize_vertically();
      pin_window();
    else
      set_window_geometry(677,10,676,375);
      set_window_workspace(2);
      unpin_window();
    end
  end
end

==> /home/jamie/.config/devilspie2/terminals.lua <==
-- Note - these will typically only work after I start the terminals
-- for the first time because their names seem to change.
if instance == "urxvt" then
  if name == "heart" then
    set_window_geometry(0,10,676,375);
  elseif name == "spade" then
    set_window_geometry(677,10,676,375);
  elseif name == "diamond" then
    set_window_geometry(0,376,676,375);
  elseif name == "clover" then
    set_window_geometry(677,376,676,375);
  end
end

==> /home/jamie/.config/devilspie2/zimbra.lua <==
-- Look for my zimbra firefox window. Shows support queue.
if string.find(name, "Zimbra") then
  if monitor then
    unmaximize();
    set_window_geometry(2520,10,760,1022);
    pin_window();
  else
    set_window_workspace(5);
    set_window_geometry(0,10,676,375);
    -- Zimbra can take up the whole window on this workspace.
    maximize();
    unpin_window();
  end
end

And lastly, it is started (and restarted) with:

0 jamie@turkey:~$ cat ~/.config/systemd/user/devilspie2.service 
[Unit]
Description=Start devilspie2, program to place windows in the right locations.

[Service]
ExecStart=/usr/bin/devilspie2

[Install]
WantedBy=multi-user.target
0 jamie@turkey:~$ 

I have this bound to a key combination that I hit every time I plug in or unplug my monitor.
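The restart itself is just a one-liner; assuming the unit file above has been picked up by systemd, something like:

systemctl --user daemon-reload
systemctl --user restart devilspie2.service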

CryptogramJim Risen Writes about Reporting Government Secrets

Jim Risen writes a long and interesting article about his battles with the US government and the New York Times to report government secrets.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #142

Here's what happened in the Reproducible Builds effort between Sunday December 31 and Saturday January 13 2018:

Media coverage

Development and fixes in key packages

Chris Lamb implemented two reproducibility checks in the lintian Debian package quality-assurance tool:

  • Warn about packages that ship Hypothesis example files. (#886101, report)
  • Warn about packages that override dh_fixperms without calling dh_fixperms as this makes the build vary depending on the current umask(2). (#885910, report)

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 43 have been updated and 76 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been added:

The notes of one issue type were updated:

  • build_dir_in_documentation_generated_by_doxygen: 1, 2

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adam Borowski (2)
  • Adrian Bunk (16)
  • Niko Tyni (1)
  • Chris Lamb (6)
  • Jonas Meurer (1)
  • Simon McVittie (1)

diffoscope development

disorderfs development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Worse Than FailureWhy Medical Insurance Is So Expensive

VA One AE Preliminary Project Timeline 2001-02

At the end of 2016, Ian S. accepted a contract position at a large medical conglomerate. He was joining a team of 6 developers on a project to automate what was normally a 10,000-hour manual process of cross-checking spreadsheets and data files. The end result would be a Django server offering a RESTful API and MySQL backend.

"You probably won't be doing anything much for the first week, maybe even the first month," Ian's interviewer informed him.

Ian ignored the red flag and accepted the offer. He needed the experience, and the job seemed reasonable enough. Besides, there were only 2 layers of management to deal with: his boss Daniel, who led the team, and his boss' boss Jim.

The office was in a lavish downtown location. The first thing Ian learned was that nobody had assigned desks. Each day, everyone had to clean out their desks and return their computers and peripherals to lockers. Because team members needed to work closely together, everyone claimed the same desk every day anyway. This policy only resulted in frustration and lost time.

As if that weren't bad enough, the computers were also heavily locked down. Ian had to go through the company's own "app store" to install anything. This was followed by an approval process that could take a few days based on how often Jim went through his pending approvals. The one exception was VMWare Workstation. Because this app cost money, it involved a 2-week approval process. In the middle of December, everyone was off on holiday, making it impossible for Ian's team to get approvals or talk to anyone helpful. Thus Ian's only contributions that month were a couple of Visio diagrams and a Django "hello world" that Daniel had requested. (It wasn't as if Daniel could check his work, though. He didn't know anything about Python, Django, REST, MySQL, MVC, or any other technology relevant to the project.)

The company provided Ian a copy of Agile for Dummies, which seemed ironic in retrospect, as the team was forced to spend the entire first week of January breaking the next 6 months into 2-week sprints. They weren't allowed to leave sprints empty, and had to allocate 36-40 hours each week. They could only make stories for features, so no time was penciled in for bug fixes or paying off technical debt. These stories were then chopped into meaningless pieces ("Part 1", "Part 2", etc.) so they'd fit into their arbitrary timelines.

"This is why medical insurance is so expensive", Daniel remarked at one point, either trying to lighten the mood or stave off his pending insanity.

Later in January, Ian arrived one morning to find the rest of his team standing around confused. Their project was now dead at the hands of a VP who'd had it in for Jim. The company had a tenure process, so the VP couldn't just fire Jim, but he could make his life miserable. He reassigned all of Jim's teams that he didn't outright terminate, exiled Jim to New Jersey, and gave him nothing to do but approve timesheets. Meanwhile, Daniel was told not to bother coming in again.

"Don't worry," the powers-that-be said. "We don't usually terminate people here."

Ian's gapingly empty schedule was filled with a completely different task: "shadowing" someone in another state by screen-sharing and watching them work. The main problem with this arrangement was that the person Ian was shadowing was a systems analyst, not a programmer.

Come February, Ian's new team was also terminated.

"We don't have a culture of layoffs," the powers-that-be assured him.

They were still intent on shoving Ian into a systems analyst position despite his requisite lack of experience. It was at that point that he gave up and moved on. He later heard that within a few months, the entire division had been fired.


Don MartiRemove all the tracking widgets? Maybe not.

Good one from Mark Pilipczuk: Publisher Advice From a Buyer.

Remove all the tracking widgets from your site. That Facebook “Like” button only serves to exfiltrate your valuable data to an entity that doesn’t have your best interests at heart. If you’ve got a valuable audience, why would you want to help the ad tech industry which promises “I can find the same and bigger audience over here for $2 CPM, so don’t buy from the publisher?” Sticking your own head in the noose is never a good idea.

That advice makes sense for the Facebook "like button." That button is just a data shoplifter. The others, though? All those extra trackers come in as side effects of ad deals, and they're likely to be contractually required to make ads on the site saleable.

Yes, those trackers feed bots and data leakage, and yes, they're even terrible at fighting adfraud. Augustine Fou points out that Fraud filters don't work. "In some cases it's worse when filter is on."

So in an ideal world you would be able to pull all the third-party trackers, but as far as day-to-day operations go, user tracking is a Chesterton's Fence problem. What happens if a legit site unilaterally takes down the third-party trackers? All the targeted ad impressions that would have given that site a (small) payment end up going to bots.

So what can a site do? Understand that the real fix has to happen on the browser end, and nudge the users to either make their browsers less data-leaky, or switch to browsers that are leakage-resistant out of the box.

Start A/B testing some notifications to remind users to turn on tracking protection.

  • Can you get users who are already choosing "Do Not Track" to turn on real protection if you inform them that sites ignore their DNT choice?

  • If a user is running an ad blocker with a paid whitelisting scheme, can you inform them about it to get them to switch to a better tool, or at least add a second layer of protection that limits the damage that paid whitelisting can do?

  • When users visit privacy pages or opt-out of a marketing program, are they also willing to check their browser privacy settings?

Every site's audience is different. It's hard to know in advance how users will respond to different calls to action to turn up their privacy and create a win-win for legit sites and legit brands. We do know that users are concerned and confused about web advertising, and the good news is that the JavaScript needed to collect data and administer nudges is as easy to add as yet another tracker.

More on what sites can do, that might be more effective than just removing trackers: What The Verge can do to help save web advertising

Planet DebianBenjamin Mako Hill: OpenSym 2017 Program Postmortem

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview

Statistics
Papers submitted 44
Papers accepted 20
Acceptance rate 45%
Posters submitted 2
Posters presented 9
Associate Chairs 8
PC Members 59
Authors 108
Author countries 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

distribution of papers across topics with breakdown by accept/poster/reject

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to General Chair Lorraine Morgan’s involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind in that reviewers’ identities are hidden but authors identities are not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where papers that are seen as borderline will be scored as 0. Reviewers scored papers using full-point increments.

scores for each paper submitted to opensym 2017: average, distribution, etc

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.

Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower Unchanged Higher
6 24 10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

histogram of paper lengths for final accepted papers

In the end 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.

Bidding

Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review went to someone who had bid “yes” or “maybe” for the paper in question and the vast majority went to people that had bid “yes.” However, this comes with one major proviso: people that did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that in your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people that had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.

Conclusions

The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference which doesn’t allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.


This blog post was originally posted on the Community Data Science Collective blog.

Planet DebianRussell Coker: More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn a light on the keyboard. This is really good for typing at night. While I can touch type the small keyboard on a laptop makes it a little difficult so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board would cost much more than the laptop is worth and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2] and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade, it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

Planet DebianAxel Beckert: Tex Yoda II Mechanical Keyboard with Trackpoint

Here’s a short review of the Tex Yoda II Mechanical Keyboard with Trackpoint, a pointer to the next Swiss Mechanical Keyboard Meetup and why I ordered a $300 keyboard with less keys than a normal one.

Short Review of the Tex Yoda II

Pro
  • Trackpoint
  • Cherry MX Switches
  • Compact but heavy aluminium case
  • Backlight (optional)
  • USB C connector and USB A to C cable with angled USB C plug
  • All three types of Thinkpad Trackpoint caps included
  • Configurable layout with nice web-based configurator (might be opensourced in the future)
  • Fn+Trackpoint = scrolling (not further configurable, though)
  • Case not clipped, but screwed
  • Backlight brightness and Trackpoint speed configurable via key bindings (usually Fn and some other key)
  • Default Fn keybindings as side printed and backlit labels
  • Nice packaging
Contra
  • It’s only a 60% Keyboard (I prefer TKL) and the two common top rows are merged into one, switched with the Fn key.
  • Cursor keys by default (and labeled) on the right side (mapped to Fn + WASD) — maybe good for games, but not for me.
  • ~ on Fn-Shift-Esc
  • Occassionally backlight flickering (low frequency)
  • Pulsed LED light effect (i.e. high frequency flickering) on all but the lowest brightness level
  • Trackpoint is very sensitive even in the slowest setting — use Fn+Q and Fn+E to adjust the trackpoint speed (“tps”)
  • No manual included or (obviously) downloadable.
  • Only the DIP switches 1-3 and 6 are documented, 4 and 5 are not. (Thanks gismo for the question about them!)
  • No more included USB hub like the Tex Yoda I had or the HHKB Lite 2 (USB 1.1 only) has.
My Modifications So Far
Layout Modifications Via The Web-Based Yoda 2 Configurator
  • Right Control and Menu key are Right and Left cursors keys
  • Fn+Enter and Fn+Shift are Up and Down cursor keys
  • Right Windows key is the Compose key (done in software via xmodmap)
  • Middle mouse button is of course a middle click (not Fn as with the default layout).
Other Modifications
  • Clear dampening o-rings (clear, 50A) under each key cap for a more silent typing experience
  • Braided USB cable

Next Swiss Mechanical Keyboard Meetup

On Sunday, the 18th of February 2018, the 4th Swiss Mechanical Keyboard Meetup will happen, this time at ETH Zurich, building CAB, room H52. I’ll be there with at least my Tex Yoda II and my vintage Cherry G80-2100.

Why I ordered a $300 keyboard

(JFTR: It was actually USD $299 plus shipping from the US to Europe and customs fee in Switzerland. Can’t exactly find out how much of shipping and customs fee were actually for that one keyboard, because I ordered several items at once. It’s complicated…)

I always was and still am a big fan of Trackpoints, as commonly found on IBM and Lenovo Thinkpads as well as laptops from a few other manufacturers.

For a while I just used Thinkpads as my private everyday computer, first a Thinkpad T61, later a Thinkpad X240. At some point I also wanted a keyboard with Trackpoint on my workstation at work. So I ordered a Lenovo Thinkpad USB Keyboard with Trackpoint. Then I decided that I wanted a permanent workstation at home again and ordered two more such keyboards: One for the workstation at home, one for my ASUS EeeBox running Debian GNU/kFreeBSD (not affected by Meltdown or Spectre, yay! :-) which I often took with me to staff Debian booths at events. There, a compact keyboard with a built-in pointing device was perfect.

Then I met the guys from the Swiss Mechanical Keyboard Meetup at their 3rd meetup (pictures) and knew: I need a mechanical keyboard with Trackpoint.

IBM built one Model M with Trackpoint, the M13, but they’re hard to get. For example, ClickyKeyboards sells them, but doesn’t publish the price tag. :-/ Additionally, back then only two mouse buttons were usual, and I really need the third mouse button for Unix-style pasting.

Then there’s the Unicomp Endura Pro, the legit successor of the IBM Model M13, but it’s only available in an IMHO very ugly color combination: light grey key caps in a black case. And they want approximately 50% of the price as shipping costs (to Europe). Additionally it lacks some other nice keyboard features I have come to love: narrow bezels are nice, and keyboards with a backlight (like the Thinkpad X240 and later have) have their advantages, too. So … no.

Soon I found what I was looking for: the Tex Yoda, a nice, modern and quite compact mechanical keyboard with Trackpoint. Unfortunately it has been sold out for quite a few years, and more than 5000 people on Massdrop were waiting for its reintroduction.

And then the unexpected happened: the Tex Yoda II was announced. I knew I had to get one. From then on, the main question was when and where it would be available. To my surprise it was not on Massdrop but at a rather normal dealer, MechanicalKeyboards.com.

At that time a friend heard me talking about mechanical keyboards and about being unsure which keyboard switches I should order. He offered to lend me his KBTalking ONI TKL (Ten Key Less) keyboard with Cherry MX Brown switches for a while, which was great because, in theory, MX Brown switches were likely the most fitting ones for me. He also gave me two other non-functional keyboards with other Cherry MX switch colors (variants) for comparison. As another keyboard to compare I had my programmable Cherry G80-2100 from the early ’90s with vintage Cherry MX Black switches. Yet another keyboard to compare with is my Happy Hacking Keyboard (HHKB) Lite 2 (PD-KB200B/U), which I got as a gift a few years ago. While the HHKB was once a status symbol amongst hackers and system administrators, the old models (like this one) only have membrane-type keyboard switches. (They nevertheless still seem to be built, but are only sold in Japan.)

I noticed that I was quickly able to type faster with the Cherry MX Brown switches and the TKL layout than with the classic Thinkpad layout and its rubber dome switches or with the HHKB. So two things became clear:

  • At least for now I want Cherry MX Brown switches.
  • I want a TKL (ten key less) layout, i.e. one without the number block but with the cursor block. As with the Lenovo Thinkpad USB Keyboards and the HHKB, I really like the cursor keys being in the easy-to-reach lower right corner. The number pad just gets in the way of that.

Unfortunately the Tex Yoda II comes without that cursor block. But since it otherwise fit my wishlist perfectly (Trackpoint, Cherry MX Brown switches available, backlight, narrow bezels, heavy weight), I had to buy one once it became available.

So in early December 2017, I ordered a Tex Yoda II White Backlit Mechanical Keyboard (Brown Cherry MX) at MechanicalKeyboards.com.

Because I was nevertheless keen on a TKL-sized keyboard I also ordered a Deck Francium Pro White LED Backlit PBT Mechanical Keyboard (Brown Cherry MX) which has an ugly font on the key caps, but was available for a reduced price at that time, and the controller got quite good reviews. And there was that very nice Tai-Hao 104 Key PBT Double Shot Keycap Set - Orange and Black, so the font issue was quickly solved with keycaps in my favourite colour: orange. :-)

The package arrived in early January. The aluminium case of the Tex Yoda II was even nicer than I expected. Unfortunately they sent me a Deck Hassium full-size keyboard instead of the TKL-sized Deck Francium I wanted. But the support at MechanicalKeyboards.com was very helpful, and I assume I can get the keyboard exchanged at no cost.

Krebs on SecuritySerial SWATter Tyler “SWAuTistic” Barriss Charged with Involuntary Manslaughter

Tyler Raj Barriss, a 25-year-old serial “swatter” whose phony emergency call to Kansas police last month triggered a fatal shooting, has been charged with involuntary manslaughter and faces up to eleven years in prison.

Tyler Raj Barriss, in an undated selfie.

Barriss’s online alias — “SWAuTistic” — is a nod to a dangerous hoax known as “swatting,” in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with potentially deadly force.

Barriss was arrested in Los Angeles this month for alerting authorities in Kansas to a fake hostage situation at an address in Wichita, Kansas on Dec. 28, 2017.

Police responding to the alert surrounded the home at the address Barriss provided and shot 28-year old Andrew Finch as he emerged from the doorway of his mother’s home. Finch, a father of two, was unarmed, and died shortly after being shot by police.

The officer who fired the shot that killed Finch has been identified as a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Following his arrest, Barriss was extradited to a Wichita jail, where he had his first court appearance via video on Friday. The Los Angeles Times reports that Barriss was charged with involuntary manslaughter and could face up to 11 years and three months in prison if convicted.

The moment that police in Kansas fired a single shot that killed Andrew Finch (in doorway of his mother’s home).

Barriss also was charged with making a false alarm — a felony offense in Kansas. His bond was set at $500,000.

Sedgwick County District Attorney Marc Bennett told the LA Times that Barriss made the fake emergency call at the urging of several other individuals, and that authorities have identified other “potential suspects” who may also face criminal charges.

Barriss sought an interview with KrebsOnSecurity on Dec. 29, just hours after his hoax turned tragic. In that interview, Barriss said he routinely called in bomb threats and fake hostage situations across the country in exchange for money, and that he began doing it after his own home was swatted.

Barriss told KrebsOnSecurity that he felt bad about the incident, but that it wasn’t he who pulled the trigger. He also enthused about the rush that he got from evading police.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” he wrote in an instant message conversation with this author.

In a jailhouse interview Friday with local Wichita news station KWCH, Barriss said he feels “a little remorse for what happened.”

“I never intended for anyone to get shot and killed,” he reportedly told the news station. “I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed.”

The Wichita Eagle reports that Barriss also has been charged in Calgary, Canada with public mischief, fraud and mischief for allegedly making a similar swatting call to authorities there. However, no one was hurt or killed in that incident.

Barriss was convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. He was sentenced to two years in prison for that stunt, but was released in January 2017.

Using his SWAuTistic alias, Barriss claimed credit for more than a hundred fake calls to authorities across the nation. In an exclusive story published here on Jan. 2, KrebsOnSecurity dissected several months’ worth of tweets from SWAuTistic’s account before those messages were deleted. In those tweets, SWAuTistic claimed responsibility for calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

In his public tweets, SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But in private online messages shared by his online friends and acquaintances SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.

,

Planet DebianSteinar H. Gunderson: Retpoline-enabled GCC

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it doesn't take into account the last-minute change of the retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also to assess the cost of retpolines (-mindirect-branch=thunk) in any particularly performance-sensitive userspace code.

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.

Krebs on SecurityCanadian Police Charge Operator of Hacked Password Service Leakedsource.com

Canadian authorities have arrested and charged a 27-year-old Ontario man for allegedly selling billions of stolen passwords online through the now-defunct service Leakedsource.com.

The now-defunct Leakedsource service.

On Dec. 22, 2017, the Royal Canadian Mounted Police (RCMP) charged Jordan Evan Bloom of Thornhill, Ontario for trafficking in identity information, unauthorized use of a computer, mischief to data, and possession of property obtained by crime. Bloom is expected to make his first court appearance today.

According to a statement from the RCMP, “Project Adoration” began in 2016 when the RCMP learned that LeakedSource.com was being hosted by servers located in Quebec.

“This investigation is related to claims about a website operator alleged to have made hundreds of thousands of dollars selling personal information,” said Rafael Alvarado, the officer in charge of the RCMP Cybercrime Investigative Team. “The RCMP will continue to work diligently with our domestic and international law enforcement partners to prosecute online criminality.”

In January 2017, multiple news outlets reported that unspecified law enforcement officials had seized the servers for Leakedsource.com, perhaps the largest online collection of usernames and passwords leaked or stolen in some of the worst data breaches — including three billion credentials for accounts at top sites like LinkedIn and Myspace.

Jordan Evan Bloom. Photo: RCMP.

LeakedSource in October 2015 began selling access to passwords stolen in high-profile breaches. Enter any email address on the site’s search page and it would tell you if it had a password corresponding to that address. However, users had to select a payment plan before viewing any passwords.

The RCMP alleges that Jordan Evan Bloom was responsible for administering the LeakedSource.com website, and earned approximately $247,000 from trafficking identity information.

A February 2017 story here at KrebsOnSecurity examined clues that LeakedSource was administered by an individual in the United States.  Multiple sources suggested that one of the administrators of LeakedSource also was the admin of abusewith[dot]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

That story traced those clues back to a Michigan man who ultimately admitted to running Abusewith[dot]us, but who denied being the owner of LeakedSource.

The RCMP said it had help in the investigation from The Dutch National Police and the FBI. The FBI could not be immediately reached for comment.

LeakedSource was a curiosity to many, and for some journalists a potential source of news about new breaches. But unlike services such as BreachAlarm and HaveIBeenPwned.com, LeakedSource did nothing to validate users.

This fact, critics charged, showed that the proprietors of LeakedSource were purely interested in making money and helping others pillage accounts.

Since the demise of LeakedSource.com, multiple, competing new services have moved in to fill the void. These services — which are primarily useful because they expose when people re-use passwords across multiple accounts — are popular among those involved in a variety of cybercriminal activities, particularly account takeovers and email hacking.

CryptogramFighting Ransomware

No More Ransom is a central repository of keys and applications for ransomware, so people can recover their data without paying. It's not complete, of course, but is pretty good against older strains of ransomware. The site is a joint effort by Europol, the Dutch police, Kaspersky, and McAfee.

Worse Than FailureRepresentative Line: Tern Back

In the process of resolving a ticket, Pedro C found this representative line, which has nothing to do with the bug he was fixing, but was just something he couldn’t leave un-fixed:

$categories = (isset($categoryMap[$product['department']]) ?
                            (isset($categoryMap[$product['department']][$product['classification']])
                                        ?
                                    $categoryMap[$product['department']][$product['classification']]
                                        : NULL) : NULL);

Yes, the venerable ternary expression, used once again to obfuscate and confuse.

It took Pedro a few readings before he even understood what it did, and then it took him a few more readings to wonder about why anyone would solve the problem this way. Then, he fixed it.

$department = $product['department'];
$classification = $product['classification'];
$categories = NULL;
//ED: isset never triggers an error on an undefined expression, but simply returns false, because PHP
if( isset($categoryMap[$department][$classification]) ) { 
    $categories = $categoryMap[$department][$classification];
}
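(As an aside, and purely for comparison rather than as part of Pedro's fix: in a language with chained safe lookups, the same "nested lookup with a default" collapses into a single expression. A hypothetical Python rendering of the idea:)

# Hypothetical Python analogue of the cleaned-up lookup (for comparison only):
# chained dict.get() calls return None when either key is missing,
# with no error and no nested ternaries.
category_map = {"toys": {"outdoor": ["frisbee", "kite"]}}
product = {"department": "toys", "classification": "indoor"}

categories = category_map.get(product["department"], {}).get(product["classification"])
print(categories)  # None: the classification key is absent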

He submitted the change for code-review, but it was kicked back. You see, Pedro had fixed the bug, which had a ticket associated with it. There were to be no code changes without a ticket from a business user, and since this change wasn’t strictly related to the bug, he couldn’t submit this change.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianCyril Brulebois: Quick recap of 2017

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Partial reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how snapshot.debian.org helped save the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, no time constraints when I presented that during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved in early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports.

Planet DebianDaniel Pocock: RHL'18 in Saint-Cergue, Switzerland

RHL'18 was held at the Centre du Vallon in St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (Subscribe to rhl-annonces on lists.swisslinux.org for a reminder email.)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

,

Planet DebianSean Whitton: lastjedi

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others.

Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the Force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this are the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the good fight, the latter of which is summed up by the binary sunset scene from A New Hope. Tied up with this desire is Luke’s love for his friends; this is an important strength of his, but Yoda has a point when he says that the Jedi training must be completed if Luke is to be ultimately successful. While the Yoda subplot and what happens at Cloud City could be independently interesting, it is only this integration that enables the film to be great. The heart of the integration is perhaps the Dark Side Cave, where two things are brought together: the challenge of developing the relationship with oneself possessed by a Jedi, and the threat posed by Darth Vader.

In the Last Jedi, Rey just keeps saying that the galaxy needs Luke, and eventually Luke relents when Kylo Ren shows up. There was so much more that could have been done with this! What is it about Rey that enables her to persuade Luke? What character strengths of hers are able to respond adequately to Luke’s fear of the power of the Force, and doubt regarding his abilities as a teacher? Exploring these things would have connected together the rebel evacuation, Rey’s character arc and Luke’s character arc, but these three were basically independent.

(Possibly I need to watch the cave scene from The Last Jedi again, and think harder about it.)

Planet DebianDirk Eddelbuettel: digest 0.6.14

Another small maintenance release, version 0.6.14, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.
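The underlying idea (serialize an object, hash the bytes, and compare digests instead of walking the objects element by element) is easy to sketch outside R as well. Here is a hypothetical Python analogue, purely as an illustration and not the digest API:

# Hypothetical illustration of the digest idea in Python (not the R API):
# serialize an object, hash the bytes, and compare digests rather than
# comparing the objects themselves.
import hashlib
import pickle

def object_digest(obj, algorithm="sha256"):
    return hashlib.new(algorithm, pickle.dumps(obj)).hexdigest()

a = {"x": [1, 2, 3], "y": "hello"}
b = {"x": [1, 2, 3], "y": "hello"}
print(object_digest(a) == object_digest(b))  # True: equal content, equal digest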

Just like release 0.6.13 a few weeks ago, this release accommodates another request by Luke and Tomas and changes two uses of NAMED to MAYBE_REFERENCED, which helps in the transition to the new reference counting model in R-devel. Thierry also spotted a minor wart in how sha1() tested the type of matrices and corrected that, and I converted a few references to https URLs and corrected one now-dead URL.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMario Lang: I pushed an implementation of myself to GitHub

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2D instruction "space" to traditional code. MarioLANG does not only allow for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows for loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2D programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.
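To make that more concrete, here is a deliberately stripped-down toy (hypothetical, and not MarioLANG's actual semantics): a one-dimensional instruction tape where one instruction reverses the walking direction. Even in this tiny model, which instructions run, and how often, depends on where the pointer enters and which way it is heading:

# Deliberately simplified toy, NOT MarioLANG semantics: a 1-D instruction tape
# with '+' (increment a counter) and '@' (reverse the walking direction).
# The result depends on the entry point and the initial heading, which is the
# property that makes translating such code to straight-line instructions hard.
def run(tape, start=0, direction=1, max_steps=100):
    counter, ip = 0, start
    for _ in range(max_steps):          # guard against endless bouncing
        if not 0 <= ip < len(tape):
            break                       # walked off the tape: program ends
        if tape[ip] == "+":
            counter += 1
        elif tape[ip] == "@":
            direction = -direction      # reverse the instruction pointer
        ip += direction
    return counter

print(run("++@", start=0, direction=1))   # executes +,+,@ then walks back over +,+ again: 4
print(run("++@", start=2, direction=-1))  # enters on '@', reverses to the right and leaves: 0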

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.

,

Don MartiEasy question with too many wrong answers

Content warning: Godwin's Law.

Here's a marketing question that should be easy.

How much of my brand's ad budget goes to Nazis?

Here's the right answer.

Zero.

And here's a guy who still seems to be having some trouble answering it: Dear Google (GOOG): Please stop using my advertising dollars to monetize hate speech.

If you're responsible for a brand and somewhere in the mysterious tubes of adtech your money is finding its way to Nazis, what is the right course of action?

One wrong answer is to write a "please help me" letter to a company that will just ignore it. That's just admitting to knowingly sending money to Nazis, which is clearly wrong.

Here's another wrong idea, from the upcoming IAB Annual Leadership Meeting session on "brand safety" (which is the nice, sanitary professional-sounding term for "trying not to sponsor Nazis, but not too hard.")

Threats to brand safety arise internally and externally, in your control and out of your control—and the stakes have never been higher. Learn how to minimize brand safety risks and maximize odds of survival when your brand takes a hit (spoiler alert: overreacting is as bad as underreacting). Best Buy and Starcom share best practices based on real-world encounters with brand safety issues.

Really, people? Overreacting is as bad as underreacting? The IAB wants you to come to a deluxe conference about how it's fine to send a few bucks to Nazis here and there as long as it keeps their whole adtech/adfraud gravy train running on time.

I disagree. If Best Buy is fine with (indirectly of course) paying the occasional Nazi so that the IAB companies can keep sending them valuable eyeballs from the cheapest possible sites, then I can shop elsewhere.

Any nationalist extremist movement has its obvious supporters, who wear the outfits and get the tattoos and go march in the streets and all that stuff, and also the quiet supporters, who come up with the money and make nice with the powers that be. The supporters who can keep it deniable.

Can I, as a potential customer from the outside, tell the difference between quiet Nazi supporters and people who are just bad at online advertising and end up supporting Nazis by mistake? Of course not. Do I care? Of course not. If you're not willing to put the basic "don't pay Nazis to do Nazi stuff" rule ahead of a few ad clicks, I don't want your brand anyway. And I'll make sure to install and use the tracking protection tools that help keep my good data away from bad sites.

,

CryptogramFriday Squid Blogging: Japanese "Dude Food" Includes Squid

This seems to be a trend.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesScreen Capping the News Shows Different Stories for Different Folks

During a year marked by social and political turmoil, the media has found itself under scrutiny from politicians, academics, the general public, and increasingly self-reflexive journalists and editors. Fake news has entered our lexicon both as a form of political meddling from foreign powers and a dismissive insult directed towards any less-than-complimentary news coverage of the current administration.

Paying attention to where people are getting their news and what that news is telling them is an important step to understanding our increasingly polarized society and our seeming inability to talk across political divides. The insight can also help us get at those important and oh-too common questions of “how could they think that?!?” or “how could they support that politician?!?”

My interest in this topic was sparked a few months ago when I began paying attention to the top four stories and single video that magically appear whenever I swipe left on my iPhone. The stories compiled by the Apple News App provide a snapshot of what the dominant media sources consider the newsworthy happenings of the day. After paying an almost obsessive attention to my newsfeed for a few weeks—and increasingly annoying my friends and colleagues by telling them about the compelling patterns I was seeing—I started to take screenshots of the suggested news stories on a daily or twice daily basis. The images below were gathered over the past two months.

It is worth noting that the Apple News App adapts to a user’s interests to ensure that it provides “the stories you really care about.” To minimize this complicating factor I avoided clicking on any of the suggested stories and would occasionally verify that my news feed had remained neutral through comparing the stories with other iPhone users whenever possible.

Some of the differences were to be expected—People simply cannot get enough of celebrity pregnancies and royal weddings. The Washington Post, The New York Times, and CNN frequently feature stories that are critical of the current administration, and Fox News is generally supportive of President Trump and antagonistic towards enemies of the Republican Party.

(Click to Enlarge)

However, there are two trends that I would like to highlight:

1) A significant number of Fox News headlines offer direct critiques of other media sites and their coverage of key news stories. Rather than offering an alternative reading of an event or counter-coverage, the featured story undercuts the journalistic work of other news sources by highlighting errors and making accusations of partisan motivations. In some cases, this even takes the form of attacking left-leaning celebrities as proxies for a larger movement or idea. Neither of these tactics was employed by any of the other news sources during my observation period.

(Click to Enlarge)

2) Fox News often featured coverage of vile, treacherous, or criminal acts committed by individuals as well as horrifying accidents. This type of story stood out both due to the high frequency and the juxtaposition to coverage of important political events of the time—murderous pigs next to Senate resignations and sexually predatory high school teachers next to massively destructive California wildfires. In a sense, Fox News is effectively cultivating an “asociological” imagination by shifting attention to the individual rather than larger political processes and structural changes. In addition, the repetitious coverage of the evil and devious certainly contributes to a fear-based society and confirms the general loss of morality and decline of conservative values.

(Click to Enlarge)

It is worth noting that this move away from the big stories of the day also occurs through a surprising amount of celebrity coverage.

(Click to Enlarge)

From the screen captures I have gathered over the past two months, it seems apparent that we are not just consuming different interpretations of the same event, but rather we are hearing different stories altogether. This effectively makes the conversation across political affiliation (or more importantly, news source affiliation) that much more difficult if not impossible.

I recommend taking time to look through the images that I have provided on your own. There are a number of patterns I did not discuss in this piece for the sake of brevity and even more to be discovered. And, for those of us who spend our time in the front of the classroom, the screenshot approach could provide the basis for a great teaching activity where the class collectively takes part in both the gathering of data and conducting the analysis. 

Kyle Green is an Assistant Professor of Sociology at Utica College. He is a proud TSP alumnus and the co-author /co-host of Give Methods a Chance.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Hamilton, Hamilton, Hamilton, Hamilton

"Good news! I can get my order shipped anywhere I want...So long as the city is named Hamilton," Daniel wrote.

 

"I might have forgotten my username, but at least I didn't forget to change the email template code in Production," writes Paul T.

 

Jamie M. wrote, "Using Lee Hecht Harrison's job search functionality is very meta."

 

"When I decided to go to Cineworld, wasn't sure what I wanted to watch," writes Andy P., "The trailer for 'System Restore' looks good, but it's got a bad rating on Rotten Tomatoes."

 

Mattias writes, "I get the feeling that Visual Studio really doesn't like this error."

 

"While traveling in Philadelphia's airport, I was pleased to see Macs competing in the dumb error category too," Ken L. writes.

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers and hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was an opportunity to think back over the bank’s own 200-year legacy — and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur?” Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The results: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” Only problem: kids today, he says, often don’t tend to agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal then the program refunds that amount of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins asks that we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s First People. Aboriginal Australian David Unaipon (1862-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Indigenous Australians are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first Indigenous IT honors student at the University of Technology Sydney, and he makes the case that tech-savvy Indigenous Australians are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the First Peoples — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai‘s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by prejudice shown towards her by those who saw her as Asian. When in school, she says, fellow students didn’t want to associate with her in classrooms, while she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed an outsider, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as much as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue collar positions or standard white collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. "Prepare your family to be okay with change, as uncomfortable as it may be," he says. "We'll likely be switching careers far more frequently in the near future."

Everything’s gonna be alright. After a quick break and a breather, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played in session two with a sweet, uplifting rock ballad about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as low as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design all aspects of your future home (even down to attention to price and environmental impact) and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and makes more sustainable dwellings; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says, “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best if everyone understands the leader’s objective exactly as well as they understand their own role, not just their individual part to play but also the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?

CryptogramFingerprinting Digital Documents

In this era of electronic leakers, remember that zero-width spaces and homoglyph substitution can fingerprint individual instances of files.
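For instance, a zero-width-space fingerprint can be as simple as the following sketch (hypothetical code, not any particular tool): each recipient gets a copy carrying a different pattern of invisible characters, and a leaked copy can be matched back to the pattern it carries.

# Hypothetical sketch of zero-width-space fingerprinting (not any specific tool).
# Each recipient's copy carries a different, invisible bit pattern; finding the
# pattern in a leaked copy identifies which copy it was.
ZWSP = "\u200b"   # zero-width space: counts as a 1 bit
ZWNJ = "\u200c"   # zero-width non-joiner: counts as a 0 bit

def embed(text, recipient_id, bits=8):
    mark = "".join(ZWSP if (recipient_id >> i) & 1 else ZWNJ for i in range(bits))
    head, sep, tail = text.partition(" ")
    return head + mark + sep + tail     # invisible mark hidden after the first word

def extract(text, bits=8):
    invisible = [c for c in text if c in (ZWSP, ZWNJ)][:bits]
    return sum(1 << i for i, c in enumerate(invisible) if c == ZWSP)

marked = embed("Attack at dawn, as discussed.", recipient_id=42)
print(marked == "Attack at dawn, as discussed.")  # False, although it looks identical
print(extract(marked))                            # 42

Homoglyph substitution works the same way, except the carrier is a visually identical but different code point (for example Latin "a" versus Cyrillic "а").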

Krebs on SecurityBitcoin Blackmail by Snail Mail Preys on Those with Guilty Conscience

KrebsOnSecurity heard from a reader whose friend recently received a remarkably customized extortion letter via snail mail that threatened to tell the recipient’s wife about his supposed extramarital affairs unless he paid $3,600 in bitcoin. The friend said he had nothing to hide and suspects this is part of a random but well-crafted campaign to prey on men who may have a guilty conscience.

The letter addressed the recipient by his first name and hometown throughout, and claimed to have evidence of the supposed dalliances.

“You don’t know me personally and nobody hired me to look into you,” the letter begins. “Nor did I go out looking to burn you. It is just your bad luck that I stumbled across your misadventures while working on a job around Bellevue.”

The missive continues:

“I then put in more time than I probably should have looking into your life. Frankly, I am ready to forget all about you and let you get on with your life. And I am going to give you two options that will accomplish that very thing. These two options are to either ignore this letter, or simply pay me $3,600. Let’s examine those two options in more detail.”

The letter goes on to say that option 1 (ignoring the threat) means the author will send copies of his alleged evidence to the man’s wife and to her friends and family if he does not receive payment within 12 days of the letter’s post marked date.

“So [name omitted], even if you decide to come clean with your wife, it won’t protect her from the humiliation she will feel when her friends and family find out your sordid details from me,” the extortionist wrote.

Option 2, of course, involves sending $3,600 in Bitcoin to an address specified in the letter. That bitcoin address does not appear to have received any payments. Attached to the two-sided extortion note is a primer on different ways to quickly and easily obtain bitcoin.

“If I don’t receive the bitcoin by that date, I will go ahead and release the evidence to everyone,” the letter concludes. “If you go that route, then the least you could do is tell your wife so she can come up with an excuse to prepare her friends and family before they find out. The clock is ticking, [name omitted].”

Of course, sending extortion letters via postal mail is mail fraud, a crime which carries severe penalties (fines of up to $1 million and up to 30 years in jail). However, as the extortionist rightly notes in his letter, the likelihood that authorities would ever be able to catch him is probably low.

The last time I heard of or saw this type of targeted extortion by mail was in the wake of the 2015 breach at online cheating site AshleyMadison.com. But those attempts made more sense to me since obviously many AshleyMadison users quite clearly did have an affair to hide.

In any case, I’d wager that this scheme — assuming that the extortionist is lying and has indeed sent these letters to targets without actual knowledge of extramarital affairs on the part of the recipients — has a decent chance of being received by someone who really does have a current or former fling that he is hiding from his spouse. Whether that person follows through and pays the extortion, though, is another matter.

I searched online for snippets of text from the extortion letter and found just one other mention of what appears to be the same letter: It was targeting people in Wellesley, Mass, according to a local news report from December 2017.

According to that report, the local police had a couple of residents drop off letters or call to report receiving them, “but to our knowledge no residents have fallen prey to the scam. The envelopes have no return address and are postmarked out of state, but from different states. The people who have notified us suspected it was a scam and just wanted to let us know.”

In the Massachusetts incidents, the extortionist was asking for $8,500 in bitcoin. Assuming it is the same person responsible for sending this letter, perhaps the extortionist wasn’t getting many people to bite and thus lowered his “fee.”

I opted not to publish a scan of the letter here because it was double-sided and redacting names, etc. gets dicey thanks to photo and image manipulation tools. Here’s a transcription of it instead (PDF).

CryptogramYet Another FBI Proposal for Insecure Communications

Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:

Law enforcement can also partner with private industry to address a problem we call "Going Dark." Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.

The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.

I encourage you to carefully consider your company's interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.

Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.

Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a "backdoor." In fact, those very capabilities are marketed and sought out.

I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.

The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.

Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: "Don't Panic." But forcing companies to build an alternate means to access the plaintext that the user can't control is an enormous vulnerability.

Worse Than FailureCodeSOD: Dictionary Definition

Guy’s eight-person team does a bunch of computer vision (CV) stuff. Guy is the “framework Guy”: he doesn’t handle the CV stuff so much as provide an application framework to make the CV folks’ lives easy. It’s a solid division of labor, with one notable exception: Richard.

Richard is a Computer Vision Researcher, head of the CV team. Guy is a mere “code monkey”, in Richard’s terms. Thus, everything Richard does is correct, and everything Guy does is “cute” and “a nice attempt”. That’s why, for example, Richard needed to take a method called readFile() and turn it into readFileHandle(), “for clarity”.

The code is a mix of C++ and Python, and much of the Python was written before Guy’s time. While the style in use doesn’t fit the PEP-8 standard (the official Python style guide), Guy has opted to follow the existing conventions, for consistency. This means some odd things, like putting a space before the colons:

    def readFile() :
      # do stuff

Which Richard felt the need to comment on in his code:

    def readFileHandle() : # I like the spaced out :'s, these are cute =]

There’s no “tone of voice” in code, but the use of “=]” instead of a more conventional smile emoticon is a clear sign that Richard is truly a monster. The other key sign is that Richard has taken an… unusual approach to object-oriented programming. When tasked with writing up an object, he takes this approach:

class WidgetSource:
    """
    Enumeration of various sources available for getting the data needed to construct a Widget object.
    """

    LOCAL_CACHE    = 0
    DB             = 1
    REMOTE_STORAGE = 2
    #PROCESSED_DATA  = 3

    NUM_OF_SOURCES = 3

    @staticmethod
    def toString(widget_source):
        try:
            return {
                WidgetSource.LOCAL_CACHE:     "LOCAL_CACHE",
                WidgetSource.DB:              "DB",
                #WidgetSource.PROCESSED_DATA:   "PROCESSED_DATA", # @DEPRECATED - Currently not to be used
                WidgetSource.REMOTE_STORAGE:  "REMOTE_STORAGE"
            }[widget_source]
        except KeyError:
            return "UNKNOWN_SOURCE"

def deserialize_widget(id, curr_src) :
     # SNIP
     widget = {
         WidgetSource.LOCAL_CACHE: _deserialize_from_cache,
         WidgetSource.DB: _deserialize_from_db,
         WidgetSource.REMOTE_STORAGE: _deserialize_from_remote
         #WidgetSource.PROCESSED_DATA: widgetFactory.fromProcessedData,
     }[curr_src](id)

For those not up on Python, there are a few notable elements here. First, by convention, anything in ALL_CAPS is a constant. A dictionary/map literal takes the form {aKey: aValue, anotherKey: anotherValue}.

So, the first thing to note is that both the deserialize_widget and toString methods create a dictionary. The keys are drawn from constants… which have the values 0, 1, 2, and 3. So… it’s an array, represented as a map, but without the ability to iterate across it in order.

But the dictionary isn’t what gets returned. It’s being used as a lookup table. This is actually quite common, as Python doesn’t have a switch construct, but it does leave one scratching one’s head wondering why.
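
For contrast, here is roughly what the same dict-as-dispatch idiom looks like when the “constants” are a real enumeration. This is a sketch with stand-in helper functions (the _deserialize_from_* names are only borrowed from the snippet above), not a suggested refactor of Richard’s actual code:

from enum import Enum

class WidgetSource(Enum):
    LOCAL_CACHE = 0
    DB = 1
    REMOTE_STORAGE = 2

# Stand-ins for the original _deserialize_from_* helpers, just so the sketch runs.
def _deserialize_from_cache(widget_id): return f"cache:{widget_id}"
def _deserialize_from_db(widget_id): return f"db:{widget_id}"
def _deserialize_from_remote(widget_id): return f"remote:{widget_id}"

# The same dict-as-switch dispatch, but keyed on a real enum: member names,
# iteration and len() come for free, so NUM_OF_SOURCES and toString() vanish.
_DESERIALIZERS = {
    WidgetSource.LOCAL_CACHE: _deserialize_from_cache,
    WidgetSource.DB: _deserialize_from_db,
    WidgetSource.REMOTE_STORAGE: _deserialize_from_remote,
}

def deserialize_widget(widget_id, source):
    return _DESERIALIZERS[source](widget_id)

print(deserialize_widget(42, WidgetSource.DB))   # db:42
print(WidgetSource.DB.name)                      # "DB"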

The real thing that makes one wonder “why” is this, though: Why is newly written code already marked as @DEPRECATED? This code was not yet released, and nothing outside of Richard’s newly written feature depended on it. I suspect Richard recently learned what deprecated means, and just wanted to use it in a sentence.

It’s okay, though. I like the @deprecated, those are cute =]


CryptogramSusan Landau's New Book: Listening In

Susan Landau has written a terrific book on cybersecurity threats and why we need strong crypto. Listening In: Cybersecurity in an Insecure Age. It's based in part on her 2016 Congressional testimony in the Apple/FBI case; it examines how the Digital Revolution has transformed society, and how law enforcement needs to -- and can -- adjust to the new realities. The book is accessible to techies and non-techies alike, and is strongly recommended.

And if you've already read it, give it a review on Amazon. Reviews sell books, and this one needs more of them.

CryptogramSpectre and Meltdown Attacks Against Microprocessors

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution -- which of course is not a solution -- is to throw them all away and buy new ones.

On Wednesday, researchers just announced a series of major security vulnerabilities in the microprocessors at the heart of the world's computers for the past 15-20 years. They've been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer -- maybe one running in a browser window from that sketchy site you're visiting, or as a result of a phishing attack -- can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends. It is also unworkable. The problem is that there isn't anything to buy that isn't vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there's no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here's a running list of who's patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.

The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.

We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some antivirus software blocks the patch, or -- worse -- crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer's firmware.

The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn't to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don't click on strange e-mail attachments, don't visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won't notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don't do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It's a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they'll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride.


Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

,

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.


Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.


Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.


Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.


Olga Iurkova
Olga Iurkova (Ukraine)
Journalist + editor
Journalist and co-founder of StopFake.org, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.


Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.


Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.


Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.


Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.


In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.


Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.


Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.


Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.


Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.


In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.


Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.


DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.


Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.


Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.


Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.


TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Ichthyologist
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.


Aziza Chaouni
Aziza Chaouni (Morocco)
Architect
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.


Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.


A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.


Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.


Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.


Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.


Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.


An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent
Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.


David Sengeh
David Sengeh (Sierra Leone | South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

TEDWhy Oprah’s talk works: Insight from a TED speaker coach

By Abigail Tenembaum and Michael Weitz of Virtuozo

When Oprah Winfrey spoke at the Golden Globes last Sunday night, her speech lit up social media within minutes. It was powerful, memorable and somehow exactly what the world wanted to hear. It inspired multiple standing O’s — and even a semi-serious Twitter campaign to elect her president #oprah2020

All this in 9 short minutes.

What made this short talk so impactful? My colleagues and I were curious. We are professional speaker coaches who’ve worked with many, many TED speakers, analyzing their scripts and their presentation styles to help each person make the greatest impact with their idea. And when we sat down and looked at Oprah’s talk, we saw a lot of commonality with great TED Talks.

Among the elements that made this talk so effective:

A strong opening that transports us. Oprah got on stage to give a “thank you” speech for a lifetime achievement award. But she chose not to start with the “thank you.” Instead she starts with a story. Her first words? “In 1964, I was a little girl sitting on the linoleum floor of my mother’s house in Milwaukee.” Just like a great story should, this first sentence transports us to a different time and place, and introduces the protagonist. As TED speaker Uri Hasson says: Our brain loves stories. Oprah’s style of opening signals to the audience that it’s story time, by using an opening similar to any fairy tale: “Once upon a time” (In 1964), “There was a princess” (I was a little girl), “In a land far far away” (…my mother’s house in Milwaukee).

Alternating between ideas and anecdotes. A great TED Talk illustrates an idea. And, just like Oprah does in her talk, the idea is illustrated through a mix of stories, examples and facts. Oprah tells a few anecdotes, none longer than a minute. But they are masterfully crafted, to give us, the audience, just enough detail to invite us to imagine it. When TED speaker Stefan Larsson tells us an anecdote about his time at medical school, he says: “I wore the white coat” — one concrete detail that allows us, the audience, to imagine a whole scene. Oprah describes Sidney Poitier with similar specificity – down to the detail that “his tie was white.” Recy Taylor was “walking home from a church service.” Oprah the child wasn’t sitting on the floor but on the “linoleum floor.” Like a great sketch artist, a great storyteller draws a few defined lines and lets the audience’s imagination fill in the rest to create the full story.

A real conversation with the audience. At TED, we all know it’s called a TED talk — not “speech,” not “lecture.” We feel it when Sir Ken Robinson looks at the audience and waits for their reaction. But it’s mostly not in the words. It’s in the tone, in the fact that the speaker’s attention is on the audience, focusing on one person at a time, and having a mini conversation with us. Oprah is no different. She speaks to the people in the room, and this intimacy translates beautifully on camera.

It’s Oprah’s talk — and only Oprah’s. A great TED talk, just like any great talk or speech, is deeply connected to the person delivering it. We like to ask speakers, “What makes this a talk that only you can give?” Esther Perel shares anecdotes from her unique experience as a couples therapist, intimate stories that helped her develop a personal perspective on love and fidelity. Only Ray Dalio could tell the story of personal failure and rebuilding that lies behind the radical transparency he’s created in his company. Uri Hasson connects his research on the brain and stories to his own love of film. Oprah starts with the clearest personal angle – her personal story. And throughout her speech she draws on her own career as an example and articulates her message in her own way.

A great TED Talk invites the audience to think and to feel. Oprah’s ending is a big invitation to the audience to act. And it’s done not by telling us what to do, but by offering an optimistic vision of the future and inviting us all to be part of it.

Here’s a link to the full speech.

TEDGet ready for TED Talks India: Nayi Soch, premiering Dec. 10 on Star Plus

This billboard is showing up in streets around India, and it’s made out of pollution fumes that have been collected and made into ink — ink that’s, in turn, made into an image of TED Talks India: Nayi Soch host Shah Rukh Khan. Tune in on Sunday night, Dec. 10, at 7pm on Star Plus to see what it’s all about.

TED is a global organization with a broad global audience. With our TED Translators program working in more than 100 languages, TEDx events happening every day around the world and so much more, we work hard to present the latest ideas for everyone, regardless of language, location or platform.

Now we’ve embarked on a journey with one of the largest TV networks in the world — and one of the biggest movie stars in the world — to create a Hindi-language TV series and digital series that’s focused on a country at the peak of innovation and technology: India.

Hosted and curated by Shah Rukh Khan, the TV series TED Talks India: Nayi Soch will premiere in India on Star Plus on December 10.

The name of the show, Nayi Soch, literally means ‘new thinking’ — and this kick-off episode seeks to inspire the nation to embrace and cultivate ideas and curiosity. Watch it and discover a program of speakers from India and the world whose ideas might inspire you to some new thinking of your own! For instance — the image on this billboard above is made from the fumes of your car … a very new and surprising idea!

If you’re in India, tune in at 7pm IST on Sunday night, Dec. 10, to watch the premiere episode on Star Plus and five other channels. Then tune in to Star Plus on the next seven Sundays, at the same time, to hear even more great talks on ideas, grouped into themes that will certainly inspire conversations. You can also explore the show on the HotStar app.

On TED.com/india and for TED mobile app users in India, each episode will be conveniently turned into five to seven individual TED Talks, one talk for each speaker on the program. You can watch and share them on their own, or download them as playlists to watch one after another. The talks are given in Hindi, with professional subtitles in Hindi and in English. Almost every talk will feature a short Q&A between the speaker and the host, Shah Rukh Khan, that dives deeper into the ideas shared onstage.

Want to learn more about TED Talks? Check out this playlist that SRK curated just for you.

Google AdsenseOur continued investment in AdSense Experiments

Experimentation is at the heart of everything we do at Google — so much so that many of our products, including Analytics and AdSense, allow you to run your own experiments.

The AdSense Experiments page has let you experiment with ad unit settings, and with allowing and blocking ad categories, to see how those choices affect your earnings. As of today, some new updates let you run more experiment types and give you a better understanding of how they impact your earnings and your users.

Understand user impact with session metrics

Curious to know how the settings you experiment with impact your user experience? You can now see how long users spend on your site with a new “Ad session length” metric that has been added to the Experiments results page. Longer ad session lengths are usually a good indicator of a healthy user experience.

Ad balance experiments

Ad balance is a tool that allows you to reduce the number of ads shown by displaying only those ads that perform the best. You can now run experiments to see how different ad fill rates impact revenue and ad session lengths. Try it out and let us know what you think in the comments below!

Service announcement: We're auto-completing some experiments, and deleting experiments that are more than a year old.

To ensure you can focus your time efficiently on experiments, we'll soon be auto-completing the experiments for which no winner has been chosen after 30 days of being marked “Ready to complete”. You can manually choose a winner during those 30 days, or (if you’re happy for us to close the experiment) you don't need to do anything. Learn more about the status of experiments.

We’ll also be deleting experiments that were completed more than one year ago. Old experiments are rarely useful in the fast-moving world of the Internet and clutter the Experiments page with outdated information. If you wish to keep old experiments, you can download all existing data by using the “Download Data” button on the Experiments page.

We look forward to hearing your thoughts on these new features.

Posted by: Amir Hosseini Rad, AdSense Product Manager

TEDA photograph by Paul Nicklen shows the tragedy of extinction, and more news from TED speakers

The past few weeks have brimmed over with TED-related news. Here, some highlights:

This is what extinction looks like. Photographer Paul Nicklen shocked the world with footage of a starving polar bear that he and members of his conservation group SeaLegacy captured in the Canadian Arctic Archipelago. “It rips your heart out of your chest,” Nicklen told The New York Times. Published in National Geographic, on Nicklen’s Instagram channel, and via SeaLegacy in early December, the footage and a photograph taken by Cristina Mittermeier spread rapidly across the Internet, to horrified reaction. Polar bears are hugely threatened by climate change, in part because of their dependence on ice cover, and their numbers are projected to drop precipitously in coming years. By publishing the photos, Nicklen said to the Times, he hoped to make a scientific data point feel real to people. (Watch Nicklen’s TED Talk)

Faster 3D printing with liquids. Attendees at Design Miami witnessed the first public demonstration of MIT’s 3D liquid printing process. In a matter of minutes, a robotic arm printed lamps and handbags inside a glass tank filled with gel, showing that 3D printing doesn’t have to be painfully slow. The technique upends the size constraints and poor material quality that have plagued 3D printing, say the creators, and could be used down the line to print larger objects like furniture, reports Dezeen. Steelcase and the Self-Assembly lab at MIT, co-directed by TED Fellow Skylar Tibbits and Jared Laucks, developed the revolutionary technique. (Watch Tibbits’ TED Talk)

The crazy mathematics of swarming and synchronization. Studies on swarming often focus on animal movement (think schools of fish) but ignore their internal framework, while studies on synchronization tend to focus solely on internal dynamics (think coupled lasers). The two phenomena, however, have rarely been studied together. In new research published in Nature Communications, mathematician Steven Strogatz and his former postdoctoral student Kevin O’Keefe studied systems where both synchronization and swarming occur simultaneously. Male tree frogs were one source of inspiration for the research by virtue of the patterns that they form in both space and time, mainly related to reproduction. The findings open the door to future research of unexplored behaviors and systems that may also exhibit these two behaviors concurrently. (Watch Strogatz’s TED Talk)

A filmmaker’s quest to understand white nationalism. Documentary filmmaker and human rights activist Deeyah Khan’s new documentary, White Right: Meeting the Enemy, seeks to understand neo-Nazis and white nationalists beyond their sociopolitical beliefs. All too familiar with racism and hate-related threats in her own life, Khan does not aim to sympathize with or rationalize their beliefs or behaviors. Instead, she intends to trace how each individual’s ideology evolved, which can provide insight into how they became attracted to and involved in these movements. Deeyah uses this film to answer the question: “Is it possible for me to sit with my enemy and for them to sit with theirs?”

The end of an era at the San Francisco Symphony. Conductor Michael Tilson Thomas announced that he will be stepping down from his role as music director of the San Francisco Symphony in 2020. In that year, he will be celebrating his 75th birthday and his 25th anniversary at the symphony, and although his forthcoming departure will be the end of an era, Thomas will continue to work as the artistic director for the New World Symphony at the training academy he co-founded in Miami. Thus, 2020 won’t be the last time we hear from the musical great, given that he intends to pick up compositions, stories, and poems that he’s previously worked on. (Watch Tilson Thomas’ TED Talk)

A better way to weigh yourself. The Shapa Smart Scale is all words, no numbers. Behavioral economist Dan Ariely helped redesign the scale in the hope that eliminating the tyranny of the number would help people make better decisions about their health (something we’re notoriously bad at). The smart scale sends a small electrical current through the person’s body and gathers information, such as muscle mass, bone density, and water percentage. Then, it compares it to personal data collected over time. Instead of spitting out a single number, it simply tells you whether you’re doing a little better, a little worse, much better, much worse, or essentially the same. (Watch Ariely’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.

Krebs on SecurityMicrosoft’s Jan. 2018 Patch Tuesday Lowdown

Microsoft on Tuesday released 14 security updates, including fixes for the Spectre and Meltdown flaws detailed last week, as well as a zero-day vulnerability in Microsoft Office that is being exploited in the wild. Separately, Adobe pushed a security update to its Flash Player software.

Last week’s story, Scary Chip Flaws Raise Spectre of Meltdown, sought to explain the gravity of these two security flaws present in most modern computers, smartphones, tablets and mobile devices. The bugs are thought to be mainly exploitable in chips made by Intel and ARM, but researchers said it was possible they also could be leveraged to steal data from computers with chips made by AMD.

By the time that story was published, Microsoft had already begun shipping an emergency update to address the flaws, but many readers complained that their PCs experienced the dreaded “blue screen of death” (BSOD) after applying the update. Microsoft warned that the BSOD problems were attributable to many antivirus programs not yet updating their software to play nice with the security updates.

On Tuesday, Microsoft said it was suspending the patches for computers running AMD chipsets.

“After investigating, Microsoft determined that some AMD chipsets do not conform to the documentation previously provided to Microsoft to develop the Windows operating system mitigations to protect against the chipset vulnerabilities known as Spectre and Meltdown,” the company said in a notice posted to its support site.

“To prevent AMD customers from getting into an unbootable state, Microsoft has temporarily paused sending the following Windows operating system updates to devices that have impacted AMD processors,” the company continued. “Microsoft is working with AMD to resolve this issue and resume Windows OS security updates to the affected AMD devices via Windows Update and WSUS as soon as possible.”

In short, if you’re running Windows on a computer powered by an AMD chip, you’re not going to be offered the Spectre/Meltdown fixes for now. Not sure whether your computer has an Intel or AMD chip? Most modern computers display this information (albeit very briefly) when the computer first starts up, before the Windows logo appears on the screen.

Here’s another way. From within Windows, users can find this information by pressing the Windows key on the keyboard and the “Pause” key at the same time, which should open the System Properties feature. The chip maker will be displayed next to the “Processor:” listing on that page.
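
If you’d rather script the check, the same information is available to programs; for instance, here is a quick Python one-off (a throwaway sketch, and the exact output wording varies by machine and operating system):

import platform

# On Windows this string typically ends in "GenuineIntel" or "AuthenticAMD";
# on other systems it may be less descriptive, so treat it as a rough check.
print(platform.processor())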

Microsoft also on Tuesday provided more information about the potential performance impact on Windows computers after installing the Spectre/Meltdown updates. To summarize, Microsoft said Windows 7, 8.1 and 10 users on older chips (circa 2015 or older), as well as Windows server users on any silicon, are likely to notice a slowdown of their computer after applying this update.

Any readers who experience a BSOD after applying January’s batch of updates may be able to get help from Microsoft’s site: Here are the corresponding help pages for Windows 7, Windows 8.1 and Windows 10 users.

As evidenced by this debacle, it’s a good idea to get in the habit of backing up your system on a regular basis. I typically do this at least once a month — but especially right before installing any updates from Microsoft. 

Attackers could exploit a zero-day vulnerability in Office (CVE-2018-0802) just by getting a user to open a booby-trapped Office document or visit a malicious/hacked Web site. Microsoft also patched a flaw (CVE-2018-0819) in Office for Mac that was publicly disclosed prior to the patch being released, potentially giving attackers a heads up on how to exploit the bug.

Of the 56 vulnerabilities addressed in the January Patch Tuesday batch, at least 16 earned Microsoft’s critical rating, meaning attackers could exploit them to gain full access to Windows systems with little help from users. For more on Tuesday’s updates from Microsoft, check out blogs from Ivanti and Qualys.

As per usual, Adobe issued an update for Flash Player yesterday. The update brings Flash to version 28.0.0.137 on Windows, Mac, and Linux systems. Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates and/or restart the browser to get it.

When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. Chrome will replace that three dot icon with an up-arrow inside of a circle when updates are waiting to be installed.

Standard disclaimer: Because Flash remains such a security risk, I continue to encourage readers to remove or hobble Flash Player unless and until it is needed for a specific site or purpose. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Another, perhaps less elegant, solution is to keep Flash installed in a browser that you don’t normally use, and then to only use that browser on sites that require it.

CryptogramDetecting Adblocker Blockers

Interesting research on the prevalence of adblock blockers: "Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis":

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered "free" Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Worse Than FailureCodeSOD: Warp Me To Halifax

Greenwich must think they’re so smart, being on the prime meridian. Starting in the 1840s, the observatory was the international standard for time (and thus vital for navigation). And even when the world switched to UTC, GMT differs from it by at most 0.9 seconds. If you want to convert times between time zones, you do it by comparing against UTC, and you know what?

I’m sick of it. Boy, I wish somebody would take them down a notch. Why is a tiny little strip of London so darn important?

Evan’s co-worker obviously agrees with the obvious problem of Greenwich’s unearned superiority, and picks a different town to make the center of the world: Halifax.

function time_zone_time($datetime, $time_zone, $savings, $return_format="Y-m-d g:i a"){
        date_default_timezone_set('America/Halifax');
        $time = strtotime(date('Y-m-d g:i a', strtotime($datetime)));
        $halifax_gmt = -4;
        $altered_tdf_gmt = $time_zone;
        if ($savings && date('I', $time) == 1) {
                $altered_tdf_gmt++;
        } // end if
        if(date('I') == 1){
                $halifax_gmt++;
        }
        $altered_tdf_gmt -= $halifax_gmt;
        $new_time = mktime(date("H", $time), date("i", $time), date("s", $time),date("m", $time)  ,date("d", $time), date("Y", $time)) + ($altered_tdf_gmt*3600);
        $new_datetime = date($return_format, $new_time);
        return $new_datetime;
}
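
For comparison, the conventional route (convert through UTC using real time zone data) takes only a few lines in most languages. Here is a minimal Python sketch using the standard-library zoneinfo module (Python 3.9+); the function name and formats are just for illustration, not a drop-in replacement for the PHP above:

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def time_zone_time(dt_string, from_tz, to_tz, return_format="%Y-%m-%d %I:%M %p"):
    # Parse the naive timestamp, attach its source zone, then convert to the target zone.
    naive = datetime.strptime(dt_string, "%Y-%m-%d %H:%M")
    localized = naive.replace(tzinfo=ZoneInfo(from_tz))
    return localized.astimezone(ZoneInfo(to_tz)).strftime(return_format)

# DST offsets come from the zone database, not hand-tuned +1 fudges.
print(time_zone_time("2018-01-10 14:30", "America/Halifax", "Europe/London"))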

Planet Linux AustraliaJonathan Adamczewski: Priorities for my team

(unthreaded from here)

During the day, I’m a Lead of a group of programmers. We’re responsible for a range of tools and tech used by others at the company for making games.

I have a list of my priorities (and some related questions): things that I think are important for us to be able to do well, as individuals and as a team:

  1. Treat people with respect. Value their time, place high value on their well-being, and start with the assumption that they have good intentions
    (“People” includes yourself: respect yourself, value your own time and well-being, and have confidence in your good intentions.)
  2. When solving a problem, know the user and understand their needs.
    • Do you understand the problem(s) that need to be solved? (it’s easy to make assumptions)
    • Have you spoken to the user and listened to their perspective? (it’s easy to solve the wrong problem)
    • Have you explored the specific constraints of the problem by asking questions like:
      • Is this part needed? (it’s easy to over-reach)
      • Is there a satisfactory simpler alternative? (actively pursue simplicity)
      • What else will be needed? (it’s easy to overlook details)
    • Have your discussed your proposed solution with users, and do they understand what you intend to do? (verify, and pursue buy-in)
    • Do you continue to meet regularly with users? Do they know you? Do they believe that you’re working for their benefit? (don’t under-estimate the value of trust)
  3. Have a clear understanding of what you are doing.
    • Do you understand the system you’re working in? (it’s easy to make assumptions)
    • Have you read the documentation and/or code? (set yourself up to succeed with whatever is available)
    • For code:
      • Have you tried to modify the code? (pull a thread; see what breaks)
      • Can you explain how the code works to another programmer in a convincing way? (test your confidence)
      • Can you explain how the code works to a non-programmer?
  4. When trying to solve a problem, debug aggressively and efficiently.
    • Does the bug need to be fixed? (see 1)
    • Do you understand how the system works? (see 2)
    • Is there a faster way to debug the problem? Can you change code or data to cause the problem to occur more quickly and reliably? (iterate as quickly as you can, fix the bug, and move on)
    • Do you trust your own judgement? (debug boldly, have confidence in what you have observed, make hypotheses and test them)
  5. Pursue excellence in your work.
    • How are you working to be better understood? (good communication takes time and effort)
    • How are you working to better understand others? (don’t assume that others will pursue you with insights)
    • Are you responding to feedback with enthusiasm to improve your work? (pursue professionalism)
    • Are you writing high quality, easy to understand, easy to maintain code? How do you know? (continue to develop your technical skills)
    • How are you working to become an expert and industry leader with the technologies and techniques you use every day? (pursue excellence in your field)
    • Are you eager to improve (and fix) systems you have worked on previously? (take responsibility for your work)

The list was created for discussion with the group, and as an effort to articulate my own expectations in a way that will help my team understand me.

Composing this has been a useful exercise for me as a lead, and definitely worthwhile for the group. If you’ve never tried writing down your own priorities, values, and/or assumptions, I encourage you to try it :)

,

CryptogramDaniel Miessler on My Writings about IoT Security

Daniel Miessler criticizes my writings about IoT security:

I know it's super cool to scream about how IoT is insecure, how it's dumb to hook up everyday objects like houses and cars and locks to the internet, how bad things can get, and I know it's fun to be invited to talk about how everything is doom and gloom.

I absolutely respect Bruce Schneier a lot for what he's contributed to InfoSec, which makes me that much more disappointed with this kind of position from him.

InfoSec is full of those people, and it's beneath people like Bruce to add their voices to theirs. Everyone paying attention already knows it's going to be a soup sandwich -- a carnival of horrors -- a tragedy of mistakes and abuses of trust.

It's obvious. Not interesting. Not novel. Obvious. But obvious or not, all these things are still going to happen.

I actually agree with everything in his essay. "We should obviously try to minimize the risks, but we don't do that by trying to shout down the entire enterprise." Yes, definitely.

I don't think the IoT must be stopped. I do think that the risks are considerable, and will increase as these systems become more pervasive and susceptible to class breaks. And I'm trying to write a book that will help navigate this. I don't think I'm the prophet of doom, and don't want to come across that way. I'll give the manuscript another read with that in mind.

Cory DoctorowWith repetition, most of us will become inured to all the dirty tricks of Facebook attention-manipulation

In my latest Locus column, “Persuasion, Adaptation, and the Arms Race for Your Attention,” I suggest that we might be too worried about the seemingly unstoppable power of opinion-manipulators and their new social media superweapons.


Not because these techniques don’t work (though when someone who wants to sell you persuasion tools tells you that they’re amazing and unstoppable, some skepticism is warranted), but because a large slice of any population will eventually adapt to any stimulus, which is why most of us aren’t addicted to slot machines, Farmville and Pokemon Go.


When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures. The predator carves the prey, the prey carves the preda­tor. To get a sense of just how far the state of the art has advanced since Farmville, fire up Universal Paperclips, the free browser game from game designer Frank Lantz, which challenges you to balance resource acquisi­tion, timing, and resource allocation to create paperclips, progressing by purchasing upgraded paperclip-production and paperclip-marketing tools, until, eventually, you produce a sentient AI that turns the entire universe into paperclips, exterminating all life.

Universal Paperclips makes Farmville seem about as addictive as Candy­land. Literally from the first click, it is weaving an attentional net around your limbic system, carefully reeling in and releasing your dopamine with the skill of a master fisherman. Universal Paperclips doesn’t just suck you in, it harpoons you.

Persuasion, Adaptation, and the Arms Race for Your Attention [Cory Doctorow/Locus]

Krebs on SecurityWebsite Glitch Let Me Overstock My Coinbase

Coinbase and Overstock.com just fixed a serious glitch that allowed Overstock customers to buy any item at a tiny fraction of the listed price. Potentially more punishing, the flaw let anyone paying with bitcoin reap many times the authorized bitcoin refund amount on any canceled Overstock orders.

In January 2014, Overstock.com partnered with Coinbase to let customers pay for merchandise using bitcoin, making it one of the first major e-commerce vendors to accept the virtual currency.

On December 19, 2017, as the price of bitcoin soared to more than $17,000 per coin, Coinbase added support for Bitcoin Cash — an offshoot (or “fork”) from bitcoin designed to address the cryptocurrency’s scalability challenges.

As a result of the change, Coinbase customers with balances of bitcoin at the time of the fork were given an equal amount of bitcoin cash stored by Coinbase. However, there is a significant price difference between the two currencies: A single bitcoin is worth almost $15,000 right now, whereas a unit of bitcoin cash is valued at around $2,400.

On Friday, Jan. 5, KrebsOnSecurity was contacted by JB Snyder, owner of North Carolina-based Bancsec, a company that gets paid to break into banks and test their security. An early adopter of bitcoin, Snyder said he was using some of his virtual currency to purchase an item at Overstock when he noticed something alarming.

During the checkout process for those paying by bitcoin, Overstock.com provides the customer a bitcoin wallet address that can be used to pay the invoice and complete the transaction. But Snyder discovered that Overstock’s site just as happily accepted bitcoin cash as payment, even though bitcoin cash is currently worth only about 15 percent of the value of bitcoin.

To confirm and replicate Snyder’s experience firsthand, KrebsOnSecurity purchased a set of three outdoor solar lamps from Overstock for a grand total of $78.27.

The solar lights I purchased from Overstock.com to test Snyder’s finding. They cost $78.27 in bitcoin, but because I was able to pay for them in bitcoin cash I only paid $12.02.

After indicating I wished to pay for the lamps in bitcoin, the site produced a payment invoice instructing me to send exactly 0.00475574 bitcoins to a specific address.

The payment invoice I received from Overstock.com.

Logging into Coinbase, I took the bitcoin address and pasted that into the “pay to:” field, and then told Coinbase to send 0.00475574 in bitcoin cash instead of bitcoin. The site responded that the payment was complete. Within a few seconds I received an email from Overstock congratulating me on my purchase and stating that the items would be shipped shortly.

I had just made a $78 purchase by sending approximately USD $12 worth of bitcoin cash. Crypto-currency alchemy at last!

But that wasn’t the worst part. I didn’t really want the solar lights, and I had no interest in ripping off Overstock. So I cancelled the order. To my surprise, the system refunded my purchase in bitcoin, not bitcoin cash!

Consider the implications here: A dishonest customer could have used this bug to make ridiculous sums of bitcoin in a very short period of time. Let’s say I purchased one of the more expensive items for sale on Overstock, such as this $100,000, 3-carat platinum diamond ring. I then pay for it in bitcoin cash, using an amount equivalent to approximately 1 bitcoin (~$15,000).

Then I simply cancel my order, and Overstock/Coinbase sends me almost $100,000 in bitcoin, netting me a tidy $85,000 profit. Rinse, wash, repeat.
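
To put rough numbers on that scenario, here is a minimal back-of-the-envelope sketch (C#, purely illustrative) using the approximate prices quoted above; the item, prices and amounts are placeholders, not market data:

  using System;

  class RefundExploitEstimate
  {
      static void Main()
      {
          // Approximate prices quoted above -- illustrative only, not market data.
          decimal btcPriceUsd  = 15000m;   // rough USD value of 1 bitcoin
          decimal bchPriceUsd  = 2400m;    // rough USD value of 1 bitcoin cash
          decimal itemPriceUsd = 100000m;  // the hypothetical diamond-ring order

          // Overstock invoices the order as a coin amount, priced in bitcoin...
          decimal invoiceCoins = itemPriceUsd / btcPriceUsd;    // ~6.67 coins

          // ...but the flaw let a buyer settle that same coin amount in bitcoin cash.
          decimal costToBuyerUsd = invoiceCoins * bchPriceUsd;  // ~$16,000

          // Cancelling the order refunded the invoiced coin amount in bitcoin.
          decimal refundUsd = invoiceCoins * btcPriceUsd;       // ~$100,000

          Console.WriteLine($"Cost to buyer: ~${costToBuyerUsd:N0}");
          Console.WriteLine($"Refund:        ~${refundUsd:N0}");
          Console.WriteLine($"Profit:        ~${refundUsd - costToBuyerUsd:N0}");
      }
  }

That works out to roughly an $84,000 profit per round trip, in the same ballpark as the figure above; the exact number depends on the exchange rates at the moment of purchase and refund.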

Reached for comment, Overstock.com said it had changed no code on its site, and that a fix implemented by Coinbase resolved the issue.

“We were made aware of an issue affecting cryptocurrency transactions and refunds by an independent researcher. After working with the researcher to confirm the finding, that method of payment was disabled while we worked with our cryptocurrency integration partner, Coinbase, to ensure they resolved the issue. We have since confirmed that the issue described in the finding has been resolved, and the cryptocurrency payment option has been re-enabled.”

Coinbase said “the issue was caused by the merchant partner improperly using the return values in our merchant integration API. No other Coinbase customer had this problem.” Coinbase told me the bug only existed for approximately three weeks.

“After being made aware of an issue in our joint refund processing code on Saturday, Coinbase and Overstock worked together to deploy a fix within hours,” the Coinbase statement continued. “While a patch was being developed and tested, orders were proactively disabled to protect customers. To our knowledge, a very small number of transactions were impacted by this issue. Coinbase actively works with merchant partners to identify and solve issues like this in an ongoing, collaborative manner and since being made aware of this have ensured that no other partners are affected.”

Bancsec’s Snyder and I both checked for the presence of this glitch at multiple other merchants that work directly with Coinbase in their checkout process, but we found no other examples of this flaw.

The snafu comes as many businesses that have long accepted bitcoin are now distancing themselves from the currency thanks to the recent volatility in bitcoin prices and associated fees.

Earlier this week, it emerged that Microsoft had ceased accepting payments in Bitcoin, citing volatility concerns. In December, online game giant Steam said it was dropping support for bitcoin payments for the same reason.

And, as KrebsOnSecurity noted last month, even cybercriminals who run online stores that sell stolen identities and credit cards are urging their customers to transact in something other than bitcoin.

Interestingly, bitcoin is thought to have been behind a huge jump in Overstock’s stock price in 2017. In December, Overstock CEO Patrick Byrne reportedly stoked the cryptocurrency fires when he said that he might want to sell Overstock’s e-tailing operations and pour the extra cash into accelerating his blockchain-based business ideas instead.

In case anyone is wondering what I did with the “profit” I made from this scheme, I offered to send it back to Overstock, but they told me to keep it. Instead, I donated it to archive.org, a site that has come in handy for many stories published here.

Update, 3:15 p.m. ET: A previous version of this story stated that neither Coinbase nor Overstock would say which of the two was responsible for this issue. The modified story above resolves that ambiguity.

CryptogramNSA Morale

The Washington Post is reporting that poor morale at the NSA is causing a significant talent shortage. A November New York Times article said much the same thing.

The articles point to many factors: the recent reorganization, low pay, and the various leaks. I have been saying for a while that the Shadow Brokers leaks have been much more damaging to the NSA -- both to morale and operating capabilities -- than Edward Snowden. I think it'll take most of a decade for them to recover.

Worse Than FailureCodeSOD: Whiling Away the Time

There are two ways of accumulating experience in our profession. One is to spend many years mastering new skills, broadening your skill set and your ability to solve ever more complex problems. The other is to repeat the same year of experience over and over until you have one year of experience n times.

Anon took the former path and slowly built up his skills, adding to his repertoire with each new experience and assignment. At his third job, he encountered The Man, who took the latter path.

If you want to execute a block of code once, you have several options. You could just put the code in-line. You could put it in a function and call said function. You could even put it in a do { ... } while (false); construct (a minimal sketch of that variant follows the example below). The Man would do it as below, because it supposedly makes it easier and less error-prone to comment out a block of code:

  Boolean flag = true;
  while (flag) {
    flag = false;
    // code
    break;
  }
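
For contrast, here is a minimal sketch of the do { ... } while (false); variant mentioned above: the body runs exactly once, break still gives an early exit, and the whole block comments out just as easily, without the decoy loop flag (the somethingWentWrong variable is only a placeholder):

  Boolean somethingWentWrong = false;
  do {
      // code that should run exactly once
      if (somethingWentWrong) {
          break;   // early exit still works, no flag bookkeeping needed
      }
      // more code
  } while (false);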

The Man not only built his own logging framework (because you can't trust the ones out there), but he demanded that every. single. function. begin and end with:

  Log.methodEntry("methodName");
  ...
  Log.methodExit("methodName");

...because in a multi-threaded environment, that won't flood the logs with all sorts of confusing and mostly useless log statements. Also, he would routinely use this construct in places where the logging system had not yet been initialized, so any logged errors went the way of the bit-bucket.

Every single method was encapsulated in its own try-catch-finally block. The catch block would merely log the error and continue as though the method was successful, returning null or zero on error conditions. The intent was to keep the application from ever crashing. There was no concept of rolling the error up to a place where it could be properly handled.

His concept of encapsulation was to wrap not just each object, but virtually every line of code, including declarations, in a region tag.

To give you a taste of what Anon had to deal with, the following is a procedure of The Man's:


  #region Protected methods
    protected override Boolean ParseMessage(String strRemainingMessage) {
       Log.LogEntry(); 
  
  #    region Local variables
         Boolean bParseSuccess = false;
         String[] strFields = null;
  #    endregion //Local variables
  
  #    region try-cache-finally  [op: SIC]
  #      region try
           try {
  #            region Flag to only loop once
                 Boolean bLoop = true;
  #            endregion //Flag to only loop once
  
  #            region Loop to parse the message
                while (bLoop) {
  #                region Make sure we only loop once
                     bLoop = false;
  #                endregion //Make sure we only loop once
  
  #                region parse the message
                     bParseSuccess = base.ParseMessage(strRemainingMessage);
  #                endregion //parse the message
  
  #                region break the loop
                     break;
  #                endregion //break the loop
                }
  #            endregion //Loop to parse the message
           }
  #      endregion //try
    
  #      region cache // [op: SIC]
            catch (Exception ex) {
              Log.Error(ex.Message);
            }
  #      endregion //cache [op: SIC]
  	  
  #      region finally
           finally {
             if (null != strFields) {
                strFields = null; // op: why set local var to null?
             }
           }
  #      endregion //finally
  
  #      endregion //try-cache-finally [op: SIC]
  
       Log.LogExit();
  
       return bParseSuccess;
     }
  #endregion //Protected methods

The corrected version:

  // Since the ParseMessage method has its own try-catch
  // on "Exception", it will never throw any exceptions 
  // and logging entry and exit of a method doesn't seem 
  // to bring us any value since it's always disabled. 
  // I'm not even sure if we have a way to enable it 
  // during runtime without recompiling and installing 
  // the application...
  protected override Boolean ParseMessage(String remainingMessage){
    return base.ParseMessage(remainingMessage); 
  }


,

CryptogramTourist Scams

A comprehensive list. Most are old and obvious, but there are some clever variants.

Worse Than FailureCodeSOD: JavaScript Centipede

Starting with the film Saw, in 2004, the “torture porn” genre began to seep into the horror market. Very quickly, filmmakers in that genre learned that they could abandon plot, tension, and common sense, so long as they produced the most disgusting concepts they could think of. The game of one-downsmanship arguably reached its nadir with the conclusion of The Human Centipede trilogy. Yes, they made three of those movies.

This aside into film critique is because Greg found the case of a “JavaScript Centipede”: the refuse from one block of code becomes the input to the next block.

function dynamicallyLoad(win, signature) {
    for (var i = 0; i < this.addList.length; i++) {
        if (window[this.addList[i].object] != null)
            continue;
        var object = win[this.addList[i].object];
        if (this.addList[i].type == 'function' || typeof (object) == 'function') {
            var o = String(object);
            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                .replace(/,/g, "','");
            if (params != "")
                params += "','";
            window.eval(String(this.addList[i].object) +
                        "=new Function('" + String(params + body) + "')");
            var c = window[this.addList[i].object];
            if (this.addList[i].type == 'class') {
                for (var j in object.prototype) {
                    var o = String(object.prototype[j]);
                    var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                        .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                        .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                    var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                        .replace(/,/g, "','");
                    if (params != "")
                        params += "','";
                    window.eval(String(this.addList[i].object) + ".prototype." + j +
                        "=new Function('" + String(params + body) + "')");
                }
                if (object.statics) {
                    window[this.addList[i].object].statics = new Object();
                    for (var j in object.statics) {
                        var obj = object.statics[j];
                        if (typeof (obj) == 'function') {
                            var o = String(obj);
                            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                                .replace(/,/g, "','");
                            if (params != "")
                                params += "','";
                            window.eval(String(this.addList[i].object) + ".statics." +
                                j + "=new Function('" + String(params + body) + "')");
                        } else
                            window[this.addList[i].object].statics[j] = obj;
                    }
                }
            }
        } else if (this.addList[i].type == 'image') {
            window[this.addList[i].object] = new Image();
            window[this.addList[i].object].src = object.src;
        } else
            window[this.addList[i].object] = object;
    }
    this.addList.length = 0;
    this.isLoadedArray[signature] = new Date().getTime();
}

I’m not going to explain what this code does; I’m not certain I could. Like a Human Centipede film, you’re best off just being disgusted at the concept on display. If you’re not sure why it’s bad, just note the eval calls. Don’t think too much about the details.


,

Planet Linux AustraliaDavid Rowe: Engage the Silent Drive

I’ve been busy electrocuting my boat – here are our first impressions of the Torqueedo Cruise 2.0T on the water.

About 2 years ago I decided to try sailing, so I bought a second-hand Hartley TS16, a popular small “trailer sailor” here in Australia. Since then I have been getting out once every week, having some very pleasant days with friends and family, and even at times by myself. Sailing really takes you away from everything else in the world. It keeps you busy as you are always pulling a rope or adjusting this and that, and is physically very active as you are clambering all over the boat. Mentally there is a lot to learn, and I started as a complete nautical noob.

Sailing is so quiet and peaceful, you get propelled by the wind using aerodynamics and it feels like magic. However, this is marred by the noise of outboard motors, which are typically used at the start and end of the day to get the boat to the point where it can sail. They are also useful to get you out of trouble in high seas/wind, or when the wind dies. I often use the motor to “un hit” Australia when I accidentally lodge myself on a sand bar (I have a lot of accidents like that).

The boat came with an ancient 2-stroke which belched smoke and noise. After about 12 months this motor suffered a terminal meltdown (impeller failure and overheating), so it was replaced with a modern 5HP Honda 4-stroke, which is much quieter and very fuel efficient.

My long term goal was to “electrocute” the boat and replace the infernal combustion outboard engine with an electric motor and battery pack. I recently bit the bullet and obtained a Torqeedo Cruise 2kW outboard from Eco Boats Australia.

My friend Matt and I tested the motor today and are really thrilled. Matt is an experienced Electrical Engineer and sailor so was an ideal companion for the first run of the Torqueedo.

Torqueedo Cruise 2.0 First Impressions

It’s silent – incredibly so. Just a slight whine conducted from the motor/gearbox pod beneath the water. The sound of water flowing around the boat is louder!

The acceleration is impressive, better than the 4-stroke. Make sure you sit down. That huge, low-RPM prop delivers loads of torque. We settled on 1000W after experimenting with other power levels.

The throttle control is excellent: you can dial up any speed you want. This made parking (mooring) very easy compared to the 4-stroke, which is more of a “single speed” motor (idles at 3 knots, 4-5 knots top speed) and is unwieldy for close-quarters manoeuvring.

It’s fit for purpose. This is not a low-power “trolling” motor; it is every bit as powerful as the modern Honda 5HP 4-stroke. We did an A/B test and obtained the same top speed (5 knots) in the same conditions (wind/tide/stretch of water). We used it in 15 knot winds and 1m seas and it was the real deal – pushing the boat exactly where we wanted to go with authority. This is not a compromise solution. The Torqueedo shows internal combustion whose house it is.

We had some fun sneaking up on kayaks at low power, getting to within a few metres before they heard us. Other boaties saw us gliding past with the sails down and couldn’t work out how we were moving!

A hidden feature is Azipod steering – it steers through more than 270 degrees. You can reverse without reverse gear, and we did “donuts” spinning on the keel!

Some minor issues: Unlike the Honda, the Torqueedo doesn’t tilt completely out of the water when sailing, leaving some residual drag from the motor/propeller pod. It also has to be removed from the boat for trailering, due to insufficient road clearance.

Walk Through

Here are the two motors with the boat out of the water:

It’s quite a bit longer than the Honda, mainly due to the enormous prop. The centres of the two props are actually only 7cm apart in height above ground. I had some concerns about ground clearance, both when trailering and also in the water. I have enough problems hitting Australia and like the way my boat can float in just 30cm of water. I discussed this with my very helpful Torqueedo dealer, Chris. He said tests with the short and long versions suggested this wasn’t a problem, and in fact the “long” version provided better directional control. More water on top of the prop is a good thing. They recommend 50mm minimum; I have about 100mm.

To get started I made up a 24V battery pack using a plastic tub and 8 x 3.2V 100AH Lithium cells, left over from my recent EV battery upgrade. The cells are in varying conditions; I doubt any of them have 100AH capacity after 8 years of being hammered in my EV. On the day we ran for nearly 2 hours before one of the weaker cells dipped beneath 2.5V. I’ll sort through my stock of second hand cells some time to optimise the pack.

The pack plus motor weighs 41kg; the 5HP Honda plus 5l of petrol, 32kg. At low power (600W, 3.5 knots), this 2.5 kWh pack will give us a range of 14 nm or 28km. Plenty – on a huge day’s sailing we cover 40km, of which just 5km would be on motor.
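
As a rough sanity check on those figures, here is a minimal sketch of the range arithmetic (C#, using the nominal cell voltage and capacity given above; real range will vary with battery age, sea state and throttle):

  using System;

  class RangeEstimate
  {
      static void Main()
      {
          // Pack: 8 cells x 3.2V nominal x 100Ah -- nameplate figures,
          // optimistic for second-hand cells.
          double packEnergyKwh = 8 * 3.2 * 100 / 1000.0;   // ~2.56 kWh

          double cruisePowerKw  = 0.6;   // the 600W low-power setting
          double cruiseSpeedKts = 3.5;   // boat speed at that setting

          double runtimeHours = packEnergyKwh / cruisePowerKw;   // ~4.3 h
          double rangeNm      = runtimeHours * cruiseSpeedKts;   // ~14.9 nm
          double rangeKm      = rangeNm * 1.852;                 // ~27.7 km

          Console.WriteLine($"Runtime: {runtimeHours:F1} h");
          Console.WriteLine($"Range:   {rangeNm:F1} nm ({rangeKm:F0} km)");
      }
  }

That lands right around the 14 nm / 28 km quoted above; in practice the reduced capacity of tired second-hand cells will pull the real range down somewhat.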

All that power on board is handy too, for example the load of a fridge would be trivial compared to the motor, and a 100W HF radio no problem. So now I can quaff ice-cold sparkling shiraz or a nice beer, while having an actual conversation and not choking on exhaust fumes!

Here’s Matt taking us for a test drive, not much to the Torqueedo above the water:

For a bit of fun we ran both motors (maybe 10HP equivalent) and hit 7 knots, almost getting the Hartley up on the plane. Does this make it a Hybrid boat?

Conclusions

We are in love. This is the future of boating. For sale – one 5HP Honda 4-stroke.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Annual Penguin Picnic, January 28, 2018

Jan 28 2018, 12:00 – 18:00
Location: Infoxchange, 33 Elizabeth St. Richmond

PLEASE NOTE NEW LOCATION

The Linux Users of Victoria Annual Penguin Picnic will be held on Sunday, January 28, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

Due to the predicted extreme hot weather on Sunday, the LUV committee has decided to change to an indoor picnic with dips, cheeses, cured meats, fruits, cakes, icecreams and icy poles, cool drinks, etc. instead of a BBQ.  The meeting will now be held at our regular workshop venue, Infoxchange at 33 Elizabeth St. Richmond, right by Victoria Parade and North Richmond railway station.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.

CryptogramSpectre and Meltdown Attacks

After a week or so of rumors, everyone is now reporting about the Spectre and Meltdown attacks against pretty much every modern processor out there.

These are side-channel attacks where one process can spy on other processes. They affect computers where an untrusted browser window can execute code, phones that have multiple apps running at the same time, and cloud computing networks that run lots of different processes at once. Fixing them either requires a patch that results in a major performance hit, or is impossible and requires a re-architecture of speculative execution in future CPU chips.

I'll be writing something for publication over the next few days. This post is basically just a link repository.

EDITED TO ADD: Good technical explanation. And a Slashdot thread.

EDITED TO ADD (1/5): Another good technical description. And how the exploits work through browsers. A rundown of what vendors are doing. Nicholas Weaver on its effects on individual computers.

EDITED TO ADD (1/7): xkcd.