Planet Russell


Planet Debian: Louis-Philippe Véronneau: Montreal's Debian & Stuff - August 2018

Summer is slowly coming to an end in Montreal and as much as I would like it to last another month, I'm also glad to fall back into my regular routine.

Part of that routine means the return of Montreal's Debian & Stuff - our informal gathering of the local Debian community!

If you are in Montreal on August 26th, come and say hi: everyone's welcome!

Some of us plan to work on specific stuff (I want to show people how nice the Tomu boards I got are) - but hanging out and having a drink is also a perfectly reasonable option.

Here's a link to the event's page.


Cryptogram: Friday Squid Blogging: Firefly Squid Museum

The Hotaruika Museum is a museum devoted to firefly squid in Toyama, Japan.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TED: Moving healthcare forward: The talks of TED Salon: Catalyst

TED and Optum partnered to cultivate the dialogue and collaboration that’s needed to understand and guide changes in healthcare. (Photo: Marla Aufmuth / TED)

Healthcare is at a turning point. Big data, evolving consumer preferences and shifting cost structures are just a few of the many complex factors shaping the opportunities and challenges that will define the future. How can we all become forces for positive change and progress?

For the first time, TED partnered with Optum, a health services and innovation company, for a salon focused on what happens when we trust our ideas to change health and healthcare for the better. At the salon, held on July 31 at the ARIA Las Vegas, six speakers and a performer shared fresh thinking on how we can make a health system that works better for everyone.

Empathy shouldn’t be a nice-to-have, says Adrienne Boissy — it’s a hard skill that should be integrated into everything we do. (Photo: Marla Aufmuth / TED)

How we can put empathy back in healthcare. Many in healthcare believe that empathy — imagining another person’s feelings and then doing something to help them — is a “soft skill,” and not an important factor in the success or failure of medical treatments. But according to Adrienne Boissy, chief experience officer for the Cleveland Clinic Health System, empathy is a critical part of healthcare that, when cultivated, delivers proven, positive impacts on everything from control of high blood pressure to diabetes outcomes. Best of all, it’s something that healthcare workers can learn, in order to “bake caring fixes back into every single part of the healthcare system.” Boissy knows that patients and doctors both suffer under current healthcare systems and their long wait times, communication gaps, and the endemic pressures that lead to staff burnout. To address these problems in her health system, Boissy implemented some big fixes, like same-day appointments for patients, communications training for doctors and less bureaucratic pressure. Her strategies are designed to build empathy back into the healthcare system and “transform the human experience into something much more humane.”

The myth of obesity and the need for a social movement. The global obesity crisis has reached epidemic proportions — but its root cause may not be what you think. Obesity expert Lee Kaplan has studied the issue for nearly 20 years, and the misconceptions around obesity have remained fairly constant throughout: if people simply ate less and exercised more, the thinking goes, they’d be able to control their weight. But the reality is much more complex. “Numerous studies demonstrate that each of our bodies has a powerful, and very accurate, system for seeking and maintaining the right amount of fat,” Kaplan says. “Obesity is the disease in which that finely tuned system goes awry.” There are many types of obesity, with many causes — genetics, brain damage, sleep deprivation, medications that promote weight gain — but in the end, all obesity reflects the disruption of this internal system (controlled by the body’s adipostat). In order to begin solving this massive health crisis, Kaplan calls for us to stop stigmatizing obesity and take collective action to improve the lives of those affected. “We need to change the public perception of blame and responsibility, and support a social movement that will lead to real progress,” Kaplan says. “In so doing, we will begin to see society shrink before our eyes.”

If we design healthcare systems with trust, innovation and ambition, says Dr. Andrew Bastawrous, we can create solutions that change the lives of millions of people worldwide. (Photo: Marla Aufmuth / TED)

Innovating the healthcare funding and distribution model. While working in an eye care clinic in Kenya, Andrew Bastawrous was frustrated to find that because of rigid funding regulations, he wasn’t able to help people in desperate need who didn’t have “the right problems.” Though specific resource allocation makes business sense, Bastawrous says, inflexible rules often block healthcare organizations from adapting to shifting situations on the ground. This makes it difficult to deliver even simple medical treatments — for example, though we’ve had glasses for over 700 years, 2.5 billion people still don’t have access to them. That’s why Peek Vision, the eye care organization Bastawrous co-founded and leads, is set up as both a company and a charity — an innovation that allows them to sustainably create healthcare products and serve the communities who need them most. Peek Vision’s successful partnership with the Botswana government to screen and treat every child in the country by 2021 shows that this model can work — now, it needs to be scaled globally. If we design health care systems with trust, innovation and ambition, Bastawrous says, we can create solutions that fulfill the needs of financial partners and improve the lives of millions of people worldwide.

One pill to rule them all? We live in the age of the “quantified self,” where it’s possible to measure, monitor and track much of our physiology and behavior with a few taps of a finger. (Think smartwatches and fitness trackers.) With all this information, says Daniel Kraft, we should be able to make the shift into “quantified health” and design truly personalized medicine that allows us to synthesize many of our medications into a single pill. Onstage, Kraft revealed a prototype that would not only make taking medications easier but also print the drugs he envisions right in the home. “I’m hopeful that with the help of novel approaches like this, we can move from an era of intermittent data and reactive, one-size-fits-all therapy,” he says, “improving health and medicine across the planet.”

When it comes to health, we’re not as divided as we think we are, says Rebecca Onie. (Photo: Marla Aufmuth / TED)

Divided on healthcare, united on health. The American conversation around healthcare has long been divisive. Yet as health services innovator Rebecca Onie reveals in new research, people in the US are not as polarized as they think. She launched a new initiative to ask voters around the country one question: “What do you need to be healthy?” As it turns out, across economic, political and racial divides, Americans are aligned when it comes to their healthcare priorities: healthy food, safe housing and good wages. “When you ask the right questions, it becomes pretty clear: our country may be fractured on healthcare, but we are unified on health,” she says. The insights from her research demonstrate how our common experience can inform our approach to pressing healthcare questions — and even bring people across the political spectrum together.

Medicine isn’t made by miracles. Our narratives of our greatest medical and healthcare advances all follow the same script, Darshak Sanghavi says: “The heroes are either swashbuckling doctors fighting big odds and taking big risks, or miracle drugs found in the unlikeliest of places.” We love to hear — and tell — stories based on this script. But these stories cause us to redirect our resources toward creating hero doctors and revolutionary medications, and by doing so, “we potentially harm more people than we help,” Sanghavi says. He believes we should turn away from these myths and focus on what really matters: teamwork. Incremental refinements in treatments, painstakingly assembled by healthcare workers pooling their resources over time, are what really lead to improved survival rates and higher-quality lives for patients. “We don’t need to wait for a hero in order to make our lives better,” Sanghavi says. “We already know what to do. Small steps over time will get us where we need to go.”

Jessica Care Moore performs her poem “Gratitude Is a Recipe for Survival” to close out the salon. (Photo: Marla Aufmuth / TED)

She has decided to live. Poet, performer and artist Jessica Care Moore closes out the salon with a performance of “Gratitude Is a Recipe for Survival” — a vigorous, personal, lyrical journey through the mind and life of a professional poet raising a young son in a thankless world.

Krebs on Security: Indian Bank Hit in $13.5M Cyberheist After FBI ATM Cashout Warning

On Sunday, Aug. 12, KrebsOnSecurity carried an exclusive: The FBI was warning banks about an imminent “ATM cashout” scheme about to unfold across the globe, thanks to a data breach at an unknown financial institution. On Aug. 14, a bank in India disclosed hackers had broken into its servers, stealing nearly $2 million via fraudulent bank transfers and $11.5 million in unauthorized ATM withdrawals from cash machines in more than two dozen countries.

The FBI put out its alert on Friday, Aug. 10. The criminals who hacked into Pune, India-based Cosmos Bank executed their two-pronged heist the following day, sending co-conspirators to fan out and withdraw a total of about $11.5 million from ATMs in 28 countries.

The FBI warned it had intelligence indicating that criminals had breached an unknown payment provider’s network with malware to access bank customer card information and exploit network access, enabling large scale theft of funds from ATMs.

Organized cybercrime gangs that coordinate these so-called “unlimited attacks” typically do so by hacking or phishing their way into a bank or payment card processor. Just prior to executing on ATM cashouts, the intruders will remove many fraud controls at the financial institution, such as maximum withdrawal amounts and any limits on the number of customer ATM transactions daily.

The perpetrators alter account balances and security measures to make an unlimited amount of money available at the time of the transactions, allowing for large amounts of cash to be quickly removed from the ATM.

My story about the FBI alert was breaking news on Sunday, but it came a day too late to be of use to the financial institutions impacted by the breach and the associated ATM cashout blitz.

But according to Indian news outlet Dailypioneer.com, there was a second attack carried out on August 13, when the Cosmos Bank hackers transferred nearly $2 million to the account of ALM Trading Limited at Hang Seng Bank in Hong Kong.

“The bank came to know about the malware attack on its debit card payment system on August 11, when it was observed that unusually repeated transactions were taking place through ATM VISA and Rupay Card for nearly two hours,” writes TN Raghunatha for the Daily Pioneer.

Cosmos Bank was quick to point out that the attackers did not access systems tied to customer accounts, and that the money taken was from the bank’s operating accounts. The 112-year-old bank blamed the attack on “a switch which is operative for the payment gateway of VISA/Rupay Debit card and not on the core banking system of the bank, the customers’ accounts and the balances are not at all affected.”

Visa issued a statement saying it was aware of the compromise affecting a client financial institution in India.

“Our systems were able to identify the issue quickly, enabling the financial institution to take appropriate action,” the company said. “Visa is working closely with the client in supporting their ongoing investigations on the matter.”

The FBI said these types of ATM cashouts are most common at smaller financial institutions that may not have sufficient resources dedicated to staying up to date with the latest security measures for handling payment card data.

“Historic compromises have included small-to-medium size financial institutions, likely due to less robust implementation of cyber security controls, budgets, or third-party vendor vulnerabilities,” the alert read. “The FBI expects the ubiquity of this activity to continue or possibly increase in the near future.”

In July 2018, KrebsOnSecurity broke the news of two separate cyber break-ins at tiny National Bank of Blacksburg in Virginia in a span of just eight months that led to ATM cashouts netting thieves more than $2.4 million. The Blacksburg bank is now suing its insurance provider for refusing to fully cover the loss.

As reported by Reuters, Cosmos Bank said in a press statement that its main banking software receives debit card payment requests via a “switching system” that was bypassed in the attack. “During the malware attack, a proxy switch was created and all the fraudulent payment approvals were passed by the proxy switching system,” the bank said.

Translation: If a financial institution is not fully encrypting its payment processing network, this can allow intruders with access to the network to divert and/or alter the response that gets sent when an ATM transaction is requested. In one such scenario, the network might say a given transaction should be declined, but thieves could still switch the signal for that ATM transaction from “declined” to “approved.”
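Krebs’s translation points at the underlying weakness: if authorization responses are neither encrypted nor authenticated, anything sitting on the network path can flip a verdict. As a purely illustrative sketch in Python (the key handling here is hypothetical; real payment switches use HSM-managed keys and standardized message formats), this shows how a message authentication code over each response would let the receiving end detect a “declined”-to-“approved” flip:

    import hmac, hashlib, os

    # Hypothetical shared key between the core banking system and the ATM
    # switch -- purely illustrative, not how production networks are keyed.
    KEY = os.urandom(32)

    def sign_response(response: bytes) -> bytes:
        """Append an HMAC-SHA256 tag so endpoints can detect tampering."""
        return response + hmac.new(KEY, response, hashlib.sha256).digest()

    def verify_response(signed: bytes) -> bool:
        """Recompute the tag; a proxy that altered the verdict cannot forge it."""
        response, tag = signed[:-32], signed[-32:]
        return hmac.compare_digest(tag, hmac.new(KEY, response, hashlib.sha256).digest())

    signed = sign_response(b"txn=1234;result=declined")
    tampered = signed.replace(b"declined", b"approved")  # what a rogue proxy switch might try
    print(verify_response(signed))    # True  -- genuine response
    print(verify_response(tampered))  # False -- alteration detected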

One final note: Several news outlets have confused the attack that hit Cosmos Bank with another ATM crime called “jackpotting,” which requires thieves to have physical access to the inside of the cash machine and the ability to install malicious software that makes the ATM spit out large chunks of cash at once. Like ATM cashouts/unlimited operations, jackpotting attacks do not directly affect customer accounts but instead drain ATMs of currency.

Update, 8:10 p.m. ET: An earlier version of this story incorrectly stated that there were only 25 ATMs used in the cashout against Cosmos. The figure was meant to represent the number of countries in which ATMs were used in the heist, not the number of ATMs, and that number is 28 at last count.

TED: A model of possibility: Tiq Milan on being the architect of his own destiny


“I saw the exact person I wanted to be in my mind and I manifested that in this world. If I can do that, I can do anything,” says Tiq Milan, left, who spoke with partner Kim Katrin Milan onstage at TEDWomen 2016. Photo: Marla Aufmuth/ TED

Tiq Milan and Kim Katrin Milan brought warmth and light to the TEDWomen stage in 2016, sharing their vision of queer love and possibility. As a Black trans activist, writer and media maker, Tiq Milan expands the cultural imaginary on what it is to live beyond the margins. It’s an interesting time to be Tiq; he’s working on a book, just completed a video project with GLAAD and Netflix — and recently became a first-time parent too. He made time to talk with us last month about his work as a trans advocate, what it means to redefine masculinity and how he lives as a model of possibility for LGBTQ+ youth.

This interview has been edited and condensed. (Learn more about TEDWomen 2018, coming up this fall.)

Can you tell me a little bit about your journey and your work? Who is Tiq Milan and how have you gotten here?

I started off working in hip hop journalism, but I was becoming increasingly masculine in my appearance and I was trying to figure out if I was trans or not. In that environment, being a masculine woman at the time was really hard. People weren’t necessarily hostile. People were awkward — and it was just humiliating. People would misgender me, then look at me weird. I decided to switch it up and work in LGBT nonprofits and with youth, which I had done before. I figured that if I was able to work in communities that would give me the space to transition in a way that felt really comfortable, I could be a role model and model of possibility for people around me. I was able to find the space where I could use media for advocacy.

I started my transition about 12 years ago, in 2007. Transitioning was an evolution; there wasn’t a point in my life where I was like, “I’m trans and I have to do this.” It really was something that evolved over time. My book, Man of My Design, is about the evolution — it’s not so much about the legal and physical transition but rather about my journey throughout the spectrum of gender, from being a tomboy to a feminine teenager to a butch lesbian to a man. Being me has definitely been a process, and I’m still in that process of becoming my best self.

I am intrigued by that title. I think it’s a really interesting concept, especially in a world where — to some people — gender is immovable, inherent and unchanging. What does designing your own masculinity mean to you?

We’re changing the idea that gender is innate and immovable, and understanding it as self-determined. As transgender people, we’re showing other people in the world — particularly cisgendered people — that we’re all having gendered experiences. But we’re also securing the space to be who we are in our genders, whether you’re trans or cis. As a person who was not born into manhood, I’ve had to curate my masculinity from a blank slate. I had to look at different examples and tropes of masculinity and decide what I wanted to engage in and what I didn’t. I had to think about how I could find a home in masculinity and not engage in what is so toxic about it. I had to intentionally not revel in the idea that being a man means being the one in control, being the one who has all the strength or power. It’s easy to fall into that place, particularly as a transman who is always assumed to be cisgender. I don’t deal with a lot of trans antagonism because people perceive me as a cisgender person, but that doesn’t mean that I’m going to take up space in the perceived privileges that come with that.

“What does it look like to be a man tethered to my spirit, not so much to what I can control?”

This is about what I call organic masculinity. Manhood — particularly cisgendered heterosexual masculinity — defines itself by what it can control, and when it loses that control, when the entitlement is taken away, men lose their fucking mind. They get violent, they get awful. What happens when I take away that entitlement, take away that control and just start to create the man that I want to be? I am masculine and I have masculine traits, but I’m also compassionate. I believe as a man I can have a range of emotions — it doesn’t have to stop at lust and anger. There’s an idea that men can’t have fear, that men can’t be complicated. I want to turn that on its head.


When Tiq Milan and Kim Katrin Milan spoke at TEDWomen 2016, they shared a vision of love and marriage that allowed each person to be who they were. Tiq, left, is a thoughtful spokesperson for a new vision of masculinity that involves choosing aspects of manhood that work for you — and leaving the negativity behind. Photo: Marla Aufmuth / TED

What drives you to do the work that you do, toward “living visibly and living out loud”?

I’m visible so other people don’t have to be. Somebody has to be visible. Someone has to be a model of possibility for younger people, and for older people who aren’t out or are still dealing with their gender. Someone has to do it, so why not me? Particularly as a Black man, it’s important to push up against these ideas that being queer and being trans is something that is white. Making sure that people see that this is an intersectional human experience. Here I am, in the flesh, being Black, being queer, being a man; I am all of those things.

I’m starting to become obsessed with this idea of becoming my best self. I listen to Oprah’s SuperSoul Sunday podcast. She’s on that guru shit. I’m trying to figure out what the formula is for this life. I was born a girl and I’m going to die a man. I saw the exact person I wanted to be in my mind and I manifested that in this world. If I can do that, I can do anything.

“What does it look like to be the architect of your own destiny? I want to use the trans experience of self-determination as a blueprint.”

I’m inspired by our journey as trans people, by us taking the reins and saying, “This is the person I want to be, and this is who I’m going to be.” I’m really interested in what that next step looks like spiritually. I want to raise my consciousness. My purpose is my wildest dreams, so what does it look like to live in that purpose? To live and breathe on another frequency is to stay in a place of gratitude, even when it’s hard, even when things aren’t going the way they should be. If I stay in a place of gratitude, then I stay understanding that what I want in this life is unequivocally possible. I think it’s about trying to let go of ego. What does it look like to be selfless? What does it mean to understand that we’re all in this together? Particularly now, with the rampant, vile racism that’s happening in the world I have to keep myself grounded in the fact that we’re all in this together. I try to operate with the understanding that the things that I say and do in this world have a ripple effect. You never know who you’re going to affect. That’s what I mean by raising my consciousness; I want to have a spiritual base, and understand myself as part of a community rather than as an individual.

As you navigate this world existing at multiple intersections of identity and marginalization, what are your core values? As you and Kim said in your talk, you exist at these intersections but you don’t live marginalized lives.

My most core value is to stay true and speak with integrity. I try to say what I mean and mean what I say. Because I hold those values, rarely do I say things that I can’t take back. I’m really conscious about thinking before I speak. What we speak is what we put out into the world, it’s what we create. What we write is what creates truth and what creates this world. I take that very seriously.

In your TED Talk, you mention having to hold up a mirror to yourself and interrogate masculinity, and that it was a process of learning and unlearning. What does that process of reflection look like to you? What does it look like to build your masculinity in a way that doesn’t subscribe to misogyny and toxic patriarchal ideals?

In my process of becoming a man, I had to understand that I swallowed a lot about the superiority of men and the inferiority of femininity. I had to do a lot of unlearning and check myself on a lot of things. What’s been helping has been being surrounded by so many amazing women in my life who would also check me too, and say, “You think you’re so smart and sophisticated, but you’re a sexist and I’m gonna show you all the ways you’re sexist.” It took a lot of hard conversations with really brilliant people to work through these things. I’m not perfect; I feel like I’m always working towards letting go of hardcore, engrained shit about gender.

This goes back to what it means to be a man who is compassionate. I have a heart. I empathize with people. I try to understand the space I take up as a man, and try to be really deliberate about creating space for other people. For instance, when I’m on panels or moderate panels with people of different genders, I make a point to make sure that the feminine people and women on the panels speak the most. I try not to take up space where there are women and feminine people who could speak to something in a better way than I can. I try to be conscious of those things.


“We can change the culture and start saying, ‘Being compassionate, empathic, emotionally complicated and available is a part of being a masculine person because it’s a part of being a human being,'” says Tiq Milan, shown here with his wife Kim Katrin Milan at TEDWomen 2016. Photo: Marla Aufmuth / TED

Research has shown that people who are conditioned to be men have been taught to emotionally repress, and that has devastating consequences, both to those men and to everyone else in the world who faces the backlash from that repression. How do we encourage boys and men to be vulnerable and emotionally communicative? How do we help men heal?

We need to teach little boys to be vulnerable, that nothing is taken away from them if they cry, nothing is taken from them if they’re scared or if they’re in pain. Nothing is taken from them if they’re in love. We have to start early. Growing up, I was a little girl. I’m not the trans person who knew I was trans when I was six. Not growing up in that man culture has had a huge influence on the man I am today. It has allowed me to be better. I don’t feel vulnerable around my own fear or falling in love. If I’m scared, I’ll tell you, I’m petrified. If I need help, I ask for it.

There are so many things that can fuck manhood up. You wear the color pink, you’re not man enough. You show some fear, you’re not man enough. If you actually love a person and show how much you love them, it’s not manly.

“Refusing emotions takes away from the complexity and wholeness of a human being.”

We can change the culture and start saying, “Being compassionate, empathic, emotionally complicated and available is a part of being a masculine person because it’s a part of being a human being,” instead of limiting masculinity to being one kind of person. That’s why there are so many men who are so oppressed, violent and awful. There are just so many cisgendered men who are just awful to everyone. How can you be happy with your humanity if everything tells you that if you don’t act in a very specific way you’ll be stripped of your masculinity, which is something that men hold dear?

There are a lot of men who deny there’s a problem, who don’t care, or who just don’t realize. These men are still a part of a misogynistic, homophobic, transphobic social fabric — how do we reach them?

I think it takes a lot of hard conversations. The thing is — people need to be willing to change. We can’t force people. I can meet people where they’re at.

“I can educate people who are ready to change, who say, ‘I’m ready to be uncomfortable, and I’m ready to have my truths complicated so I can grow.’”

If they’re not saying that, there’s no conversation to be had. There’s just so many people out there who don’t care and who don’t want to care, because once they know there’s a problem, there’s an obligation to do something about it, and they don’t want that responsibility. We say ignorance is bliss — it’s easier to pretend that nothing’s going on. You can’t tell me that it’s natural for men to be so violent towards each other, and towards women and children in their homes. I don’t think that’s natural; I think that it’s conditioned. I think a lot of men are coming to a place where they’re ready to change, and they’re becoming more disinvested in toxic masculinity. Look at Terry Crews — he’s one of the only men to come out and talk about sexual assault; yet women have been talking about sexual assault for centuries. It’s good to see a man finally say, “This has happened to me too, and I’m understanding this toxic culture that creates these systems.” We need men to understand that toxic masculinity exists in our culture, that we benefit from it, and that we created it, so we have to change it.

How are you navigating fatherhood, and what does queering family mean for you?

I’m just trying to do my best. [laughs] I’m trying to make sure my kid doesn’t fall off the bed, doesn’t choke on anything, doesn’t poison herself. A lot of fatherhood is just making sure your kid is fine. My wife is such a good partner, and we’re both parenting full-time. Your whole life changes when you become a parent. My daughter is the light of my life. My kid has a cisgendered queer mom and a transgendered dad. We want her to grow up in a world where gender isn’t a binary system, gender is a spectrum of possibilities. She’s going to know that as a truth in her life; she’s going to know that gender looks so many different ways, and that her gender can look however she chooses as she gets older. Her journey in gender is not a process of coming out, it just is. We also want her to know that families can look a whole bunch of ways. We’re being really intentional about meeting other queer parents, other queer parents of color, other gay parents, so that she has a really open idea around family and around love.

Queerness is freedom to create family and love how we want. She’s going to be raised with queerness as a culture. I think queerness is the future.



Tiq and Kim with their daughter. As Tiq says: “My kid has a cisgendered queer mom and a transgendered dad. We want her to grow up in a world where gender isn’t a binary system, gender is a spectrum of possibilities. She’s going to know that as a truth in her life.”

Find out more about TEDWomen 2018: Showing Up, coming up this fall in Palm Springs, California.

TED: Nnedi Okorafor pens a new Black Panther comic series, and more updates from TED speakers

We’ve been on break but the TED community definitely hasn’t — here are some highlights from the past few weeks.

Black Panther’s Shuri stars in her own comic. Writer Nnedi Okorafor will team up with visual artist Leonardo Romero to bring Marvel’s newest Black Panther comic series to life. Shuri will follow Wakandan princess and tech genius Shuri as she struggles to lead Wakanda after the mysterious disappearance of her brother, T’Challa, the Black Panther and Wakandan king. Okorafor will infuse her signature Afrofuturist style into the African fantasy franchise, which has also been written by Ta-Nehisi Coates and TED speaker Roxane Gay. In an interview with Bustle, Okorafor said, “[Shuri is] a character in the Marvel Universe who really sings to me.” (Watch Okorafor’s TED Talk.)

Saving media through blockchain technology. Alongside Jen Poyant, journalist Manoush Zomorodi has launched Stable Genius Productions, a podcast production company that aims to “help people navigate personal and global change” through the lens of technological advances. In an innovative move, the company has joined forces with Civil, a decentralized marketplace operating with blockchain and cryptocurrency technologies to fund digital journalism. Their first project, ZigZag, is a podcast about “changing the course of capitalism, journalism and women’s lives,” and documents the co-founders’ journey building Stable Genius Productions. In an interview with Recode, Zomorodi comments on her partnership with Civil: “The idea is that there’s this ecosystem of news sites … niche is okay; they don’t need to be massive. We’re not trying to build another New York Times on here. This is small and specific and quality.” (Watch Zomorodi’s TED Talk.)

Pope declares death penalty “inadmissible.” Pope Francis recently instituted a change in the Catholic Church’s position on capital punishment, naming it an “attack” on the “dignity of the person.” Though the Catholic Church has been vocally opposed to the death penalty for several decades, with Pope John Paul II calling the practice “cruel and unnecessary,” this move sets a clear and firm position from the Vatican that the death penalty is inexcusable. Pope Francis also urged bishops to advocate for rehabilitation and social integration for offenders, rather than punishment for the sake of deterring future crimes, and announced a goal to work toward the abolishment of the death penalty globally. (Watch the Pope’s TED Talk.)

Two nominations for the alternative Nobel Prize in literature. More great news for Nnedi Okorafor! Both Chimamanda Ngozi Adichie and Nnedi Okorafor have been longlisted for the New Academy Prize in Literature. Following the announcement that the Swedish Academy would withhold awarding a 2018 Nobel Prize in Literature due to sexual assault allegations, The New Academy was founded to ensure that an international literary prize was awarded this year. Adichie and Okorafor have been nominated along with other international literary luminaries such as Jamaica Kincaid, Neil Gaiman, Arundhati Roy and Margaret Atwood. (Watch Adichie’s TED Talk.)

A new exhibition on the strength and beauty of the Black Madonna. Artist Theaster Gates has funneled his fascination with how the Virgin Mary and Christ are represented into his new solo exhibition at Kunstmuseum Basel in Switzerland. Inspired by Maerten van Heemskerck’s Virgin and Child, Gates’ new work urges viewers to complicate their understanding of the Virgin Mary, a character who is most often rendered as white in traditional fine art. Speaking to BBC Culture, Gates says his show “weaves back and forth from religious adoration to political manifesto to self-empowerment to historical reflection.” Other aspects of the exhibition include a 2,600-strong photo collection of black women whom Gates calls “Black Madonnas…everyday women who do miraculous things,” drawn from the iconic Ebony magazine archive. (Watch Gates’ TED Talk.)


Cryptogram: New Ways to Track Internet Browsing

Interesting research on web tracking: "Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies":

Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.

In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.

The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.
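The countermeasures the paper evaluates also have a server-side angle: a site can mark its own cookies so that compliant browsers withhold them from cross-site requests. A minimal sketch using only the Python standard library (the host, port and cookie values are made up for illustration):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # SameSite=Lax asks the browser not to attach this cookie to
            # cross-site requests, closing off one third-party tracking channel.
            self.send_header("Set-Cookie",
                             "session=abc123; HttpOnly; SameSite=Lax")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"cookie set\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()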

Three news articles. BoingBoing post.

Planet Debian: Dirk Eddelbuettel: RcppArmadillo 0.9.100.5.0


A new RcppArmadillo release 0.9.100.5.0, based on the new Armadillo release 9.100.5 from earlier today, is now on CRAN and in Debian.

It once again follows our (and Conrad's) bi-monthly release schedule. Conrad started with a new 9.100.* series a few days ago. I ran reverse-depends checks and found an issue which he promptly addressed; CRAN found another which he also very promptly addressed. It remains a true pleasure to work with such experienced professionals as Conrad (with whom I finally had a beer around the recent useR! in his home town) and of course the CRAN team whose superb package repository truly is the bedrock of the R community.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 479 other packages on CRAN.

This release once again brings a number of improvements to the sparse matrix functionality. We also fixed one use case of the OpenMP compiler and linker flags which will likely hit a number of the by now 501 (!!) CRAN packages using RcppArmadillo.

Changes in RcppArmadillo version 0.9.100.5.0 (2018-08-16)

  • Upgraded to Armadillo release 9.100.5 (Armatus Ad Infinitum)

    • faster handling of symmetric/hermitian positive definite matrices by solve()

    • faster handling of inv_sympd() in compound expressions

    • added .is_symmetric()

    • added .is_hermitian()

    • expanded spsolve() to optionally allow keeping solutions of systems singular to working precision

    • new configuration options ARMA_OPTIMISE_SOLVE_BAND and ARMA_OPTIMISE_SOLVE_SYMPD

    • smarter use of the element cache in sparse matrices

  • Aligned OpenMP flags in the Makevars{,.win} files generated by RcppArmadillo.package.skeleton() so that C and C++ no longer share a single flag.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

Edited on 2018-08-17 to correct one sentence (thanks, Barry!) and adjust the RcppArmadillo to 501 (!!) as we crossed the threshold of 500 packages overnight.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than Failure: Error'd: The Illusion of Choice

"So I can keep my current language setting or switch to Pakistani English. THERE IS NO IN-BETWEEN," Robert K. writes.


"I guess robot bears aren't allowed to have the honey, or register the warranty on their trailer hitch" wrote Charles R.


"Not to be outdone by King's Cross's platform 0 [and fictional platform 9 3/4], it looks like Marylebone is jumping on the weird band-wagon," David L. writes.


Alex wrote, "If the percentage is to be believed, I'm downloading Notepad+++++++++++++++."


"Hmm, so many choices?" writes Dave A.


Ergin S. writes, "My card number starts with 36 and is 14 digits long so it might take me a little while to get there, but thanks to the dev for at least trying to make things more convenient."



Planet Debian: Sune Vuorela: Invite me to your meetings

I was invited by my boss to a dinner. He uses exchange or outlook365 or something like that. The KMail TNEF parser didn’t succeed in parsing all the info, so I’m kind of trying to fix it.

But I need test data. From Exchange or outlook or outlook365. That I can add to the repository for unit tests.

So if you can help me generate test data, please set up a meeting and invite me: publicinvites@sune.vuorela.dk

Just to repeat. The data will be made public.


Planet Debian: Steve McIntyre: 25 years...

We had a small gathering in the Haymakers pub tonight to celebrate 25 years since Ian Murdock started the Debian project.

people in the pub!

We had 3 DPLs, a few other DDs and a few more users and community members! Good to natter with people and share some history. :-) The Raspberry Pi people even chipped in for some drinks. Cheers! The celebrations will continue at the big BBQ at my place next weekend.

Planet Debian: Steinar H. Gunderson: Solskogen 2018: Tireless wireless (a retrospective)

These days, Internet access is a bit like oxygen—hard to get excited about, but living without it can be profoundly annoying. With prevalent 4G coverage and free roaming within the EU, the need for wifi in the woods has diminished somewhat, but it's still important for computers (bleep bloop!), and even more importantly, streaming.

As Solskogen's stream wants 5 Mbit/sec out of the party place (we reflect it outside, where bandwidth is less scarce), we were a bit dismayed when we arrived a week before the party for pre-check and discovered that the Internet access from the venue was capped at 5/0.5. After some frenzied digging, we discovered the cause: Since Solskogen is the only event at Flateby that uses the Internet much, they have reverted to the cheapest option except in July—and that caused us to eventually being relegated to an ADSL line card in the DSLAM, as opposed to the VDSL we've had earlier (which gave us 50/10). Even worse, with a full DSLAM, the change back would take weeks. We needed a plan B.

The obvious first choice would be 4G, but it's not a perfect match; just the stream alone would be 150+ GB (although it can be reduced or turned off when there's nothing happening on the big screen), and it's not the only thing that wants bandwidth. In other words, it would have a serious cost issue, and then there was the question to what degree it could deliver rock-stable streaming or not. There would be the option to use multiple providers and/or use the ADSL line for non-prioritized traffic (ie., participant access), but in the end, it didn't look so attractive, so we filed this as plan C and moved on to find another B.

Plan B eventually materialized in the form of the Ubiquiti Litebeam M5, a ridiculously cheap ($49 MSRP!) point-to-point link based on a somewhat tweaked Wi-Fi chipset. The idea was to get up on the roof (køb min fisk!), shoot to somewhere else with better networking and then use that link for everything. Øyafestivalen, via Daniel Husand, lent us a couple of M5s on short notice, and off we went to find trampolines on Google Maps. (For the uninitiated, trampolines = kids = Internet access.)

We considered the home of a fellow demoscener living nearby—at 1.4 km, it's well within the range of the M5 (we know of deployments running over 17 km). However, the local grocery store in Flateby, Spar, managed to come up with something even more interesting; it turns out that behind the store, more or less across the street, there's a volunteer organization called Frivillighetssentralen that was willing to lend out its 20/20 fiber Internet from Viken Fiber. Even better, after only a quick phone call, the ISP was more than willing to boost the line to 200/200 for the weekend. (The boost would happen Friday or so, so we'd run most of our testing with 20/20, but even that would be plenty.)

After a trip up on the roof of the party place, we decided approximately where to put the antenna, and put one of the M5s in the window of Frivillighetssentralen pointing roughly towards that spot. In a moment of hubris, we decided to try without going up on the roof again, just holding the other M5 out of the window, pointed it roughly in the right direction… and lo and behold, it synced at 150 Mbit/sec both ways, reporting a distance of 450 meters. (This was through another house that was in the way, ie., no clear path. Did we mention the M5s are impossibly good for the price?)

So, after mounting it on the wall, we started building the rest of the network. Having managed switches everywhere paid off; instead of having to pull a cable from the wireless to the central ARM machine (an ODROID XU4) running as a router, we could just plug it into the closest participant switch and configure the ports. I'm aware that most people would consider VLANs overkill for a 200-person network, but it really helps in flexibility when something unexpected happens—and also in terms of cable.

However, as the rigging progressed and we started getting to the point where we could run test streams, it became clear that something was wrong. The Internet link just wasn't pushing the amount of bandwidth we wanted it to; in particular, the 5 Mbit/sec stream just wouldn't go through. (In parallel, we also had some problems with access points refusing to join the wireless controller, which turned out to be due to a faulty battery that caused the clock on the WLC to revert to the year 2000, which in turn caused its certificate to be invalid. If we'd had Internet at that stage, the WLC would have had NTP and never seen the problem, but of course, we didn't because we were still busy trying to figure out the best place on the roof at the time!)

Of course, frantic debugging ensued. We looked through every setting we could find on the M5s, we moved them to a spot with clear path and pointed them properly at each other (bringing the estimated link up to 250 Mbit/sec) and upgraded their software to the latest version. Nothing helped at all.

Eventually, we started looking elsewhere in our network. We run a fairly elaborate shaping and tunneling setup; this allows us to be fully in control over relative bandwidth prioritization, both ways (the stream really gets dedicated 5 Mbit/sec, for example), but complexity can also be scary when you're trying to debug. TCP performance can also be affected by multiple factors, and then of course, there's the Internet on its way. We tried blasting UDP at the other end full speed, which the XU4 would police down to 13 Mbit/sec, accurate to two decimals, for us (20 Mbit uplink, minus 5 for the stream, minus some headroom)—but somehow, the other end only received 12. Hmm. We reduced the policer to 12 Mbit/sec, and only got 11… what the heck?

At this point, we understood we had a packet loss problem on our hands. It would either be the XU4s or the M5s; something dropped 10% or so of all packets, indiscriminately. Again, the VLANs helped; we could simply insert a laptop on the right VLAN and try to send traffic outside of the XU4. We did so, and after some confusion, we figured out it wasn't that. So what was wrong with the M5s?

It turns out the latest software version has iperf built-in; you can simply ssh to the box and run from there. We tried the one on the ISP side; it got great TCP speeds to the Internet. We tried the one on the local side; it got… still great speeds! What!?

So, after six hours of debugging, we found the issue; there was a faulty Cat5 cable between two switches in the hall, that happened to be on the path out to the inner M5. Somehow it got link at full gigabit, but it caused plenty of dropped packets—I've never seen this failure mode before, and I sincerely hope we'll never be seeing it again. We replaced the cable, and tada, Internet.

Next week, we'll talk about how the waffle irons started making only four hearts instead of five, and how we traced it to a poltergeist that we brought in a swimming pool when we moved from Ås to Flateby five years ago.

Cryptogram: Speculation Attack Against Intel's SGX

Another speculative-execution attack against Intel's SGX.

At a high level, SGX is a new feature in modern Intel CPUs which allows computers to protect users' data even if the entire system falls under the attacker's control. While it was previously believed that SGX is resilient to speculative execution attacks (such as Meltdown and Spectre), Foreshadow demonstrates how speculative execution can be exploited for reading the contents of SGX-protected memory as well as extracting the machine's private attestation key. Making things worse, due to SGX's privacy features, an attestation report cannot be linked to the identity of its signer. Thus, it only takes a single compromised SGX machine to erode trust in the entire SGX ecosystem.

News article.

The details of the Foreshadow attack are a little more complicated than those of Meltdown. In Meltdown, the attempt to perform an illegal read of kernel memory triggers the page fault mechanism (by which the processor and operating system cooperate to determine which bit of physical memory a memory access corresponds to, or they crash the program if there's no such mapping). Attempts to read SGX data from outside an enclave receive special handling by the processor: reads always return a specific value (-1), and writes are ignored completely. The special handling is called "abort page semantics" and should be enough to prevent speculative reads from being able to learn anything.

However, the Foreshadow researchers found a way to bypass the abort page semantics. The data structures used to control the mapping of virtual-memory addresses to physical addresses include a flag to say whether a piece of memory is present (loaded into RAM somewhere) or not. If memory is marked as not being present at all, the processor stops performing any further permissions checks and immediately triggers the page fault mechanism: this means that the abort page mechanics aren't used. It turns out that applications can mark memory, including enclave memory, as not being present by removing all permissions (read, write, execute) from that memory.
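To make that last step concrete: on Linux, an unprivileged process can put one of its own pages into that not-present state simply by revoking all permissions with mprotect(). A Linux-only Python sketch (this only demonstrates the page-state change the researchers describe, not the speculative read that follows):

    import ctypes, mmap

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    buf = mmap.mmap(-1, mmap.PAGESIZE)  # one anonymous, page-aligned page
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

    # PROT_NONE (0) revokes read/write/execute; the kernel then marks the
    # page not-present in the page tables -- the state Foreshadow leverages
    # to sidestep SGX's abort page semantics.
    if libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE, 0) != 0:
        raise OSError(ctypes.get_errno(), "mprotect failed")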

EDITED TO ADD: Intel has responded:

L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We've provided more information on our web site and continue to encourage everyone to keep their systems up-to-date, as it's one of the best ways to stay protected.

I think this is the "more information" they're referring to, although this is a comprehensive link to everything the company is saying about the vulnerability.

Planet Debian: Bdale Garbee: Mixed Emotions On Debian Anniversary

When I woke up this morning, my first conscious thought was that today is the 25th anniversary of a project I myself have been dedicated to for nearly 24 years, the Debian GNU/Linux distribution. I knew it was coming, but beyond recognizing the day to family and friends, I hadn't really thought a lot about what I might do to mark the occasion.

Before I even got out of bed, however, I learned of the passing of Aretha Franklin, the Queen of Soul. I suspect it would be difficult to be a caring human being, born in my country in my generation, and not feel at least some impact from her mere existence. Such a strong woman, with amazing talent, whose name comes up in the context of civil rights and women's rights beyond the incredible impact of her music. I know it's a corny thing to write, but after talking to my wife about it over coffee, Aretha really has been part of "the soundtrack of our lives". Clearly, others feel the same, because in her half-century-plus professional career, "Ms Franklin" won something like 18 Grammy awards, the Presidential Medal of Freedom, and other honors too numerous to list. She will be missed.

What's the connection, if any, between these two? In 2002, in my platform for election as Debian Project Leader, I wrote that "working on Debian is my way of expressing my most strongly held beliefs about freedom, choice, quality, and utility." Over the years, I've come to think of software freedom as an obvious and important component of our broader freedom and equality. And that idea was strongly reinforced by the excellent talk Karen Sandler and Molly de Blanc gave at Debconf18 in Taiwan recently, in which they pointed out that in our modern world where software is part of everything, everything can be thought of as a free software issue!

So how am I going to acknowledge and celebrate Debian's 25th anniversary today? By putting some of my favorite Aretha tracks on our whole house audio system built entirely using libre hardware and software, and work to find and fix at least one more bug in one of my Debian packages. Because expressing my beliefs through actions in this way is, I think, the most effective way I can personally contribute in some small way to freedom and equality in the world, and thus also the finest tribute I can pay to Debian... and to Aretha Franklin.

Krebs on Security: Hanging Up on Mobile in the Name of Security

An entrepreneur and virtual currency investor is suing AT&T for $224 million, claiming the wireless provider was negligent when it failed to prevent thieves from hijacking his mobile account and stealing millions of dollars in cryptocurrencies. Increasingly frequent, high-profile attacks like these are prompting some experts to say the surest way to safeguard one’s online accounts may be to disconnect them from the mobile providers entirely.

The claims come in a lawsuit filed this week in Los Angeles on behalf of Michael Terpin, who co-founded the first angel investor group for bitcoin enthusiasts in 2013. Terpin alleges that crooks stole almost $24 million worth of cryptocurrency after fraudulently executing a “SIM swap” on his mobile phone account at AT&T in early 2018.

A SIM card is the tiny, removable chip in a mobile device that allows it to connect to the provider’s network. Customers can legitimately request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

But SIM swaps are frequently abused by scam artists who trick mobile providers into tying a target’s service to a new SIM card and mobile phone that the attackers control. Unauthorized SIM swaps often are perpetrated by fraudsters who have already stolen or phished a target’s password, as many banks and online services rely on text messages to send users a one-time code that needs to be entered in addition to a password for online authentication.

Terpin alleges that on January 7, 2018, someone requested an unauthorized SIM swap on his AT&T account, causing his phone to go dead and sending all incoming texts and phone calls to a device the attackers controlled. Armed with that access, the intruders were able to reset credentials tied to his cryptocurrency accounts and siphon nearly $24 million worth of digital currencies.

According to Terpin, this was the second time in six months someone had hacked his AT&T number. On June 11, 2017, Terpin’s phone went dead. He soon learned his AT&T password had been changed remotely after 11 attempts in AT&T stores had failed. At the time, AT&T suggested Terpin take advantage of the company’s “extra security” feature — a customer-specified six-digit PIN which is required before any account changes can be made.

Terpin claims an investigation by AT&T into the 2018 breach found that an employee at an AT&T store in Norwich, Conn. somehow executed the SIM swap on his account without having to enter his “extra security” PIN, and that AT&T knew or should have known that employees could bypass its customer security measures.

Terpin is suing AT&T for his $24 million worth of cryptocurrencies, plus $200 million in punitive damages. A copy of his complaint is here (PDF).

AT&T declined to comment on specific claims in the lawsuit, saying only in a statement that, “We dispute these allegations and look forward to presenting our case in court.”

AN ‘IDENTITY CRISIS’?

Mobile phone companies are a major weak point in authentication because so many companies have built their entire customer-authentication procedures around sending a one-time code to the customer via SMS or automated phone call.

In some cases, thieves executing SIM swaps have already phished or otherwise stolen a target’s bank or email password. But many major social media platforms, such as Instagram, allow users to reset their passwords using nothing more than text-based (SMS) authentication, meaning thieves can hijack those accounts just by having control over the target’s mobile phone number.

Allison Nixon is director of security research at Flashpoint, a security company in New York City that has been closely tracking the murky underworld of communities that teach people how to hijack phone numbers assigned to customer accounts at all of the major mobile providers.

Nixon calls the current SIM-jacking craze “a major identity crisis” for cybersecurity on multiple levels.

“Phone numbers were never originally intended as an identity document, they were designed as a way to contact people,” Nixon said. “But because all these other companies are building in security measures, a phone number has become an identity document.”

In essence, mobile phone companies have become “critical infrastructure” for security precisely because so much is riding on who controls a given mobile number. At the same time, so little is needed to undo weak security controls put in place to prevent abuse.

“The infrastructure wasn’t designed to withstand the kind of attacks happening now,” Nixon said. “The protocols need to be changed, and there are probably laws affecting the telecom companies that need to be reviewed in light of how these companies have evolved.”

Unfortunately, with the major mobile providers so closely tied to your security, there is no way you can remove the most vulnerable chunks of this infrastructure — the mobile store employees who can be paid or otherwise bamboozled into helping these attacks succeed.

No way, that is, unless you completely disconnect your mobile phone number from any sort of SMS-based authentication you currently use, and replace it with Internet-based telephone services that do not offer “helpful” customer support — such as Google Voice.

Google Voice lets users choose a phone number that gets tied to their Google account, and any calls or messages to that number will be forwarded to your mobile number. But unlike phone numbers issued by the major mobile providers, Google Voice numbers can’t be stolen unless someone also hacks your Google password — in which case you likely have much bigger problems.

With Google Voice, there is no customer service person who can be conned over the phone into helping out. There is no retail-store employee who will sell access to your SIM information for a paltry $80 payday. In this view of security, customer service becomes a customer disservice.

Mind you, this isn’t my advice. The above statement summarizes the arguments allegedly made by one of the most accomplished SIM swap thieves in the game today. On July 12, 2018, police in California arrested Joel Ortiz, a 20-year-old college student from Boston who’s accused of using SIM swaps to steal more than $5 million in cryptocurrencies from 40 victims.

Ortiz allegedly had help from a number of unnamed accomplices who collectively targeted high-profile and wealthy people in the cryptocurrency space. In one of three brazen attacks at a bitcoin conference this year, Ortiz allegedly used his SIM swapping skills to steal more than $1.5 million from a cryptocurrency entrepreneur, including nearly $1 million the victim had crowdfunded.

A July 2018 posting from the “OG” Instagram account “0”, allegedly an account hijacked by Joel Ortiz (pictured holding an armload of Dom Perignon champagne).

Ortiz reportedly was a core member of OGUsers[dot]com, a forum that’s grown wildly popular among criminals engaging in SIM swaps to steal cryptocurrency and hijack high-value social media accounts. OG is short for “original gangster,” and it refers to a type of “street cred” for possession of social media account names that are relatively short (between one and six characters). On ogusers[dot]com, Ortiz allegedly picked the username “j”. Short usernames are considered more valuable because they confer on the account holder the appearance of an early adopter on most social networks.

Discussions on the Ogusers forum indicate Ortiz allegedly is the current occupant of perhaps the most OG username on Twitter — an account represented by the number zero “0”. The alias displayed on that Twitter profile is “j0”. He also apparently controls the Instagram account by the same number, as well as the Instagram account “t”, which lists its alias as “Joel.”

Shown below is a cached snippet from an Ogusers forum posting by “j” (allegedly Ortiz), advising people to remove their mobile phone number from all important multi-factor authentication options, and to replace it with something like Google Voice.

Ogusers SIM swapper “j” advises forum members on how not to become victims of SIM swapping.

WHAT CAN YOU DO?

All four major wireless carriers — AT&T, Sprint, T-Mobile and Verizon — let customers add security against SIM swaps and related schemes by setting a PIN that needs to be provided over the phone or in person at a store before account changes can be made. But these security features can be bypassed by incompetent or corrupt mobile store employees.

Mobile store employees who can be bought or tricked into conducting SIM swaps are known as “plugs” in the Ogusers community, and without them SIM swapping schemes become much more difficult.

Last week, KrebsOnSecurity broke the news that police in Florida had arrested a 25-year-old man who’s accused of being part of a group of at least nine individuals who routinely conducted fraudulent SIM swaps on high-value targets. Investigators in that case say they have surveillance logs that show the group discussed working directly with mobile store employees to complete the phone number heists.

In May I wrote about a 27-year-old Boston man who had his three-letter Instagram account name stolen after thieves hijacked his number at T-Mobile. Much like Mr. Terpin, the victim in that case had already taken T-Mobile’s advice and placed a PIN on his account that was supposed to prevent the transfer of his mobile number. T-Mobile ultimately acknowledged that the heist had been carried out by a rogue T-Mobile store employee.

So consider establishing a Google Voice account if you don’t already have one. In setting up a new number, Google requires you to provide a number capable of receiving text messages. Once your Google Voice number is linked to your mobile, the device at the mobile number you gave to Google should notify you instantly if anyone calls or messages the Google number (this assumes your phone has a Wi-Fi or mobile connection to the Internet).

After you’ve done that, take stock of every major account you can think of, replacing your mobile phone number with your Google Voice number everywhere it is listed in your profile.

Here’s where it gets tricky. If you’re all-in for taking the anti-SIM-hacking advice allegedly offered by Mr. Ortiz, once you’ve changed all of your multi-factor authentication options from your mobile number to your Google Voice number, you then have to remove that mobile number you supplied to Google from your Google Voice account. After that, you can still manage calls/messages to and from your Google Voice number using the Google Voice mobile app.

And notice what else Ortiz advises in the screen shot above to secure one’s Gmail and other Google accounts: Using a physical security key (where possible) to replace passwords. This post from a few weeks back explains what security keys are, how they can help harden your security posture, and how to use them. If Google’s own internal security processes count for anything, the company recently told this author that none of its 85,000 employees had been successfully phished for their work credentials since January 2017, when Google began requiring all employees to use physical security keys in place of one-time passwords sent to a mobile device.

Standard disclaimer: If the only two-factor authentication offered by a company you use is based on sending a one-time code via SMS or automated phone call, this is still better than relying on a password alone. But one-time codes generated by a mobile phone app such as Authy or Google Authenticator are more secure than SMS-based options because they are not directly vulnerable to SIM-swapping attacks.
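
To see why app-generated codes sidestep the phone network entirely, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that apps like Google Authenticator implement. The shared secret lives only on the device and the server, so controlling a victim's phone number gains an attacker nothing. This is an illustration, not a substitute for a vetted library:

    import base64, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period          # moving factor: 30-second windows
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()  # RFC 4226 HMAC-SHA1
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Made-up demo secret; a real one comes from the QR code shown at enrolment.
    print(totp("JBSWY3DPEHPK3PXP"))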

The web site twofactorauth.org breaks down online service providers by the types of secondary authentication offered (SMS, call, app-based one-time codes, security keys). Take a moment soon to review this important resource and harden your security posture wherever possible.

Worse Than FailureRepresentative Line: Tern This Statement Around and Go Home

When looking for representative lines, ternaries are almost easy mode. While there’s nothing wrong with a good ternary expression, they have a bad reputation because they can quickly drift toward “utterly unreadable”.

Or, sometimes, they can drift towards “incredibly stupid”. This anonymous submission is a pretty brazen example of the latter:

return (accounts == 1 ? 1 : accounts)

Presumably, once upon a time, this was a different expression. The code changed. Nobody thought about what was changing or why. They just changed it and moved on. Or, maybe, they did think about it, and thought, “someday this might go back to being complicated again, so I’ll leave the ternary in place”, which is arguably a worse approach.

We’ll never know which it was.

Since that was so simple, let’s look at something a little uglier, as a bonus. “WDPS” sends along a second ternary violation; this one has the added bonus of being in Objective-C. This code was written by a contractor (whitespace added to keep the article readable; the original is all on one line):

    NSMutableArray *buttonItems = [NSMutableArray array];
    buttonItems = !negSpacer && !self.buttonCog
            ? @[] : (!negSpacer && self.buttonCog 
            ? @[self.buttonCog] : (!self.buttonCog && negSpacer 
            ? @[negSpacer] : @[negSpacer,self.buttonCog]));

This is a perfect example of a ternary which simply got out of control while someone tried to play code golf. Either this block adds no items to buttonItems, or it adds a buttonCog or it adds a negSpacer, or it adds both. Which means it could more simply be written as:

    NSMutableArray *buttonItems = [NSMutableArray array];
    if (negSpacer) {
        [buttonItems addObject:negSpacer];
    }
    if (self.buttonCog) {
        [buttonItems addObject:self.buttonCog];
    }

Planet DebianBits from Debian: 25 years and counting

Debian is 25 years old by Angelo Rosa

When the late Ian Murdock announced 25 years ago in comp.os.linux.development, "the imminent completion of a brand-new Linux release, [...] the Debian Linux Release", nobody would have expected the "Debian Linux Release" to become what's nowadays known as the Debian Project, one of the largest and most influential free software projects. Its primary product is Debian, a free operating system (OS) for your computer, as well as for plenty of other systems which enhance your life. From the inner workings of your nearby airport to your car entertainment system, and from cloud servers hosting your favorite websites to the IoT devices that communicate with them, Debian can power it all.

Today, the Debian project is a large and thriving organization with countless self-organized teams comprised of volunteers. While it often looks chaotic from the outside, the project is sustained by its two main organizational documents: the Debian Social Contract, which provides a vision of improving society, and the Debian Free Software Guidelines, which provide an indication of what software is considered usable. They are supplemented by the project's Constitution which lays down the project structure, and the Code of Conduct, which sets the tone for interactions within the project.

Every day over the last 25 years, people have sent bug reports and patches, uploaded packages, updated translations, created artwork, organized events about Debian, updated the website, taught others how to use Debian, and created hundreds of derivatives.

Here's to another 25 years - and hopefully many, many more!

Planet DebianNorbert Preining: DebConf 18 – Day 3

Most of Japan is on summer vacation now; only a small village in the north resists the siege. So I am continuing my reports on DebConf. See DebConf 18 – Day 1 and DebConf 18 – Day 2 for the previous ones.

With only a few talks of interest to me scheduled in the morning, I spent that time preparing my second presentation, Status of Japanese (and CJK) typesetting (with TeX in Debian), and joined for lunch and the afternoon session.

First to attend was the Deep Learning BoF by Mo Zou. Mo reported on the problems of getting Deep Learning tools into Debian: not only the software itself, where proprietary drivers are often practically required for GPU acceleration, but also the pre-trained data sets, which often fall under a non-free license, pose problems for integration into Debian. With several deep learning practitioners around, we had a lively discussion about how to deal with all this.

Next up was Markus Koschany with Debian Java, where he gave an overview of the packaging tools for Java programs and libraries, and their interaction with Java build tools like Maven, Ant, and Gradle.

After the coffee break I gave my talk about the Status of Japanese (and CJK) typesetting (with TeX in Debian), and I must say I was quite nervous. As a foreigner with no native CJK background, speaking about the intricacies of typesetting with Kanji was a bit of a challenge. In the end I think it worked out quite well, and I got some interesting questions after the talk.

Last for today was Nathan Willis’ presentation Rethinking font packages—from the document level down. With design, layout, and fonts being close to my personal interests, this talk was one of the highlights for me. Starting from a typical user’s workflow in selecting a font set for a specific project, Nathan discussed the current situation of fonts in the Linux environment and Debian, and suggested improvements. Unfortunately, what would actually be needed is a complete rewrite of the font stack, management, and system organization, a rather big task.

After the group photo (shot by Aigars Mahinovs, who also provided several more photos) and a relaxed dinner, I went climbing with Paul Wise at a nearby gym. It was, not surprisingly, quite humid and warm in the gym, so the amount of sweat I lost was considerable, but we had some great boulders and a fun time. In addition, I found a very nice book, nice for two reasons: first, it was about one of my (and my daughter's) favorite movies, Totoro by Miyazaki Hayao; and second, it was written in Taiwanese Mandarin with a kind of Furigana to aid reading for kids, something that is very common in Japan (even in books for adults, in the case of rare readings), but that I had never seen before with Chinese. The proper name is Zhùyīn Zìmǔ 註音字母 or, more popularly, Bopomofo.

This interesting and long day finished in my hotel with a cold beer to compensate for the loss of minerals during climbing.

,

Planet Linux AustraliaDavid Rowe: How Fourier Transforms Work

The best explanation of the Discrete Fourier Transform (DFT) I have ever seen comes from Bill Cowley on his Low SNR blog.
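
For readers who want to experiment alongside the explanation, the DFT is compact enough to write directly from its definition. A deliberately naive O(N²) Python sketch:

    import cmath

    def dft(x):
        """Direct O(N^2) Discrete Fourier Transform of a sequence x."""
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))
                for k in range(N)]

    # A pure cosine at bin 3 shows up as two large coefficients (bins 3 and N-3).
    N = 16
    tone = [cmath.cos(2 * cmath.pi * 3 * n / N).real for n in range(N)]
    spectrum = dft(tone)
    print([round(abs(X), 3) for X in spectrum])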

Cory DoctorowTalking surveillance, elections, monopolies, and Facebook on the Bots and Ballots podcast

Grant Burningham interviewed me for his Bots and Ballots podcast (MP3), covering a bunch of extremely timely tech-politics issues: Facebook and the impact of commercial surveillance on democratic elections; Alex Jones, censorship and market concentration; and monopolism and the future of the internet.

Google AdsenseLighthouse Auditing for Publishers

In February, we announced the launch of SEO auditing tools in Lighthouse Chrome Extension, which is now available in Google Chrome Developer tools.

Lighthouse is an open-source, automated tool for improving the quality of web pages. Lighthouse is designed to help publishers improve the quality of their web sites by providing tools and features for their developers to run audits for performance, accessibility, progressive web apps, SEO, and more.

The SEO audit category within Lighthouse was designed to validate and reflect the SEO basics that every site should get right, and it provides detailed guidance to fix those issues. Current audits include checks for a valid rel=canonical tag, a successful HTTP response code, a page title and description, and more.

How do I use Lighthouse?

You can run Lighthouse in several ways:

Using the Lighthouse Chrome Extension:
  1. Install the Lighthouse Chrome Extension
  2. Click on the Lighthouse icon in the extension bar
  3. Select the Options menu, click “SEO”, click OK, then generate the report



Using Chrome Developer tools on Google Chrome:
  1. Open Chrome Developer Tools
  2. Go to Audits
  3. Click Perform an audit
  4. Click the “SEO” checkbox and click Run Audit.
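
Beyond the two methods above, Lighthouse also ships as a Node command-line tool, which is handy for scripted or repeated audits. Assuming you have Node.js installed (the flag names below come from recent Lighthouse CLI releases and may change between versions):

    $ npm install -g lighthouse
    $ lighthouse https://example.com/ --only-categories=seo --output=html --output-path=./seo-report.html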

The SEO audits category is not designed to replace any of your current strategies or tools, nor does it make any SEO guarantees for Google web search or other search engines. However, it covers some of the SEO best practices that are relevant for webmasters and publishers who want to ensure their website is visible in Search. These checks are part of the best practices that we provide for our publisher partners.


Posted by:
John Brown
Head of Publisher Policy Communications



Rondam RamblingsRon prognosticates: Manafort jury will hang

God, I hope I'm wrong about this.  If ever there was a slam-dunk case, the one against Paul Manafort is it.  Multiple witnesses whose testimony is supported by miles of paper trail.  So why do I think the jury will hang?  Because math, and the cult of personality that has formed around Donald Trump.  The sad fact of the matter is that there are people lining up to lick Donald Trump's anus because

Krebs on SecurityPatch Tuesday, August 2018 Edition

Adobe and Microsoft each released security updates for their software on Tuesday. Adobe plugged five security holes in its Flash Player browser plugin. Microsoft pushed 17 updates to fix at least 60 vulnerabilities in Windows and other software, including two “zero-day” flaws that attackers were already exploiting before Microsoft issued patches to fix them.

According to security firm Ivanti, the first of the two zero-day flaws (CVE-2018-8373) is a critical flaw in Internet Explorer that attackers could use to foist malware on IE users who browse to hacked or booby-trapped sites. The other zero-day is a bug (CVE-2018-8414) in the Windows 10 shell that could allow an attacker to run code of his choice.

Microsoft also patched more variants of the Meltdown/Spectre memory vulnerabilities, collectively dubbed “Foreshadow” by a team of researchers who discovered and reported the Intel-based flaws. For more information about how Foreshadow works, check out their academic paper (PDF), and/or the video below. Microsoft’s analysis is here.

One nifty little bug fixed in this patch batch is CVE-2018-8345. It addresses a problem in the way Windows handles shortcut files; ending in the “.lnk” extension, shortcut files are Windows components that link (hence the “lnk” extension) easy-to-recognize icons to specific executable programs, and are typically placed on the user’s Desktop or Start Menu.

That description of a shortcut file was taken verbatim from the first widely read report on what would later be dubbed the Stuxnet worm, which also employed an exploit for a weakness in the way Windows handled shortcut (.lnk) files. According to security firm Qualys, this patch should be prioritized for both workstations and servers, as the user does not need to click the file for it to be exploited. “Simply viewing a malicious LNK file can execute code as the logged-in user,” Qualys’ Jimmy Graham wrote.

Not infrequently, Redmond ships updates that end up causing stability issues for some users, and it doesn’t hurt to wait a day or two to see whether any major problems are reported with new updates before installing them. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

It’s a good idea to get in the habit of backing up your computer before applying monthly updates from Microsoft. Windows has some built-in tools that can help recover from bad patches, but restoring the system to a backup image taken just before installing updates is often much less hassle, and it offers added peace of mind while you’re sitting there praying for the machine to reboot successfully after patching.

Adobe’s Flash update brings the program to v. 30.0.0.154 for Windows, macOS, Chrome and Linux. Most readers here know how I feel about Flash, which is a major security liability and a frequent target of browser-based attacks. The updates from Microsoft include these Flash fixes for IE, and Google Chrome has already pushed an update to address these five Flash flaws (although a browser restart may be needed).

But seriously, if you don’t have a specific need for Flash, just disable it already. Chrome is set to ask before playing Flash objects, but disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Adobe also released security updates for its PDF Reader and Acrobat products.

As always, please leave a note in the comments below if you experience any problems installing any of these updates.

CryptogramHacking Police Bodycams

Surprising no one, the security of police bodycams is terrible.

Mitchell even realized that because he can remotely access device storage on models like the Fire Cam OnCall, an attacker could potentially plant malware on some of the cameras. Then, when the camera connects to a PC for syncing, it could deliver all sorts of malicious code: a Windows exploit that could ultimately allow an attacker to gain remote access to the police network, ransomware to spread across the network and lock everything down, a worm that infiltrates the department's evidence servers and deletes everything, or even cryptojacking software to mine cryptocurrency using police computing resources. Even a body camera with no Wi-Fi connection, like the CeeSc, can be compromised if a hacker gets physical access. "You know not to trust thumb drives, but these things have the same ability," Mitchell says.

BoingBoing post.

Worse Than FailureCodeSOD: Isn't There a Vaccine For MUMPS?

Alex F is suffering from a disease. No, it’s not disfiguring, it’s not fatal. It’s something much worse than that.

It’s MUMPS.

MUMPS is a little bit infamous. MUMPS is its own WTF.

Alex is a support tech, which in their organization means that they sometimes write up tickets, or for simple problems even fix the code themselves. For this issue, Alex wrote up a ticket, explaining that the user was submitting a background job to run a report, but instead got an error.

Alex sent it to the developer, and the developer replied with a one line code fix:

 i $$zRunAsBkgUser(desc_$H,"runReportBkg^KHUTILLOCMAP",$na(%ZeData)) d
 . w !,"Search has been started in the background."
 e  w !,"Search failed to start in the background."

Alex tested it, and… it didn’t work. So, fully aware of the risks they were taking, Alex dug into the code, starting with the global function $$zRunAsBkgUser.

Before I post any more code, I am legally required to offer a content warning: the rest of this article is going to be full of MUMPS code. This is not for the faint of heart, and TDWTF accepts no responsibility for your mental health if you continue. Don’t read the rest of this article if you have eaten any solid food in the past twenty minutes. If you experience a rash, this may be a sign of a life threatening condition, and you should seek immediate treatment. Do not consume alcohol while reading this article. Save that for after, you’ll need it.

 ;---------
  ; NAME:         zRunAsBkgUser
  ; SCOPE:        PUBLIC
  ; DESCRIPTION:  Run the specified tag as the correct OS-level background user. The process will always start in the system default time zone.
  ; PARAMETERS:
  ;  %uJobID (I,REQ)      - Free text string uniquely identifying the request
  ;                         If null, the tag will be used instead but -- as this is not guaranteed unique -- this ID should be considered required
  ;  %uBkgTag (I,REQ)     - The tag to run
  ;  %uVarList (I,OPT)    - Variables to be passed from the current process' symbol table
  ;  %uJobParams (I,OPT)  - An array of additional parameters to be passed to %ZdUJOB
  ;                         Should be passed with the names of the parameters in %ZdUJOB, e.g. arr("%ZeDIR")="MGR"
  ;                         Currently supports only: %ZeDIR, %ZeNODE, %ZeBkOv
  ;  %uError (O,OPT)      - Error message in case of failure
  ;  %uForceBkg (I,OPT)   - If true, will force the request to be submitted to %ZeUMON
  ;  %uVerifyCond (I,OPT) - If null, this tag will return immediately after submitting the request
  ;                         If non-null, should contain code that will be evaluated to determine the success or failure of the request
  ;                         Will be executed as s @("result=("_%uVerifyCond_")")
  ;  %uVerifyTmo (I,OPT)  - Length of time, in seconds, to try to verify the success of the request
  ;                         Defaults to 1 second
  ; RETURNS:      If %uVerifyCond is not set: 1 if it's acceptable to run, 0 otherwise
  ;               If %uVerifyCond is set: 1 if the condition is verified after the specified timeout, 0 otherwise
zRunAsBkgUser(%uJobID,%uBkgTag,%uVarList,%uJobParams,%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) q $$RunBkgJob^%ZeUMON($$zCurrRou(),%uJobID,%uBkgTag,%uVarList,.%uJobParams,.%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) ;;#eof#  ;;#inline#

Thank the gods for comments, I guess. Alex’s eyes locked upon the sixth parameter- %uForceBkg. That seems a bit odd, for a function which is supposed to be submitting a background job. The zRunAsBkgUser function is otherwise quite short- it’s a wrapper around RunBkgJob.

Let’s just look at the comments:

 ;---------
  ; NAME:         RunBkgJob
  ; SCOPE:        INTERNAL
  ; DESCRIPTION:  Submit request to monitor daemon to run the specified tag as a background process
  ;               Used to ensure the correct OS-level user in the child process
  ;               Will fork off from the current process if the correct OS-level user is already specified,
  ;               unless the %uForceBkg flag is set. It will always start in the system default time zone.
  ; KEYWORDS:     run,background,job,submit,%ZeUMON,correct,user
  ; CALLED BY:    ($$)zRunAsBkgUser
  ; PARAMETERS:
  ;  %uOrigRou (I,REQ)    - The routine submitting the request
  ;  %uJobID (I,REQ)      - Free text string uniquely identifying the request
  ;                         If null, the tag will be used instead but -- as this is not guaranteed unique -- this ID should be considered required
  ;  %uBkgTag (I,REQ)     - The tag to run
  ;  %uVarList (I,OPT)    - Variables to be passed from the current process' symbol table
  ;                         If "", pass nothing; if 1, pass everything
  ;  %uJobParams (I,OPT)  - An array of additional parameters to be passed to %ZdUJOB
  ;                         Should be passed with the names of the parameters in %ZdUJOB, e.g. arr("%ZeDIR")="MGR"
  ;                         Currently supports only: %ZeDIR, %ZeNODE, %ZeBkOv
  ;  %uError (O,OPT)      - Error message in case of failure
  ;  %uForceBkg (I,OPT)   - If true, will force the request to be submitted to %ZeUMON
  ;  %uVerifyCond (I,OPT) - If null, this tag will return immediately after submitting the request
  ;                         If non-null, should contain code that will be evaluated to determine the success or failure of the request
  ;                         Will be executed as s @("result=("_%uVerifyCond_")")
  ;  %uVerifyTmo (I,OPT)  - Length of time, in seconds, to try to verify the success of the request
  ;                         Defaults to 1 second
  ; RETURNS:      If %uVerifyCond is not set: 1 if it's acceptable to run, 0 otherwise
  ;               If %uVerifyCond is set: 1 if the condition is verified after the specified timeout, 0 otherwise

Once again, the suspicious uForceBkg parameter is getting passed in. The comments claim that this only controls the timezone, which implies either the parameter is horribly misnamed, or the comments are wrong. Or, possibly, both. Wait, no, it's talking about ZeUMON. My brain wants it to be timezones. MUMPS is getting to me. Since the zRunAsBkgUser has different comments, I suspect it’s both, but it’s MUMPS. I have no idea what could happen. Let’s look at the code.

  RunBkgJob(%uOrigRou,%uJobID,%uBkgTag,%uVarList,%uJobParams,%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) ;
  n %uSecCount,%uIsStarted,%uCondCode,%uVarCnt,%uVar,%uRet,%uTempFeat
  k %uError
  i %uBkgTag="" s %uError="Need to pass a tag" q 0
  i '$$validrou(%uBkgTag) s %uError="Tag does not exist" q 0
  ;if we're already the right user, just fork off directly
  i '%uForceBkg,$$zValidBkgOSUser() d  q %uRet
  . d inheritOff^%ZdDEBUG()
  . s %uRet=$$^%ZdUJOB(%uBkgTag,"",%uVarList,%uJobParams("%ZeDIR"),%uJobParams("%ZeNODE"),$$zTZSystem(1),"","","","",%uJobParams("%ZeOvBk"))
  . d inheritOn^%ZdDEBUG()
  ;
  s:%uJobID="" %uJobID=%uBkgTag   ;this *should* be uniquely identifying, though it might not be...
  s ^%ZeUMON("START","J",%uJobID,"TAG")=%uBkgTag
  s ^%ZeUMON("START","J",%uJobID,"CALLER")=%uOrigRou
  i $$zFeatureCanUseTempFeatGlobal() s %uTempFeat=$$zFeatureSerializeTempGlo() s:%uTempFeat'="" ^%ZeUMON("START","J",%uJobID,"FEAT")=%uTempFeat
  m:$D(%uJobParams) ^%ZeUMON("START","J",%uJobID,"PARAMS")=%uJobParams
  i %uVarList]"" d
  . s ^%ZeUMON("START","J",%uJobID,"VARS")=%uVarList
  . d inheritOff^%ZdDEBUG()
  . i %uVarList=1 d %zSavVbl($name(^%ZeUMON("START","J",%uJobID,"VARS"))) i 1   ;Save whole symbol table if %uVarList is 1
  . e  f %uVarCnt=1:1:$L(%uVarList,",") s %uVar=$p(%uVarList,",",%uVarCnt) m:%uVar]"" ^%ZeUMON("START","J",%uJobID,"VARS",%uVar)=@%uVar
  . d inheritOn^%ZdDEBUG()
  s ^%ZeUMON("START","G",%uJobID)=""   ;avoid race conditions by setting pointer only after the data is complete
  d log("BKG","Request to launch tag "_%uBkgTag_" from "_%uOrigRou)
  q:%uVerifyCond="" 1   ;don't hang around if there's no need
  d
  . s %uError="Verification tag crashed"
  . d SetTrap^%ZeERRTRAP("","","Error verifying launch of background tag "_%uBkgTag)
  . s:%uVerifyTmo<1 %uVerifyTmo=1
  . s %uIsStarted=0
  . s %uCondCode="%uIsStarted=("_%uVerifyCond_")"
  . f %uSecCount=1:1:%uVerifyTmo h 1 s @%uCondCode q:%uIsStarted
  . d ClearTrap^%ZeERRTRAP
  . k %uError
  i %uError="",'%uIsStarted s %uError="Could not verify that job started successfully"
  q %uIsStarted
  ;
  q  ;;#eor#

Well, there you have it, the bug is so simple to spot, I’ll leave it as an exercise to the readers.

I’m kidding. The smoking gun, as Alex calls it, is the block:

  i '%uForceBkg,$$zValidBkgOSUser() d  q %uRet
  . d inheritOff^%ZdDEBUG()
  . s %uRet=$$^%ZdUJOB(%uBkgTag,"",%uVarList,%uJobParams("%ZeDIR"),%uJobParams("%ZeNODE"),$$zTZSystem(1),"","","","",%uJobParams("%ZeOvBk"))
  . d inheritOn^%ZdDEBUG()
  ;

This is what passes for an “if” statement in MUMPS. Note the leading apostrophe: it is MUMPS for “not”. So only when the %uForceBkg parameter is not set and the zValidBkgOSUser function returns true does the code fork the job off directly and return; otherwise the request is queued for the monitor daemon. In Alex's case the job never actually started, and thus we get errors when we check on whether or not it's done.
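
For readers who don't speak MUMPS, here is the same branch rendered as runnable Python, with stub functions standing in for the MUMPS routines (this is an illustration of the logic, not the production code):

    def valid_bkg_os_user():
        return True              # stand-in for $$zValidBkgOSUser()

    def fork_directly(tag):
        return "forked " + tag   # stand-in for $$^%ZdUJOB(...)

    def queue_for_daemon(tag):
        return "queued " + tag   # stand-in for setting the ^%ZeUMON globals

    def run_bkg_job(tag, force_bkg=False):
        # The MUMPS apostrophe is negation: the direct-fork branch fires only
        # when the caller did NOT set the force flag AND is a valid OS user.
        if not force_bkg and valid_bkg_os_user():
            return fork_directly(tag)    # never reaches the daemon
        return queue_for_daemon(tag)

    print(run_bkg_job("runReportBkg^KHUTILLOCMAP"))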

So the underlying bug, such as it is, is a confusing parameter with an unreasonable default. This is not all that much of a WTF, I admit, but I really, really wanted you all to see this much MUMPS code in a single sitting, and I wanted to remind you: there are people who work with this every day.


Planet Linux Australiasthbrx - a POWER technical blog: Improving performance of Phoronix benchmarks on POWER9

Recently Phoronix ran a range of benchmarks comparing the performance of our POWER9 processor against the Intel Xeon and AMD EPYC processors.

We did well in the Stockfish, LLVM Compilation, Zstd compression, and Tinymembench benchmarks. A few of my colleagues did a bit of investigating into some of the benchmarks where we didn't perform quite so well.

LBM / Parboil

The Parboil benchmarks are a collection of programs from various scientific and commercial fields that are useful for examining the performance and development of different architectures and tools. In this round of benchmarks Phoronix used the lbm benchmark: a fluid dynamics simulation using the Lattice-Boltzmann Method.

lbm is an iterative algorithm - the problem is broken down into discrete time steps, and at each time step a bunch of calculations are done to simulate the change in the system. Each time step relies on the results of the previous one.

The benchmark uses OpenMP to parallelise the workload, spreading the calculations done in each time step across many CPUs. The number of calculations scales with the resolution of the simulation.

Unfortunately, the resolution (and therefore the work done in each time step) is too small for modern CPUs with large numbers of SMT (simultaneous multi-threading) threads. OpenMP doesn't have enough work to parallelise and the system stays relatively idle. This means the benchmark scales relatively poorly, and is definitely not making use of the large POWER9 system.

Also, this benchmark is compiled without any optimisation. Recompiling with -O3 improves the results 3.2x on POWER9.

x264 Video Encoding

x264 is a library that encodes videos into the H.264/MPEG-4 format. x264 encoding requires a lot of integer kernels doing operations on image elements. The math and vectorisation optimisations are quite complex, so Nick only had a quick look at the basics. The systems and environments (e.g. gcc version 8.1 for Skylake, 8.0 for POWER9) are not completely apples to apples so for now patterns are more important than the absolute results. Interestingly the output video files between architectures are not the same, particularly with different asm routines and compiler options used, which makes it difficult to verify the correctness of any changes.

All tests were run single threaded to avoid any SMT effects.

With the default upstream build of x264, Skylake is significantly faster than POWER9 on this benchmark (Skylake: 9.20 fps, POWER9: 3.39 fps). POWER9 contains some vectorised routines, so an initial suspicion is that Skylake's larger vector size may be responsible for its higher throughput.

Let's test our vector size suspicion by restricting Skylake to SSE4.2 code (with 128 bit vectors, the same width as POWER9). This hardly slows down the x86 CPU at all (Skylake: 8.37 fps, POWER9: 3.39 fps), which indicates it's not taking much advantage of the larger vectors.

So the next guess would be that x86 just has more and better optimized versions of costly functions (in the version of x264 that Phoronix used there are only six powerpc specific files compared with 21 x86 specific files). Without the time or expertise to dig into the complex task of writing vector code, we'll see if the compiler can help, and turn on autovectorisation (x264 compiles with -fno-tree-vectorize by default, which disables auto vectorization). Looking at a perf profile of the benchmark we can see that one costly function, quant_4x4x4, is not autovectorised. With a small change to the code, gcc does vectorise it, giving a slight speedup with the output file checksum unchanged (Skylake: 9.20 fps, POWER9: 3.83 fps).

We got a small improvement with the compiler, but it looks like we may have gains left on the table with our vector code. If you're interested in looking into this, we do have some active bounties for x264 (lu-zero/x264).

Test                                                 Skylake     POWER9
Original - AVX256                                    9.20 fps    3.39 fps
Original - SSE4.2                                    8.37 fps    3.39 fps
Autovectorisation enabled, quant_4x4x4 vectorised    9.20 fps    3.83 fps

Nick also investigated running this benchmark with SMT enabled and across multiple cores, and it looks like the code is not scalable enough to feed 176 threads on a 44 core system. Disabling SMT in parallel runs actually helped, but there was still idle time. That may be another thing to look at, although it may not be such a problem for smaller systems.

Primesieve

Primesieve is a program and C/C++ library that generates all the prime numbers below a given number. It uses an optimised Sieve of Eratosthenes implementation.

The algorithm uses the L1 cache size as the sieve size for the core loop. This is an issue when we are running in SMT mode (i.e. more than one thread per core), as all threads on a core share the same L1 cache and so will constantly be invalidating each other's cache lines. As you can see in the table below, running the benchmark in single-threaded mode is 30% faster than in SMT4 mode!

This means that in SMT4 mode the workload is about 4x too large for the L1 cache. A better sieve size would be the L1 cache size divided by the number of threads per core. Anton posted a pull request to update the sieve size.

It is interesting that the best overall performance on POWER9 is with the patch applied and in SMT2 mode:

SMT level    baseline    patched
1            14.728s     14.899s
2            15.362s     14.040s
4            19.489s     17.458s

LAME

Despite its name, a recursive acronym for "LAME Ain't an MP3 Encoder", LAME is indeed an MP3 encoder.

Due to configure options not being parsed correctly, this benchmark is built without any optimisation regardless of architecture. We see a massive speedup by turning optimisations on, and a further 6-8% speedup by enabling USE_FAST_LOG (which is already enabled for Intel).

LAME                                            Duration    Speedup
Default                                         82.1s       n/a
With optimisation flags                         16.3s       5.0x
With optimisation flags and USE_FAST_LOG set    15.6s       5.3x

For more detail see Joel's writeup.

FLAC

FLAC is an alternative encoding format to MP3. But unlike MP3 encoding it is lossless! The benchmark here was encoding audio files into the FLAC format.

The key missing piece in this workload was vector support for POWER8 and POWER9. Anton and Amitay submitted this patch series that adds POWER-specific vector instructions. It also fixes the configure options to correctly detect the POWER8 and POWER9 platforms. With this patch series we see about a 3x improvement in this benchmark.

OpenSSL

OpenSSL is among other things a cryptographic library. The Phoronix benchmark measures the number of RSA 4096 signs per second:

$ openssl speed -multi $(nproc) rsa4096

Phoronix used OpenSSL-1.1.0f, which for this benchmark is (on POWER9) barely half as fast as mainline OpenSSL. Mainline OpenSSL has some powerpc multiplication and squaring assembly code which seems to be responsible for most of this speedup.

To see this for yourself, add these four powerpc specific commits on top of OpenSSL-1.1.0f:

  1. perlasm/ppc-xlate.pl: recognize .type directive
  2. bn/asm/ppc-mont.pl: prepare for extension
  3. bn/asm/ppc-mont.pl: add optimized multiplication and squaring subroutines
  4. ppccap.c: engage new multiplication and squaring subroutines

The following results were from a dual 16-core POWER9:

Version of OpenSSL       Signs/s    Speedup
1.1.0f                   1921       n/a
1.1.0f with 4 patches    3353       1.74x
1.1.1-pre1               3383       1.76x

SciKit-Learn

SciKit-Learn is a bunch of python tools for data mining and analysis (aka machine learning).

Joel noticed that the benchmark spent 92% of its time in libblas. Libblas is a very basic BLAS (basic linear algebra subprograms) library that python-numpy uses to do vector and matrix operations. The default libblas on Ubuntu is compiled with only -O2. Compiling with -Ofast and using alternative BLAS libraries that have powerpc optimisations (such as libatlas or libopenblas), we see big improvements in this benchmark:

BLAS used         Duration    Speedup
libblas -O2       64.2s       n/a
libblas -Ofast    36.1s       1.8x
libatlas          8.3s        7.7x
libopenblas       4.2s        15.3x
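
On a Debian-based system you can usually try an alternative BLAS without rebuilding numpy at all, since the implementation is selected through the alternatives system. A sketch (the package and alternative names vary between releases; newer systems use an arch-qualified name such as libblas.so.3-x86_64-linux-gnu):

    $ sudo apt install libopenblas-base
    $ sudo update-alternatives --config libblas.so.3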

You can read more details about this here.

Blender

Blender is a 3D graphics suite that supports image rendering, animation, simulation and game creation. On the surface it appears that Blender 2.79b (the distro package version that Phoronix used via system/blender-1.0.2) failed to use more than 15 threads, even when "-t 128" was added to the Blender command line.

It turns out that even though this benchmark was supposed to be run on CPUs only (you can choose to render on CPUs or GPUs), the GPU file was always being used. The GPU file is configured with a very large tile size (256x256), which is fine for GPUs but not great for CPUs. The image size (1280x720) to tile size ratio limits the number of jobs created, and therefore the number of threads used.

To obtain a realistic CPU measurement with more than 15 threads you can force the use of the CPU file by overwriting the GPU file with the CPU one:

$ cp
~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_cpu.blend
~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_gpu.blend

As you can see in the image below, now all of the cores are being utilised!

Blender with the CPU blend file

Fortunately this has already been fixed in pts/blender-1.1.1. Thanks to the report by Daniel it has also been fixed in system/blender-1.1.0.

Pinning the pts/blender-1.0.2 Pabellon Barcelona CPU-Only test to a single 22-core POWER9 chip (sudo ppc64_cpu --cores-on=22) and to two POWER9 chips (sudo ppc64_cpu --cores-on=44) shows a huge speedup:

Benchmark                                      Duration (deviation over 3 runs)    Speedup
Baseline (GPU blend file)                      1509.97s (0.30%)                    n/a
Single 22-core POWER9 chip (CPU blend file)    458.64s (0.19%)                     3.29x
Two 22-core POWER9 chips (CPU blend file)      241.33s (0.25%)                     6.25x

tl;dr

Some of the benchmarks where we don't perform as well as Intel are ones where the benchmark has inline assembly for x86 but uses generic compiler-generated assembly for POWER9. We could probably benefit from some more powerpc-optimised functions.

We also found a couple of things that should result in better performance for all three architectures, not just POWER.

A summary of the performance improvements we found:

Benchmark       Approximate Improvement
Parboil         3x
x264            1.1x
Primesieve      1.1x
LAME            5x
FLAC            3x
OpenSSL         2x
SciKit-Learn    7-15x
Blender         3x

There is obviously room for more improvements, especially with the Primesieve and x264 benchmarks, but it would be interesting to see a re-run of the Phoronix benchmarks with these changes.

Thanks to Anton, Daniel, Joel and Nick for the analysis of the above benchmarks.

,

Rondam RamblingsRepublican tells brazen lie while "apologizing" for telling brazen lies

Lying has apparently become endemic in the Republican party.  Florida congressional candidate Melissa Howard dropped out of the race today after being caught lying about her academic credentials: A day after saying she planned to continue running for a state House seat despite revelations that she lied about having a degree from Miami University and went to great lengths to deceive people,

CryptogramDetecting Phishing Sites with Machine Learning

Really interesting article:

A trained eye (or even a not-so-trained one) can discern when something phishy is going on with a domain or subdomain name. There are search tools, such as Censys.io, that allow humans to specifically search through the massive pile of certificate log entries for sites that spoof certain brands or functions common to identity-processing sites. But it's not something humans can do in real time very well -- which is where machine learning steps in.

StreamingPhish and the other tools apply a set of rules against the names within certificate log entries. In StreamingPhish's case, these rules are the result of guided learning -- a corpus of known good and bad domain names is processed and turned into a "classifier," which (based on my anecdotal experience) can then fairly reliably identify potentially evil websites.
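
The "guided learning" described above maps onto a few lines of scikit-learn. A toy sketch with a tiny hand-made corpus (the domains and labels are invented for illustration; real tools train on large labelled feeds of certificate-log names):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny invented corpus: 1 = phishy, 0 = benign.
    domains = ["paypal-secure-login.example.xyz", "appleid-verify.example.top",
               "secure-update-account.example.info",
               "mail.google.com", "github.com", "debian.org"]
    labels = [1, 1, 1, 0, 0, 0]

    # Character n-grams pick up brand-name fragments and suspicious affixes.
    classifier = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(),
    )
    classifier.fit(domains, labels)
    print(classifier.predict(["login-paypal-account.example.site"]))  # likely [1]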

CryptogramGoogle Tracks its Users Even if They Opt-Out of Tracking

Google is tracking you, even if you turn off tracking:

Google says that will prevent the company from remembering where you've been. Google's support page on the subject states: "You can turn off Location History at any time. With Location History off, the places you go are no longer stored."

That isn't true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like "chocolate chip cookies," or "kids science kits," pinpoint your precise latitude and longitude -- accurate to the square foot -- and save it to your Google account.

On the one hand, this isn't surprising to technologists. Lots of applications use location data. On the other hand, it's very surprising -- and counterintuitive -- to everyone else. And that's why this is a problem.

I don't think we should pick on Google too much, though. Google is a symptom of the bigger problem: surveillance capitalism in general. As long as surveillance is the business model of the Internet, things like this are inevitable.

BoingBoing story.

Good commentary.

Planet DebianEnrico Zini: DebConf 18

This is a quick recap of what happened during my DebConf 18.

24 July:

  • after buying a new laptop, I had not yet set up a Debian build system on it. I finally did, with cowbuilder. It was straightforward to set up and works quite fast.
  • shopping for electronics. Among other things, I bought myself a new USB-C power supply that I can use for both laptop and phone, so now I can have one power supply at home and one always in my backpack for traveling. I also bought a new pair of headphones with a microphone, since I cannot wear in-ear headphones, and I only had the in-ear ones that came with my phone.
  • while trying out the new headphones, I unexpectedly started playing loud music in the hacklab. I then debugged audio pin mapping on my new laptop and reported #904437
  • fixed debtags.debian.org nightly maintenance scripts, which have been mailing me errors for a while.

25 July:

26 July:

  • I needed to debug a wreport FTBFS on a porterbox, and since the procedure to set up a build system on a porterbox was long and boring, I wrote debug-on-porterbox
  • Fixed a wreport FTBFS and replaced it with another FTBFS, which I still haven't managed to track down.

27 July:

  • worked on Multiple People talk notes, alone and with Rhonda
  • informal FD/DAM brainstorming with jmw
  • local antiharassment coordination with Tassia and Taowa
  • talked to ansgar about how to have debtags tags reach ftp-master automatically, without my manual intervention
  • watched a wonderful lunar eclipse

28 July:

  • implemented automatic export of debtags data for ftp-master
  • local anti-harassment team work

29 July:

30 July:

31 July:

  • Implemented F-Droid antifeatures as privacy:: Debtags tags

01 August:

  • Day trip and barbecue

02 August:

03 August:

  • Multiple People talk
  • Debugged booting my laptop with UEFI with Steve, and found out that HP firmware updates for it can only be installed using Windows. I am really disappointed with HP for this, given that it's a rather high-end business laptop.

04 August:

Worse Than FailureA Shell Game

When the big banks and brokerages on Wall Street first got the idea that UNIX systems could replace mainframes, one of them decided to take the plunge - Big Bang style. They had hundreds of programmers cranking out as much of the mainframe functionality as they could. Copy-paste was all the rage; anything to save time. It could be fixed later.

Nyst 1878 - Cerastoderma parkinsoni R-klep

Senior management decreed that the plan was to get all the software as ready as it could be by the deadline, then turn off and remove the mainframe terminals on Friday night, swap in the pre-configured UNIX boxes over the weekend, and turn it all on for Monday morning. Everyone was to be there 24 hours a day from Friday forward, for as long as it took. Air mattresses, munchies, etc. were brought in for when people would inevitably need to crash.

While the first few hours were rough, the plan worked. Come Monday, all hands were in place on the production floor and whatever didn't work caused a flurry of activity to get the issue fixed in very short order. All bureaucracy was abandoned in favor of: everyone has root in order to do whatever it takes on-the-fly, no approvals required. Business was conducted. There was a huge sigh of relief.

Then began the inevitable onslaught of add this and that for all the features that couldn't be implemented by the hard cutoff. This went on for 3-4 years until the software was relatively complete, but in desperate need of a full rewrite. The tech people reminded management of their warning about all the shortcuts to save time up front, and that it was time to pay the bill.

To their credit, management gave them the time and money to do it. Unfortunately, copy-paste was still ingrained in the culture, so nine different trading systems had about 90% of their code identical to their peers, but all in separate repositories, each with slightly different modification histories to the core code.

It was about this time that I joined one of the teams. The first thing they had me do was learn how to verify that all 87 (yes, eighty-seven) of the nightly batch jobs had completed correctly. For this task, both the team manager and lead dev worked non-stop from 6AM to 10AM - every single day - to verify the results of the nightly jobs. I made a list of all of the jobs to check, and what to verify for each job. It took me from 6AM to 3PM, which was kind of pointless as the markets close at 4PM.

After doing it for one day, I said no way and asked them to continue doing it so as to give me time to automate it. They graciously agreed.

It took a while, but I wound up with a rude-n-crude 5K LOC ksh script that reduced the task to checking a text file for a list of OK/NG statuses. But this still didn't help if something had failed. I kept scripting more sub-checks for each task to implement what to do on failure (look up which document had the name of the job to run, figure out what arguments to pass, get the status of the fix-it job, notify someone on the upstream system if it still failed, and so on). Either way, the result was recorded.

In the end, the ksh script had grown to more than 15K LOC, but it reduced the entire 8+ hour task to checking a 20-digit (bit-mask) page once a day. Some jobs failed every day for known reasons, but that was OK. As long as the bit-mask on the page matched the expected value, you could ignore it; you only had to get involved if an automated repair of something was attempted but failed (this happened only about once every six months).
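
The bit-mask trick deserves a quick illustration: each job owns one bit position, and the page is only interesting when it differs from the expected pattern. A toy reconstruction in Python, with hypothetical job names (the original was a 15K-LOC ksh script; this is just the core idea):

    # Each nightly job owns one bit; 1 = completed OK.
    JOBS = ["positions_load", "trades_recon", "eod_pricing", "gl_feed"]
    # gl_feed (bit 3) fails every night for a known reason, so expect a 0 there.
    EXPECTED_MASK = 0b0111

    def status_mask(results):
        """Pack per-job booleans into one integer bit-mask."""
        mask = 0
        for bit, job in enumerate(JOBS):
            if results[job]:
                mask |= 1 << bit
        return mask

    tonight = {"positions_load": True, "trades_recon": True,
               "eod_pricing": False, "gl_feed": False}
    mask = status_mask(tonight)
    if mask != EXPECTED_MASK:   # eod_pricing unexpectedly failed
        print("wake someone up: got {0:04b}, expected {1:04b}".format(mask, EXPECTED_MASK))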

In retrospect, there were better ways to write that shell script, but it worked. Not only did all that nightly batch job validation and repair logic get encoded in the script (with lots of documentation of the what/how/why variety), but having rid ourselves of the need to deal with this daily mess freed up one man-day per day, and more importantly, allowed my boss to sleep later.

One day, my boss was bragging to the managers of the other trading systems (that were 90% copy-pasted) that he no longer had to deal with this issue. Since they were still dealing with the daily batch-check, they wanted my script. Helping peer teams was considered a Good Thing™, so we gave them the script and showed them how it worked, along with a detailed list of things to change so that it would work with the specifics of their individual systems.

About a week later, the support people on my team (including my boss) started getting nine different status pages in the morning - within seconds of each other - all with different status codes.

It turns out the other teams only modified the program and data file paths for the monitored batch jobs that were relevant to their teams, but didn't bother to delete the sections for the batch jobs they didn't need, and didn't update the notification pager list with info for their own teams. Not only did we get the pages for all of them, but this happened on the one day in six months that something in our system really broke and required manual intervention. Unfortunately, all of the shell scripts attempted to auto correct our failed job. Without. Any. Synchronization. By the time we cleared the confusion of the multiple pages, figured out the status of our own system, realized something required manual fixing and started to fix the mess created by the multiple parallel repair attempts, there wasn't enough time to get it running before the start of business. The financial users were not amused that they couldn't conduct business for several hours.

Once everyone changed the notification lists and deleted all the sections that didn't apply to their specific systems, the problems ceased and those batch-check scripts ran daily until the systems they monitored were finally retired.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #172

Here’s what happened in the Reproducible Builds effort between Sunday August 5 and Saturday August 11 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

There were a handful of updates to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

jenkins.debian.net development

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianMinkush Jain: Google Summer of Code 2018 Final Report

This is the summary of my work done during Google Summer of Code 2018 with Debian.

Project Title: Wizard/GUI helping new interns/students get started

Final Work Product: https://wiki.debian.org/MinkushJain/WorkProduct

Mentor: Daniel Pocock

Codebase: gsoc-2018-experiments

CardBook debian/sid

What is Google Summer of Code?

Google Summer of Code is a global program focused on introducing students to open source software development. Students work on a 3-month programming project with an open source organization during their break from university.

As you can probably guess, competition for selection is high, as thousands of students apply every year. The program offers students real-world experience building software, along with collaboration with the community and other student developers.

Project Overview

This project aims at developing tools and packages which simplify the setup process for newcomers to the open source community. It consists of a GUI/wizard with integrated scripts to set up various communication and development tools such as PGP and SSH keys, DNS, IRC, XMPP and mail filters, along with Jekyll blog creation, mailing list subscription, a project planner, searching for developer meet-ups, a source code scanner and much more! The project is free and open source, hosted on Salsa (Debian's GitLab instance).

I created various scripts and packages that automate tasks and help a user get started: managing contacts and emails, subscribing to developers' lists, getting started with GitHub, IRC and more.

Mailing Lists Subscription

I made a script that fully automates subscribing to various Debian mailing lists. It also automates the confirmation reply, to complete the procedure for the user.

It works for the ten Debian mailing lists most important to a newcomer, like ‘debian-outreach’, ‘debian-announce’, ‘debian-news’, ‘debian-devel-announce’ and more.

I also spent time refactoring the code with my mentors to make it work as a stand-alone script by adding utility functions and fixing the syntax.

A video demo of the script has also been added to my blog.

It takes the user's email address and the automated reply code received from @lists.debian.org, and subscribes them to the mailing list. The script uses the requests library to send the form data to the website and submit it to the server.
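As an illustration, here is a minimal sketch of the subscription step (a sketch only: the endpoint URL and form field names are assumptions for illustration, and the confirmation-reply automation is omitted):

    import requests

    def subscribe(list_name, email):
        """Request subscription to a Debian mailing list (hypothetical form fields)."""
        url = "https://lists.debian.org/cgi-bin/subscribe.pl"  # assumed endpoint
        data = {"subscribe": list_name, "user_email": email}
        response = requests.post(url, data=data)
        response.raise_for_status()
        # The list server then mails a unique confirmation code to the user;
        # the real script automates replying with that code as well.
        return response.status_code

    subscribe("debian-outreach", "newcomer@example.com")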

For the application task, I also created a basic GUI for the program using PyQt.

Libraries used:

  • Requests
  • Smtp
  • PyQt
  • MIME handlers

This is a working demo of the script. The user can enter any Debian mailing list to subscribe to it. They have to enter the unique code received by email to confirm their subscription:


Thunderbird Setup

This task involved writing a program to simplify the Thunderbird setup procedure for a new user.

I made a script which kills the Thunderbird process if it is running and then edits the ‘prefs.js’ configuration file to modify the software's settings.

The program overrides the existing settings by creating a ‘user.js’ file with custom settings, which take effect as soon as Thunderbird is re-opened.

I also added the option to apply the script to all profiles or to a specific one, at the user's choice. (A small sketch of the approach follows the libraries list below.)

Features:

  • Examines system processes to find whether Thunderbird is running in the background, and kills it.

  • Dynamically searches the user's system to find the configuration file's path.

  • The user can choose which profile the script is allowed to change.

  • Modifies the default settings to accomplish the following:

    • The user's vCard is automatically appended to mails and posts.
    • Top-posting is configured by default.
    • The reply heading format is changed.
    • Plain-text mode is made the default for new mails.
    • No sounds or alerts for incoming mails.

and many more…

Libraries used:

  • Psutil
  • Os
  • Subprocess
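A minimal sketch of the idea (the profile path and preference names here are illustrative assumptions, not necessarily the exact ones the script touches):

    import os
    import psutil

    def kill_thunderbird():
        """Terminate any running Thunderbird process before editing settings."""
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] and "thunderbird" in proc.info["name"].lower():
                proc.terminate()

    def write_user_js(profile_dir):
        """Create user.js with custom settings; applied when Thunderbird restarts."""
        prefs = {
            "mail.identity.default.attach_vcard": "true",   # assumed pref names
            "mail.identity.default.compose_html": "false",
        }
        with open(os.path.join(profile_dir, "user.js"), "w") as f:
            for key, value in prefs.items():
                f.write('user_pref("%s", %s);\n' % (key, value))

    kill_thunderbird()
    write_user_js(os.path.expanduser("~/.thunderbird/abcd1234.default"))  # example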


Source Code Scanner

I created a program to analyse the user's project directory and find which programming languages they are proficient in.

The script helps them realise which languages and skills they favour, by finding the percentage of each language present.

It scans for all the known file extensions (like .py, .java, .cpp), which are stored in a separate file, and examines the matching files to display the total number of lines and the percentage of each language present in the directory.

The script uses the pygount library to scan all folders for source code files. Pygount builds on the pygments syntax highlighting package to analyse the source code, so it can examine any language pygments knows about.

Libraries used:

  • os (operating system interfaces)
  • pygount

I added a Python script with all common file extensions included in it.

The script can be executed easily; the user just enters the directory's path.
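For illustration, a simplified stdlib-only sketch of the same idea (the real script uses pygount for robust language detection; the extension map here is a small assumed subset):

    import os
    from collections import Counter

    EXTENSIONS = {".py": "Python", ".java": "Java", ".cpp": "C++"}  # assumed subset

    def scan(directory):
        """Count lines of source code per language under a directory."""
        counts = Counter()
        for root, _dirs, files in os.walk(directory):
            for name in files:
                lang = EXTENSIONS.get(os.path.splitext(name)[1])
                if lang:
                    with open(os.path.join(root, name), errors="ignore") as f:
                        counts[lang] += sum(1 for _ in f)
        total = sum(counts.values()) or 1
        for lang, lines in counts.most_common():
            print("%s: %d lines (%.1f%%)" % (lang, lines, 100.0 * lines / total))

    scan(os.path.expanduser("~/projects"))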

Research:

  • Explored Python's glob library for iterating through the home directory.

  • Evaluated GitHub's Linguist library for analysing code.

  • Looked at the pygments library for detecting languages via its syntax highlighters.

This is a working demo of the script. The user can enter their project’s directory and the script will analyse it to publish the result:


CardBook Debian Package

For managing contacts/calendar for a user, Thunderbird extensions need to be installed and set up.

I created a Debian package for CardBook, a Thunderbird add-on for managing contacts using the vCard and CardDAV standards.

I have written a blog post here explaining the entire development process, as well as the tools used to make it comply with Debian standards.

Creating a Debian package from scratch involved a lot of learning from resources and wiki pages.

I created the package using debhelper commands, and included the CardBook extension inside the package. I modified the packaging files such as ‘control’, ‘rules’, ‘copyright’ and the changelog for its installation.

I also created a Local Debian Repository for testing the package.

I created four updated versions of the package, which are present in the changelog.

I used the Lintian tool to check for bugs, packaging errors and policy violations, and spent some time removing all the Lintian errors in version 1.3.0 of the package.

I got help from mentors on IRC (#debian-mentors) and the mailing lists during the packaging process. Finally, I added mozilla-devscripts to build the package as a xul-ext extension package.

I updated the ‘watch’ file to automatically pull tags from upstream.
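For reference, a debian/watch file for a project hosted on GitHub typically looks something like this (an illustrative sketch; the exact pattern used in the CardBook package may differ):

    version=4
    https://github.com/CardBook/CardBook/tags .*/v?(\d\S+)\.tar\.gz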

I mailed Carsten Schoenert, Debian Maintainer of Thunderbird and Lightning package, who helped me a lot along with my mentor, Daniel during the packaging process.

CardBook Debian Package: https://salsa.debian.org/minkush-guest/CardBook/tree/debian-package

Blog: http://minkush.me/cardbook-debian-package/

I created and set up my public and private GPG keys using GnuPG and added them to mentors.debian.net.

I signed the package files including ‘.changes’, ‘.dsc’, ‘.deb’ using ‘dpkg-sig’ and ‘debsign’ and then verified them with my keys.

Finally, the package was uploaded to mentors.debian.net using dput over HTTPS.

Link: https://mentors.debian.net/package/cardbook

This is a video demo showing the package's installation inside Thunderbird. As can be clearly observed, CardBook was successfully installed as a Thunderbird add-on:


IRC Setup

One of the most challenging tasks for a new contributor is getting started with Internet Relay Chat (IRC) and its setup.

I made an IRC Python bot to take care of the initial setup required. The script uses socket programming to connect to the Freenode server and send data; a bare-bones sketch appears after the feature and library lists below.

Features:

  • It registers a new nickname for the user on the Freenode server by sending the user's credentials to NickServ. An email is received on successful registration of the nickname.

  • The script checks whether the entered email is invalid or the nickname chosen by the user is already registered on the server. If this is the case, the script disconnects from the server and prompts the user to re-enter the details.

  • If the nick registration is successful, it identifies the nickname on the server by messaging NickServ before joining any channel.

  • It displays the list of all available ‘#debian’ channels live on the server with at least 30 members.

  • The script connects to and joins any IRC channel entered by the user and displays the live chat occurring on the channel.

  • Implements the ping-pong protocol to keep the connection alive. This makes sure the connection is not lost during the operation, and simulates human interaction with the server by responding to its pings.

  • It continuously prints all data received from the server after decoding it with UTF-8, and closes the connection after the operation is done.

Libraries:

Socket library
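Here is a bare-bones sketch of the connection and ping-pong handling (the nickname and channel are placeholders; the real bot layers registration and NickServ identification on top of this):

    import socket

    SERVER, PORT = "chat.freenode.net", 6667
    NICK, CHANNEL = "newbie-nick", "#debian-welcome"  # placeholders

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((SERVER, PORT))
    sock.send(("NICK %s\r\n" % NICK).encode("utf-8"))
    sock.send(("USER %s 0 * :New Contributor\r\n" % NICK).encode("utf-8"))
    sock.send(("JOIN %s\r\n" % CHANNEL).encode("utf-8"))

    while True:
        data = sock.recv(2048).decode("utf-8", errors="ignore")
        if not data:
            break  # server closed the connection
        if data.startswith("PING"):
            # Answer server pings so the connection stays alive
            sock.send(data.replace("PING", "PONG").encode("utf-8"))
        print(data, end="")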

This is a working video demo for the IRC script.

To display one of its features, I entered my already-registered nickname (Mjain) to test it. The script analyses the server's response and asks the user to enter a different one.


Salsa and Github Registration

I created scripts using Selenium Web Driver to automate new account creation on Salsa and Github.

This task provides a quick start for a user who wants to contribute to open source, by registering accounts on the code-hosting platforms used for version control.

I learned Selenium automation techniques in Python to accomplish this. Selenium controls the browser through a web driver using automated scripts. (Tested with geckodriver for Firefox.)

I used pytest to write test scripts for both programs, which verify whether the account was successfully created.
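A minimal sketch of the registration automation with the Selenium 3 API (the sign-up URL and element IDs are assumptions for illustration):

    from selenium import webdriver

    driver = webdriver.Firefox()  # requires geckodriver on PATH
    driver.get("https://salsa.debian.org/users/sign_up")  # assumed URL

    # Fill in the registration form (element IDs assumed for illustration)
    driver.find_element_by_id("new_user_name").send_keys("New Contributor")
    driver.find_element_by_id("new_user_username").send_keys("newcontributor")
    driver.find_element_by_id("new_user_email").send_keys("newcomer@example.com")
    driver.find_element_by_id("new_user_password").send_keys("a-strong-password")
    driver.find_element_by_name("commit").click()  # submit the form

    driver.quit()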

Libraries used:

  • Selenium Web driver
  • Geckodriver
  • Pytest

Extract Mail Data

The aim for this task was to extract data from user’s email for ease of managing contacts.

I created a script to analyse the user's email and extract all phone numbers present in it. The program fetches all mails from the server using IMAP and decodes them using UTF-8 to obtain a readable format.

Features:

  • Easy login on mail server through user’s credentials

  • Obtains the date and time for all mails

  • Option to iterate through all or unseen mails

  • Extracts the Sender, Receiver, Subject and body of the email.

It scans the body of each message to look for phone numbers using python-phonenumbers, and stores all of them along with their details in a text file on the local system.

Features:

  • Converts all the telephone numbers to the standard international E.164 format (adding the country code if not already present)

  • Uses the geocoder module to find the location of the phone numbers

  • Also extracts the carrier name and timezone details for all the phone numbers.

  • Saves all this data along with sender’s details in a file and also displays it on the terminal.

Libraries used:

  • Imaplib
  • IMAPClient
  • Python port of libphonenumber (phonenumbers)

The original libphonenumber is a popular Google library for parsing, formatting, and validating international phone numbers.
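A short sketch of the number-extraction step (the message body here is a stand-in; the real script feeds in bodies fetched over IMAP):

    import phonenumbers
    from phonenumbers import carrier, geocoder, timezone

    body = "Call me at +91 98765 43210 or (541) 754-3010."  # stand-in body

    # PhoneNumberMatcher finds numbers in free text; "US" is the fallback region
    for match in phonenumbers.PhoneNumberMatcher(body, "US"):
        number = match.number
        print(phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.E164))
        print(geocoder.description_for_number(number, "en"))  # location
        print(carrier.name_for_number(number, "en"))          # carrier name
        print(timezone.time_zones_for_number(number))         # timezone details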

I also researched the Telify Mozilla plugin, which implements a similar algorithm to offer click-to-save phone numbers.

This is a working video demo for the script:


HTTP Post Salsa Registration

I have created another script to automate the process of new account creation on Salsa using HTTP Post.

The script uses the requests library to send HTTP requests to the website and submit the form data.

I used the Beautiful Soup 4 library to parse and navigate the HTML of the page and extract the tokens and form fields within it.

The script checks for password mismatch and duplicate usernames and creates a new account instantly.
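A condensed sketch of the token-then-post flow (the URL and field names are illustrative assumptions):

    import requests
    from bs4 import BeautifulSoup

    session = requests.Session()
    signup_url = "https://salsa.debian.org/users/sign_up"  # assumed URL

    # Fetch the page and pull the CSRF token out of the sign-up form
    page = session.get(signup_url)
    soup = BeautifulSoup(page.text, "html.parser")
    token = soup.find("input", {"name": "authenticity_token"})["value"]

    # Post the registration form with the token and the user's details
    form = {
        "authenticity_token": token,
        "new_user[username]": "newcontributor",  # assumed field names
        "new_user[email]": "newcomer@example.com",
        "new_user[password]": "a-strong-password",
    }
    response = session.post(signup_url, data=form)
    print(response.status_code)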

Libraries used:

  • Requests
  • Beautiful Soup

This is a working demo for the script. An email is received from Salsa which confirms that new account has been created:


Mail Filters Setup

One of the problems faced by a developer is filtering the hundreds of unnecessary incoming mails from mailing lists, promotional websites and spam.

The email client does the job to a certain extent, but many emails are still left that need to be sorted into categories.

For this purpose, I created a script which examines the user's mailbox and filters mails into Gmail labels and folders, creating them as needed. The script uses IMAP to fetch mails from the server.
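A rough sketch of the Gmail-over-IMAP part (in Gmail's IMAP interface labels appear as folders, so copying a message to a folder applies the label; the credentials, label and search criterion are placeholders):

    import imaplib

    mail = imaplib.IMAP4_SSL("imap.gmail.com")
    mail.login("user@gmail.com", "app-password")  # placeholders

    label = "MailingLists"  # placeholder label name
    mail.create(label)      # creates the label if it does not exist
    mail.select("INBOX")

    # Find mails from a Debian list and apply the label by copying them to it
    typ, data = mail.search(None, '(FROM "lists.debian.org")')
    for msg_id in data[0].split():
        mail.copy(msg_id, label)

    mail.logout()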

Libraries used:

Acknowledgment:

I would like to thank Debian and Google for giving me this opportunity to work on this project.

I am grateful to my mentors Daniel Pocock, Urvika Gola, Jaminy Prabharan and Sanyam Khurana for their constant help throughout GSoC.

Finally, this journey wouldn’t have been possible without my friends and family who supported me.

Special Mention

I would like to thank Carsten Schönert and Andrey Rahmatullin for their help with Debian packaging.

Planet DebianAthos Ribeiro: Google Summer of Code 2018 Final Report: Automatic Builds with Clang using Open Build Service

Project Overview

Debian package builds with Clang were performed from time to time through massive rebuilds of the Debian archive on AWS. The results of these builds are published on clang.debian.net. This summer project aimed to automate Debian archive Clang rebuilds by substituting the current builds on clang.debian.net with Open Build Service (OBS) builds.

Our final product consists of a repository with salt states to deploy an OBS instance which triggers Clang builds of Debian Unstable packages as soon as they get uploaded by their maintainers.

An instance of our clang builder is hosted at irill8.siege.inria.fr and the Clang builds triggered so far can be seen here.

My Google Summer of Code project can be seen at summerofcode.withgoogle.com/projects/#6144149196111872.

My contributions

The major contribution for the summer is our running OBS instance at irill8.siege.inria.fr.

Salt states to deploy our OBS instance

We created a series of Salt states to deploy and configure our OBS instance. The states for local deployment and development are available at github.com/athos-ribeiro/salt-obs.

Commits

The commits above were condensed and submitted as a pull request to the project mentor's GitHub account, with production deployment configurations.

OBS Source Service to make gcc/clang binary substitutions

To build deb packages with Clang, we substitute the GCC binaries with Clang binaries in the builder's chroot at build time. To do that, we use the OBS Source Services feature, which requires a package (performing the desired task) to be available to the target OBS project.

Our obs-service-clang-build package is hosted at github.com/athos-ribeiro/obs-service-clang-build.

Commits

Monitor Debian Unstable archive and trigger clang builds for newly uploaded packages

We also use two scripts to monitor the debian-devel-changes mailing list, watching for new package uploads in Debian Unstable, and trigger Clang builds in our OBS instance whenever a new upload is accepted.

Our scripts to monitor the debian-devel-changes mailing list and trigger Clang builds in our OBS instance are available at github.com/athos-ribeiro/obs-trigger-sid-builds.
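For a flavour of the monitoring side, here is a hypothetical sketch that extracts package names and versions from "Accepted" mail subjects (the subject format is approximated and the trigger step is stubbed out; the real scripts live in the repository above):

    import re

    # Approximate subject format of debian-devel-changes acceptance mails
    ACCEPTED = re.compile(r"^Accepted (?P<pkg>\S+) (?P<ver>\S+) \(source")

    def trigger_clang_build(package, version):
        # Stub: the real script updates the package in the OBS instance
        # so that a Clang build gets scheduled for it.
        print("would trigger Clang build of %s %s" % (package, version))

    def handle_subject(subject):
        """Parse an 'Accepted' subject line and trigger a build if it matches."""
        match = ACCEPTED.match(subject)
        if match:
            trigger_clang_build(match.group("pkg"), match.group("ver"))

    handle_subject("Accepted hello 2.10-1 (source amd64) into unstable")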

Commits

OBS documentation contributions

During the summer, most of my work was to read OBS documentation and code to understand how to trigger Debian Unstable builds in OBS and how to perform customized Clang builds (replacing GCC).

My contributions

Pending PRs

We want to change the Clang build links at tracker.debian.org/pkg/firefox. To do so, we must change Debian distro-tracker to point to our OBS instance. As of the time this post was written, we have an open PR in distro-tracker to change the URLs:

Reports written through the summer

Adding new workers to the OBS instance

To configure new workers for our current OBS instance, hosted at irill8.siege.inria.fr, just set up new Salt minions and provision them with obs-common and obs-worker, from github.com/opencollab/llvm-slave-salt. This is done in the top.sls file.

Future work

  • We want to extend our OBS instance with more projects to provide Upstream LLVM packages to Debian and derived distributions.
  • More automation is needed in our Salt states. For instance, we may want to automate SSL certificate generation using Let's Encrypt.
  • During the summer, several issues were detected in the Debian Stable OBS packages. We want to work more closely with the OBS packaging to help improve the OBS packages and OBS itself.

Google Summer of Code experience

Working with Debian during the summer was an interesting experience. I did not expect to have as many problems as I did (see reports) with the OBS packages. These problems turned into hours of debugging and reading Perl code in order to understand how OBS processes communicate and trigger new builds. I also learned more about Debian packaging, Salt and Vagrant. I expect to keep working with OBS and help maintain the service we deployed during the summer. There's still a lot of room for improvement, and it is easy to see how the project benefits FLOSS communities.

,

Planet DebianIustin Pop: Eiger Bike Challenge 2018

So… another “fun” ride. Probably the most fun ever, both subjectively and in terms of Strava’s relative effort level. And that despite it being the “short” version of the race (55km/2’500m ascent vs. 88km/3’900m).

It all started very nicely. About five weeks ago, I started the Sufferfest climbing plan, and together with some extra cross-training, I was going very strong, feeling great and seeing my fitness increasing constantly. I was quite looking forward to my first time at this race.

Then, two weeks ago, after already having registered, family gets sick, then I get sick—just a cold, but with a persistent cough that has not gone away even after two weeks. The week I got sick my training plan went haywire (it was supposed to be the last heavy week), and the week of the race itself I was only half-recovered so I only did a couple of workouts.

Two days before the race, I was still undecided whether to actually attempt it or not. The weather was quite cold, which was on the good side (I was even a bit worried about it being too cold in the morning), and then it turned for the better.

So, what have I got to lose? I went to the start of the 55km version. As to length, this is on the easy side. But it does have 2’500m of ascent, which is a lot for me for such a short ride. I’ve done this amount of ascent before—2017 BerGiBike, long route—but that was “spread” over 88km of distance and in lower temperatures and with quite a few kilograms fewer (on my body, not on the bike), and still killed me.

The race starts. Ten minutes in, 100m gained; by 18 minutes, 200m already. By 1h45m I’m done with the first 1’000m of ascent, and at this time I’m still on the bike. But I was also near the end of my endurance reserve, and even worse, at around 1h30m in, the sun was finally high enough in the sky to start shining on me, and the temperature went from 7-8°C to 16°. I pass Grosse Scheidegg on the bike; a somewhat flat 5k segment follows to the First station, but this flat segment still has around 300m of ascent, with one portion that VeloViewer says is around 18% grade. After pedalling one minute at this grade, I give up, get off the bike, and start pushing.

And once this mental barrier of “I can bike the whole race” is gone, it’s so much easier to think “yeah, this looks steep, let’s get off and push” even though one might still have enough reserves to bike uphill. In the end, what’s the difference between biking at 5km/h and pushing at 4.0-4.3km/h? Not much, and heart rate data confirms it.

So, after biking all the way through the first 1’100m of ascent, the remaining 1’400m were probably half-biking, half-pushing. And that might still be a bit generous. Temperatures went all the way up to 32.9°C at one point, but went back down a bit and stabilised at around 25°. Min/Avg/Max overall were 7°/19°/33° - this is not my ideal weather, for sure.

Other fun things:

  • Average (virtual) power over time as computed by VeloViewer went from 258W at 30m, to 230W at the end of first hour, 207W at 2h, 164W at 4h, and all the way down to 148W at the end of the race.
  • The brakes faded enough on the first long descent that in one corner I had to half-jump off the bike and stop it against the hill; I was much more careful later to avoid this, which led to very slow going down gravel roads (25-30km/h, not more); I need to fix this ASAP.
  • By the last third of the race, I was tired enough that even taking a 2-minute break didn’t relax my heart rate, and I was only able to push the bike uphill at ~3km/h.
  • The steepest part of the race (a couple of hundred meters at 22-24%) was also in the hottest temperature (33°).
  • At one point, there was a sign saying “Warning, ahead 2.5km uphill with 300m altitude gain”; I read that as “slowly pushing the bike for 2.5km”, and that was true enough.
  • In the last third of the race, there was a person going around the same speed as me (in the sense that we were passing each other again and again, neither gaining significantly). But he was biking uphill! Not much faster than my push, but still biking! Hat off, sir.
  • My coughing bothered me a lot (painful coughing) in the first two thirds, by the end of the race it was gone (now it’s back, just much better than before the race).
  • I met someone while pushing and we went together for close to two hours (on and off the bike), I think; lots of interesting conversation, especially as pushing is very monotonous…
  • At the end of the race (really, after the finish point), I was “ok, now what?” My brain was very confused that no more pushing was needed, especially as the race finishes with 77m of ascent.
  • BerGiBike 2017 (which I didn’t write about, apparently) had exactly the same recorded ascent to the meter: 2’506, which is a fun coincidence ☺

The route itself is not the nicest one I’ve done at a race. Or rather, the views are spectacular, but a lot of the descent is on gravel or even asphalt roads, and the single-trails are rare and on the short side. The difficult descents are also tricky enough that I skipped a large part of them, which in many other races didn’t happen to me. On the plus side, they had very good placements of the official photographers, I think one of the best setups I’ve seen (as to the number of spots and their positioning).

And final fun thing: I was not the last! Neither overall nor in my age category:

  • In my age category, I placed 129 out of 131 finishers, and there were another six DNFs.
  • Overall (55km men), I was 391 out of 396 finishers, plus 17 DNFs.

So, given my expectations for the race—I only wanted to finish—this was a good result. Grand questions:

  • How much did my sickness affect me? Especially as lung capacity is involved, and this being at between 1’000 and 2’000m altitude, when I do my training below 500m?
  • How much more could I have pushed the bike? E.g. could I push all above 10%, but bike the rest? What’s the strategy when some short bits are 20%? Or when there’s a long one at ~12%?
  • If I had an actual power meter, could I do much better by staying below my FTP, or below 90% FTP at all times? I tried to be careful with heart rate, but coupled with temperature increase this didn’t go as well as I thought it would.
  • My average overall speed was 8.5km/h. First in 55km category was 19.72km/h. In my age category and non-licensed, first one was 18.5km/h. How, as in how much training/how much willpower does that take?
  • Even better, in the 88km and my age category, first placed speed was 16.87km/h, finishing this longer route more than one hour faster than me. Fun! But how?

In any case, at my current weight/fitness level, I know what my next race profile will be. I know I can bike more than one thousand meters of altitude in a single long (10km) uphill, so that’s what I should aim for. Or not?

Closing with one picture to show how the views on the route are:

Yeah, that’s me ☺

And with that, looking forward to the next trial, whatever it will be!

CryptogramIdentifying Programmers by their Coding Style

Fascinating research de-anonymizing code -- from either source code or compiled code:

Rachel Greenstadt, an associate professor of computer science at Drexel University, and Aylin Caliskan, Greenstadt's former PhD student and now an assistant professor at George Washington University, have found that code, like other forms of stylistic expression, are not anonymous. At the DefCon hacking conference Friday, the pair will present a number of studies they've conducted using machine learning techniques to de-anonymize the authors of code samples. Their work could be useful in a plagiarism dispute, for instance, but it also has privacy implications, especially for the thousands of developers who contribute open source code to the world.

TED3 reasons why women are still fighting for equal healthcare

“A common theme here is that the data exists, but it has been ignored or beaten back,” says science journalist Linda Villarosa. At the Aspen Ideas Festival, TEDWomen co-host Pat Mitchell (at right) led a conversation about challenges around getting fair and equitable health care for women. The panel included, from left, journalist Villarosa, Dr. Deborah Rhodes of the Mayo Clinic and Dr. Paula Johnson, president of Wellesley College.

TEDWomen co-host Pat Mitchell writes: Once again this summer, I had the privilege of moderating sessions during the Spotlight Health Aspen Institute Ideas Festival. There were some surprises in a session titled “Breakthroughs and Challenges in Women’s Health” with importance for all women, and I want to share some of that information with you.

With two esteemed physicians — Dr. Deborah Rhodes of the Mayo Clinic and Dr. Paula Johnson, who was chief of women’s health at Brigham and Women’s Hospital at Harvard University and is now the president of Wellesley College — as well as science journalist Linda Villarosa, we began our conversation with the important reminder that improving health care depends in large part on research.

We don’t know what we don’t look for

Despite legislation passed over 20 years ago, women, and especially women of color, are still being left out of clinical trials, and the health outcomes for women, and especially women of color, reflect this disparity.

Dr. Paula Johnson talked about the disparity between the resources for research on men’s diseases and those specific to women in her 2014 TEDWomen talk — and if you haven’t seen it, I highly encourage you to watch it.

Dr. Johnson explained that every cell in the human body has a sex, which means that men and women are different right down to the cellular level! As a result, there are often significant differences in the ways in which men and women respond to disease or treatment. It’s very important in research trials to differentiate between female and male subjects so we can tease out the differences.

Although we have made progress since the 1990s with more women included in late-phase trials, we’re still not there in phases 1 and 2. This is important, she says, because how do we get to phase 3? Phases 1 and 2. In these early stages of research, female cells and female animals still aren’t being used. Why? She says one commonly cited reason is that female animals have an estrous cycle. Well, guess what, she says, so do we. What are we missing by not including female cells earlier in the research process?

The power and persistence of the status quo

One of the barriers to progress that perhaps we don’t think about as much is the problem with well-entrenched power paradigms, profit motives and institutional priorities. What happens when a doctor sees a need and solves it but the status quo is preferred over progress?

Dr. Deborah Rhodes — whose talk above from TEDWomen 2010 is a must — spoke about the challenges to her attempts to introduce a new diagnostic protocol for women with dense breasts. Dr. Rhodes (who in spirit of full disclosure is my personal physician at the Mayo Clinic) has observed in her practice that about 50% of women were potentially missing a cancer diagnosis because traditional mammograms fail in detecting breast cancer in women with dense breasts. Mammograms depend on visually seeing cancer cells, and in dense breasts this is more difficult because of the surrounding dense tissue.

As Dr. Rhodes says, in looking at entrenched paradigms in medicine, there is perhaps nothing more entrenched than the mammogram. She worked with physicists to come up with a new way to look for tumors using a tracer that has been safely used in cardiovascular medicine for decades that distinguishes tumor cells regardless of density. Her technique is FDA-approved, but you’ve probably never heard of it. It speaks to, as she says, “the extraordinary difficulties of upsetting something that is so precious to us as a mammogram.”

Earlier detection using her new test in women with dense breasts whose cancer may be hidden in a mammogram could spare women from toxic treatment (less advanced cancer means less chemotherapy) and, in more advanced cases, save lives. Despite that, her research has been very, very difficult to fund. She says it’s a daily uphill battle to overturn the status quo. Doctors have invested years and years in learning how to read these difficult mammograms, and billions of dollars are invested in the current technology, resulting in a resistance to new technology and new ways of testing.

Intersection of gender, race and ethnicity

One of the more shocking statistics that Dr. Rhodes highlighted in her presentation was the disparity in outcomes for white women and women of color with breast cancer. White women are more likely to get breast cancer than black women, but black women are more likely to die of breast cancer. She says that is true particularly for black women under the age of 50 who are diagnosed with breast cancer. They are 77% more likely to die than white women. She points out that despite abundant data that informs us of these disparities, solutions are not being pursued.

The same tragic disparity between what we need to know for better health outcomes and what is fully understood as life and death factors was the subject of Linda Villarosa’s recent cover story in the New York Times Magazine titled “Why America’s Black Mothers and Babies Are in a Life-or-Death Crisis.” In her incredible article, she noted that black women are three to four times as likely to die in childbirth as white women, and black babies die at a rate that is twice that of white babies.

Linda was one of the first journalists to put the maternal and infant mortality rates together and to investigate why black women and babies are so at risk. As she put it: “A common theme here is that the data exists, but it has been ignored or beaten back.” And further, she connected a condition identified earlier by Dr. Arline Geronimus called “weathering” that is a significant factor in the health outcomes for women of color. “The effect of racism — living with the near daily episodes of microaggressions and discriminations — have an adverse impact on health that needs to be better understood and incorporated into diagnosis and treatment for women of color.”

Shocking, yes, and deeply disturbing, but the good news is that the more we know about our own health and what impacts it adversely, the more proactive we can be as health consumers.

As one of the panelists noted to this highly engaged audience at Aspen Institute, “Nothing less than our lives depends on being informed and demanding that our health care institutions and physicians are, too.”

You can listen to the entire panel on aspenideas.org.

– Pat

TEDWOMEN 2018 UPDATE


The theme for this year’s TEDWomen event is “Showing Up.” We’re planning three inspiring days of ideas and connections full of creators, connectors and leaders. These dynamic and diverse pioneers are facing challenges head on and shaping the future we all want to see. If you haven’t been before, this is the year to show up!

I hope you’ll join us in Palm Springs Nov. 28–30, 2018. Registration is filling up fast and I don’t want you to miss out, so click this link to apply to attend today.

Planet DebianThomas Goirand: Official Debian testing OpenStack image news

A few things happened to the testing image, thanks to Steve McIntyre, myself, and … some debconf18 foo!

  • The buster/testing image hadn’t been generated since last April; this is now fixed. Thanks to Steve for it.
  • The datasource_list is now correct, in both the Stretch and Testing images (previously, cloudstack was set too early in the list, which made the image wait 120 seconds for a data source which wasn’t available if booting on OpenStack).
  • The buster/testing image is now using the new package linux-image-cloud-amd64. This made the qcow file shrink from 614 MB to 493 MB. Unfortunately, we don’t have a matching arm64 cloud kernel image yet, but it’s still nice to have this for the amd64 arch.

Please use the new images, and report any issue or suggestion against the openstack-debian-images package.

Worse Than FailureA Tapestry of Threads

A project is planned. Gantt charts are drawn up. Timelines are set. They're tight up against the critical path, because including any slack time in the project plan is like planning for failure. PMs have meetings. Timelines slip. Something must be done, and the PMs form a nugget of a plan.


That nugget squeezes out of their meeting, and rolls downhill until it lands on some poor developer's desk.

That poor developer was Alona.

"Rex is the lead architect on this," her manager said. "And the project is about 90% complete… but even with that, they're never going to hit their timeline. So we're going to kinda do an 'all hands' thing to make sure that the project completes on time."

Alona was a junior developer, but even with that, she'd seen enough projects to know: slamming new resources onto a project in its final days never speeds up its completion. Even so, she had her orders.

Alona grabbed the code, checked the project backlog, confirmed her build environment, and then talked to the other developers. They had some warnings.

"It uses a lot of threads, and… well, the thread model is kinda weird, but Rex says it's the right way to do this."

"I don't understand what's going on with these threads, but I'm sure Rex could explain it to you."

"It's heavily CPU bound, but we're using more threads than we have cores, but Rex says we have to do it that way in order to get the performance we need."

Alona had never met Rex, but none of that sounded good. The first tasks Alona needed to grab off the backlog didn't have anything to do with the threads, so she spent a few days just writing code, until she picked up a bug which was obviously caused by a race condition.

From what she'd seen in the documentation and the code, that meant the problem had to be somewhere in MainComputationThread. She wasn't sure where it was defined, so she just did a quick search for the term class MainComputationThread.

It returned twenty hits. There was no class called MainComputationThread, though there was an interface. There were also classes which implemented that interface, named things like MainComputationThread1 and MainComputationThread17. A quick diff showed that all twenty of the MainComputationThreadn classes were 1,243 lines of perfectly identical code.

They were also all implemented as singletons.

Alona had never met Rex, and didn't want to, but she needed to send him an email and ask: "Why?"

Threading is a pretty advanced programming topic, so I put together an easy to use framework for writing threaded code, so that junior developers can use it. Just use it. -Rex


Planet DebianPetter Reinholdtsen: A bit more on privacy respecting health monitor / fitness tracker

A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you. One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provide cloudless collection and storing of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units are reportedly encrypting the collected information with their own public key, making sure only the vendor cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using Amazfit Bip and Xiaomi Band 3.

I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.

A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.

While I was working on tracking down links, I came across an inspiring TED talk by Dave Debronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Linux AustraliaMichael Still: city2surf 2018 wrap up


city2surf 2018 was yesterday, so how did the race go? First off, thanks to everyone who helped out with my fund raising for the Black Dog Institute — you raised nearly $2,000 AUD for this important charity, which is very impressive. Thanks for everyone’s support!

city2surf is 14kms, with 166 meters of vertical elevation gain. For the second year running I was in the green start group, which is for people who have previously finished the event in less than 90 minutes. There is one start group before this, red, which is for people who can finish in less than 70 minutes. In reality I think it’s unlikely that I’ll ever make it to red — it would require me to shave about 30 seconds per kilometre off my time to just scrape in, and I think that would be hard to do.

Training for city2surf last year I tore my right achilles, so I was pretty much starting from scratch for this year’s event — at the start of the year I could run about 50 meters before I had issues. Luckily I was referred to an excellent physiotherapist who has helped me build back up safely — I highly recommend Cameron at Southside Physio Therapy if you live in Canberra.

Overall I ran a lot in training for this year — a total of 540 kilometres. I was also a lot more consistent than in previous years, which is something I’m pretty proud of given how cold winters are in Canberra. Cold weather, short days, and getting sick seem to always get in the way of winter training for me.

On the day I was worried about being cold while running, but that wasn’t an issue. It was about 10 degrees when we started and maybe a couple of degrees warmer than that at the end. The maximum for the day was only 16, which is cold for Sydney at this time of year. There was a tiny bit of spitting rain, but nothing serious. Wind was the real issue — it was very windy at the finish, and I think if it had been like that for the entire race it would have been much less fun.

That said, I finished in 76:32, which is about three minutes faster than last year and a personal best. Overall, an excellent experience and I’ll be back again.



Krebs on SecurityFBI Warns of ‘Unlimited’ ATM Cashout Blitz

The Federal Bureau of Investigation (FBI) is warning banks that cybercriminals are preparing to carry out a highly choreographed, global fraud scheme known as an “ATM cash-out,” in which crooks hack a bank or payment card processor and use cloned cards at cash machines around the world to fraudulently withdraw millions of dollars in just a few hours.

“The FBI has obtained unspecified reporting indicating cyber criminals are planning to conduct a global Automated Teller Machine (ATM) cash-out scheme in the coming days, likely associated with an unknown card issuer breach and commonly referred to as an ‘unlimited operation’,” reads a confidential alert the FBI shared with banks privately on Friday.

The FBI said unlimited operations compromise a financial institution or payment card processor with malware to access bank customer card information and exploit network access, enabling large scale theft of funds from ATMs.

“Historic compromises have included small-to-medium size financial institutions, likely due to less robust implementation of cyber security controls, budgets, or third-party vendor vulnerabilities,” the alert continues. “The FBI expects the ubiquity of this activity to continue or possibly increase in the near future.”

Organized cybercrime gangs that coordinate unlimited attacks typically do so by hacking or phishing their way into a bank or payment card processor. Just prior to executing on ATM cashouts, the intruders will remove many fraud controls at the financial institution, such as maximum ATM withdrawal amounts and any limits on the number of customer ATM transactions daily.

The perpetrators also alter account balances and security measures to make an unlimited amount of money available at the time of the transactions, allowing for large amounts of cash to be quickly removed from the ATM.

“The cyber criminals typically create fraudulent copies of legitimate cards by sending stolen card data to co-conspirators who imprint the data on reusable magnetic strip cards, such as gift cards purchased at retail stores,” the FBI warned. “At a pre-determined time, the co-conspirators withdraw account funds from ATMs using these cards.”

Virtually all ATM cashout operations are launched on weekends, often just after financial institutions begin closing for business on Saturday. Last month, KrebsOnSecurity broke a story about an apparent unlimited operation used to extract a total of $2.4 million from accounts at the National Bank of Blacksburg in two separate ATM cashouts between May 2016 and January 2017.

In both cases, the attackers managed to phish someone working at the Blacksburg, Virginia-based small bank. From there, the intruders compromised systems the bank used to manage credits and debits to customer accounts.

The 2016 unlimited operation against National Bank began Saturday, May 28, 2016 and continued through the following Monday. That particular Monday was Memorial Day, a federal holiday in the United States, meaning bank branches were closed for more than two days after the heist began. All told, the attackers managed to siphon almost $570,000 in the 2016 attack.

The Blacksburg bank hackers struck again on Saturday, January 7, and by Monday Jan 9 had succeeded in withdrawing almost $2 million in another unlimited ATM cashout operation.

The FBI is urging banks to review how they’re handling security, such as implementing strong password requirements and two-factor authentication using a physical or digital token when possible for local administrators and business critical roles.

Other tips in the FBI advisory suggested that banks:

-Implement separation of duties or dual authentication procedures for account balance or withdrawal increases above a specified threshold.

-Implement application whitelisting to block the execution of malware.

-Monitor, audit and limit administrator and business critical accounts with the authority to modify the account attributes mentioned above.

-Monitor for the presence of remote network protocols and administrative tools used to pivot back into the network and conduct post-exploitation of a network, such as PowerShell, Cobalt Strike and TeamViewer.

-Monitor for encrypted traffic (SSL or TLS) traveling over non-standard ports.

-Monitor for network traffic to regions wherein you would not expect to see outbound connections from the financial institution.

Update, Aug. 15, 11:11 a.m. ET: Several sources now confirm that the FBI alert was related to a breach of the Cosmos cooperative bank in India. According to multiple news sources, thieves using cloned cards executed some 12,000 transactions and stole roughly $13.5 million from Cosmos accounts via 25 ATMs located in Canada, Hong Kong and India.

,

Planet DebianShashank Kumar: Google Summer of Code 2018 with Debian - Final Report

Three weeks of Google Summer of Code went off to be life-changing for me. This here is the summary of my work which also serves as my Final Report of Google Summer of Code 2018.

GSoC and Debian

Preperations

My project is Wizard/GUI helping students/interns apply and get started and the final application is named New Contributor Wizard. It originated as the brainchild and Project Idea of Daniel Pocock for GSoC 2018 under Debian. I prepared the application task for the same and shared my journey through Open Source till GSoC 2018 in two of my blogs, From Preparations to Debian to Proposal and The Application Task and Results.

Project Overview

Sign Up Screen

New Contributor Wizard is a GUI application built to help new contributors get started with Open Source. The idea is to bring together all the Tools and Tutorials necessary for a person to learn and start contributing to Open Source. The application contains different courseware sections like Communication, Version Control System etc., and within each section there are respective Tools and Tutorials.

A Tool is an up-and-running service right inside the application which can perform tasks that help the user understand the concepts. For example, encrypting a message using the public key, decrypting the encrypted message using the private key, and so on; these tools can help the user better understand the concepts of encryption.

A Tutorial is composed of lessons which contain text, images, questions and code snippets. It is a comprehensive guide for a particular concept, for example Encryption 101, How to use git?, What is a mailing list? and so on.

In addition to providing the Tools and Tutorials, this application is built to grow. One can easily contribute new Tutorials by just creating a JSON file; the process is documented in the project repository itself. Similarly, documentation for contributing Tools is present as well.

Project Details

Programming Language and Tools

For Development

For Testing

Environment

  • Pipenv for Python Virtual Environment
  • Debian 9 for Project Development and testing

Version Control System

For pinned dependencies and sub-dependencies one can have a look at the Pipfile and Pipfile.lock

My Contributions

The project was just an idea before GSoC, and I had to make all the implementation decisions, whether about the design or the architecture of the application, with the help of my mentors. Below is the list of my contributions in the shape of merge requests; every merge request contains UI, application logic, tests and documentation. My contributions can also be seen in the Changelog and Contribution Graph of the application.

Sign Up

Sign Up is the first screen a user is shown and asks for all the information required to create an account. It then takes the user to the Dashboard with all the courseware sections.

Merge request - Adds SignUp feature

Redmine Issue - Create SignUp Feature

Feature In Action (updated working of the feature)

Sign In

As an alternative to Sign Up, the user has the option to select Sign In and use an existing account to access the application.

Merge Request - Adds SignIn feature

Redmine Issue - Create SignIn Feature

Feature In Action (updated working of the feature)

Dashboard

The Dashboard is said to be the protagonist screen of the application. It contains all the courseware sections and their respective Tools and Tutorials.

Merge Request - Adds Dashboard feature

Redmine Issue - Implementing Dashboard

Feature In Action (updated working of the feature)

Adding Tool Architecture

Every courseware section can have its own Tools and Tutorials. To add Tools to a section, I devised an architecture and implemented it on the Encryption section to add 4 different Tools. They are:

  • Create Key Pair
  • Display and manage Key Pair
  • Encrypt a message
  • Decrypt a message

Merge Request - Adding encryption tools

Redmine Issue - Adding Encryption Tools

Feature In Action (updated working of the feature)

Adding Tutorial Architecture

Similar to Tools, Tutorials can be found in any courseware section. I created a Tutorial Parser which can take a JSON file and build the GUI for the Tutorial easily, without any coding required. This way folks can easily contribute Tutorials to the project. I added the Encryption 101 Tutorial to showcase the use of the Tutorial Parser.
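To give an idea of the mechanism, here is a toy sketch of JSON-driven tutorial parsing (the schema shown is invented for illustration; the real schema is documented in the project repository):

    import json

    # A tutorial as a list of lessons, each a list of typed content blocks
    tutorial_json = """
    {
      "title": "Encryption 101",
      "lessons": [
        {"title": "What is encryption?",
         "content": [
           {"type": "text", "value": "Encryption protects your messages."},
           {"type": "question", "value": "Which key do you share publicly?"}
         ]}
      ]
    }
    """

    def parse_tutorial(raw):
        """Turn the JSON document into GUI-buildable structures (printed here)."""
        tutorial = json.loads(raw)
        print("Tutorial:", tutorial["title"])
        for lesson in tutorial["lessons"]:
            print(" Lesson:", lesson["title"])
            for block in lesson["content"]:
                # A real parser would map each block type to a Kivy widget
                print("  [%s] %s" % (block["type"], block["value"]))

    parse_tutorial(tutorial_json)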

Merge Request - Adding encryption tutorials

Redmine Issue - Adding Encryption Tutorials

Feature In Action (updated working of the feature)

Adding 'Invite Contributor' block to Tools and Tutorials

In order to invite contributors to New Contributor Wizard, every Tools and Tutorials menu displays an additional block linking to the project repository.

Merge Request - Inviting contributors

Redmine Issue - Inviting contributors to the project

Feature In Action (updated working of the feature)

Adding How To Use

The How To Use courseware section helps the user understand the different sections of the application in order to get the best out of it.

Merge Request - Updating How To Use

Redmine Issue - Adding How To Use in the application

Feature In Action (updated working of the feature)

Adding description to all the modules

All the courseware sections or modules need a simple description of what the user will learn using their Tutorials and Tools.

Merge Request - Description added to all the modules

Redmine Issue - Add a introduction/description to all the modules

Feature In Action (updated working of the feature)

Adding Generic Tools and Tutorials Menu

This feature abstracts the Tools and Tutorials architecture mentioned earlier so that the menu architecture can be reused by any of the courseware sections, following the DRY approach.

Merge Request - Adding Generic Menu

Redmine Issue - Adding Tutorial and Tools menu to all the modules

Tutorial Contribution Doc

A Tutorial in the application can be added using just a JSON file; as mentioned earlier, this is made possible by the Tutorial Parser. Comprehensive documentation has been added to help users understand how they can contribute Tutorials to the application for the world to take advantage of.

Merge Request - Tutorial contribution docs

Redmine Issue - Add documentation for Tutorial development

Tools Contribution Doc

A Tool in the application is built using the Kivy language and Python. Comprehensive documentation has been added to the project so that folks can contribute Tools for the world to take advantage of.

Merge Request - Tools contribution docs

Redmine Issue - Add documentation for Tools development

Adding a License to project

After discussions with the mentors and a bit of research, the GNU GPLv3 was finalized as the license for the project and has been added to the repository.

Merge Request - Adds License to project

Redmine Issue - Add a license to Project Repository

Allowing different timezones during Sign Up

The Sign Up feature was refactored to support different user timezones.

Merge Request - Allowing different timezones during signup

Redmine Issue - Allow different timezones

All other contributions

Here's a list of all the merge request I raised to develop a feature or fix an issue with the application - All merge request by Shashank Kumar

Here are all the issues/bugs/features I created, resolved or was associated with on Redmine - All the Redmine issues associated with Shashank Kumar

Packaging

The application has been packaged for PyPi and can be installed using either pip or pipenv.

Package - new-contributor-wizard

Packaging Tool - setuptools

To Do List

Weekly Updates And Reports

These reports were sent daily to a private mentors’ mail thread and weekly to the Debian Outreach mailing list.

Talk Delivered On My GSoC Project

On 12th August 2018, I gave a talk on How my Google Summer of Code project can help bring new contributors to Open Source during a meetup in Hacker Space, Noida, India. Here are the slides I prepared for my talk and a collection of photographs of the event.

Summary

New Contributor Wizard is ready for users who would like to get started with Open Source, as well as for folks who would like to contribute Tools and Tutorials to the application.

Acknowledgment

I would like to thank Google Summer of Code for giving me the opportunity of giving back to the community and Debian for selecting me for the project.

I would like to thank Daniel Pocock for his amazing blogs and the ideas he comes up with, which end up inspiring students and result in projects like the one above.

I would like to thank Sanyam Khurana for constantly motivating me by reviewing every single line of code which I wrote to come up with the best solution to put in front of the community.

Thanks to all the loved ones who always believed in me and kept me motivated.

Planet DebianVasudev Kamath: SPAKE2 In Golang: Finite fields of Elliptic Curve

In my previous post I talked about elliptic curve basics and how the operations are done on elliptic curves, including the algebraic representation which is needed for computers. For use in cryptography we need an elliptic curve group with a specified number of elements; that is what we call a finite field. We limit elliptic curve groups with some big prime number p. In this post I will try to briefly explain finite fields over elliptic curves.

Finite Fields

A finite field, also called a Galois field, is a set with a finite number of elements. An example we can give is the integers modulo `p`, where p is prime. Finite fields can be denoted as \(\mathbb Z/p\), \(GF(p)\) or \(\mathbb F_p\).

Finite fields have two operations, addition and multiplication. These operations are closed, associative and commutative. There exists an identity element, and an inverse element for every element in the set.

Division in finite fields is defined as \(x / y = x \cdot y^{-1}\), that is, x multiplied by the inverse of y, and subtraction \(x - y\) is defined in terms of addition as \(x + (-y)\), which is x plus the negation of y. The multiplicative inverse can be easily calculated using the extended Euclidean algorithm, which I haven't studied in detail myself since there were readily available library functions that do this for us. But I hear from Ramakrishnan that it's a very easy one.
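As a small illustration, here is a sketch of the extended Euclidean algorithm computing a multiplicative inverse modulo a prime:

    def extended_gcd(a, b):
        """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def mod_inverse(y, p):
        """Multiplicative inverse of y modulo p (exists when gcd(y, p) == 1)."""
        g, x, _ = extended_gcd(y % p, p)
        if g != 1:
            raise ValueError("inverse does not exist")
        return x % p

    # Division x / y mod p is x * y^-1 mod p
    p, x, y = 97, 4, 12
    print((x * mod_inverse(y, p)) % p)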

Elliptic Curve in \(\mathbb F_p\)

Now that we understand what finite fields are, we need to restrict our elliptic curves to a finite field. Our original definition of an elliptic curve becomes slightly different: we use modulo p to restrict the elements.

\begin{equation*} \begin{array}{rcl} \left\{(x, y) \in (\mathbb{F}_p)^2 \right. & \left. | \right. & \left. y^2 \equiv x^3 + ax + b \pmod{p}, \right. \\ & & \left. 4a^3 + 27b^2 \not\equiv 0 \pmod{p}\right\}\ \cup\ \left\{0\right\} \end{array} \end{equation*}

All our previous operations can now be written as follows

\begin{equation*} \begin{array}{rcl} x_R & = & (m^2 - x_P - x_Q) \bmod{p} \\ y_R & = & [y_P + m(x_R - x_P)] \bmod{p} \\ & = & [y_Q + m(x_R - x_Q)] \bmod{p} \end{array} \end{equation*}

Where slope, when \(P \neq Q\)

\begin{equation*} m = (y_P - y_Q)(x_P - x_Q)^{-1} \bmod{p} \end{equation*}

and when \(P = Q\)

\begin{equation*} m = (3 x_P^2 + a)(2 y_P)^{-1} \bmod{p} \end{equation*}

So now we need to know the order of this elliptic curve group, which can be defined as the number of points in it. Unlike the integers modulo p, where the elements are simply 0 to p-1, in the case of an elliptic curve you need to check, for each x from 0 to p-1, whether there are points on the curve. Such counting is \(O(p)\); given a large p, this is a hard problem. But there are faster algorithms to count the order of the group, which even I don't know in much detail :). From my reference, one such algorithm is Schoof's algorithm.

Scalar Multiplication and Cyclic Group

When we consider scalar multiplication over elliptic curve finite fields, we discover a special property. Taking an example from Andrea Corbellini's post, consider the curve \(y^2 \equiv x^3 + 2x + 3 \pmod{97}\) and the point \(P = (3,6)\). If we try calculating multiples of P:

\begin{align*} 0P = 0 \\ 1P = (3,6) \\ 2P = (80,10) \\ 3P = (80,87) \\ 4P = (3, 91) \\ 5P = 0 \\ 6P = (3,6) \\ 7P = (80, 10) \\ 8P = (80, 87) \\ 9P = (3, 91) \\ ... \end{align*}
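These values can be verified with a few lines of Python implementing the addition and doubling formulas above (a self-contained sketch for this specific curve; pow(b, P-2, P) gives the inverse modulo the prime P by Fermat's little theorem):

    P = 97          # field prime
    A = 2           # coefficient a in y^2 = x^3 + 2x + 3
    BASE = (3, 6)   # the point P from the example; None represents 0

    def point_add(p1, p2):
        """Add two points on the curve over F_97 (handles 0 and doubling)."""
        if p1 is None:
            return p2
        if p2 is None:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None  # P + (-P) = 0
        if p1 == p2:
            m = (3 * x1 * x1 + A) * pow(2 * y1, P - 2, P) % P  # tangent slope
        else:
            m = (y1 - y2) * pow(x1 - x2, P - 2, P) % P         # chord slope
        x3 = (m * m - x1 - x2) % P
        y3 = (m * (x1 - x3) - y1) % P
        return (x3, y3)

    point = None
    for k in range(10):
        print("%dP = %s" % (k, point or 0))  # cycles with period 5
        point = point_add(point, BASE)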

If you are wondering how to calculate the above (I did at first): you need to use the point addition and doubling formulas from the earlier post, taken mod 97; the sketch above does exactly that. So we observe that there are only 5 distinct multiples of P, and they repeat cyclically. We can write the above points as

  • \(5kP = 0P\)
  • \((5k + 1)P = 1P\)
  • \((5k + 2)P = 2P\)
  • \((5k + 3)P = 3P\)
  • \((5k + 4)P = 4P\)

Or more simply, we can write these as \(kP = (k \bmod 5)P\). We also note that these 5 points are closed under addition. This means that adding two multiples of P yields another multiple of P, so the set of multiples of P forms a cyclic subgroup:

\begin{equation*} nP + mP = \underbrace{P + \cdots + P}_{n\ \text{times}} + \underbrace{P + \cdots + P}_{m\ \text{times}} = (n + m)P \end{equation*}
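To convince yourself of the cycle, here is a quick sketch built on point_add from above (a naive k-fold addition; real implementations use double-and-add, but this is enough for the toy curve):

def scalar_mult(k, P, a, p):
    # computes kP by repeated addition, fine for tiny examples
    R = None                      # start from the identity 0
    for _ in range(k):
        R = point_add(R, P, a, p)
    return R

for k in range(7):
    print(k, scalar_mult(k, (3, 6), 2, 97))
# 0 None, 1 (3, 6), 2 (80, 10), 3 (80, 87), 4 (3, 91), 5 None, 6 (3, 6)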

Cyclic subgroups are the foundation of Elliptic Curve Cryptography (ECC).

Subgroup Order

The subgroup order tells us how many points are really in the subgroup. We can restate the notion of order in the subgroup context: the order of P is the smallest positive integer n such that nP = 0. In the case above the smallest such n is 5, since 5P = 0. So the order of the subgroup above is 5: it contains 5 elements.

The order of the subgroup is linked to the order of the elliptic curve group by Lagrange's theorem, which says that the order of a subgroup is a divisor of the order of the parent group. Lagrange is another name I had come across in college, but in connection with different algorithms.

From this we have the following steps to find the order of the subgroup with base point P (a brute-force sketch follows below):

  1. Calculate the elliptic curve's order N using Schoof's algorithm.
  2. Find all the divisors of N.
  3. For every divisor n of N, compute nP.
  4. The smallest such n for which nP = 0 is the order of the subgroup.

Note that it's important to choose the smallest such divisor, not a random one. In the example above, 5P, 10P and 15P all satisfy the condition nP = 0, but the order of the subgroup is 5.
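For our toy curve the whole procedure can be brute-forced (a sketch only: it assumes p is tiny, and step 1 on a real curve needs Schoof's algorithm rather than this loop):

def curve_order(a, b, p):
    # counts all (x, y) with y^2 = x^3 + ax + b (mod p), plus the point at infinity
    N = 1
    for x in range(p):
        rhs = (x ** 3 + a * x + b) % p
        N += sum(1 for y in range(p) if (y * y) % p == rhs)
    return N

def subgroup_order(P, a, b, p):
    N = curve_order(a, b, p)
    for n in range(1, N + 1):     # increasing, so the smallest divisor wins
        if N % n == 0 and scalar_mult(n, P, a, p) is None:
            return n

print(subgroup_order((3, 6), 2, 3, 97))   # prints 5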

Finding Base Point

For all of the above to be used in ECC, i.e. the group, the subgroup and their orders, we need a base point P to work with. So the base point is not calculated at the beginning but at the end: first choose a group order that looks good, then pick the subgroup order, and finally find a suitable base point.

We learnt above, from Lagrange's theorem, that the subgroup order n is a divisor of the group order N. The ratio \(h = N/n\) is called the cofactor of the subgroup. Now why is this cofactor important? Without going into details, the cofactor is used to find a generator for the subgroup, as \(G = hP\).
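As a rough sketch of that last step (random_point here is an assumed helper returning an arbitrary point on the curve; it is not defined in these notes):

def find_generator(a, b, p, N, n):
    # picks a generator G of the subgroup of order n, with cofactor h = N / n
    h = N // n
    while True:
        P = random_point(a, b, p)         # assumed helper, not defined here
        G = scalar_mult(h, P, a, p)
        if G is not None:                 # for prime n, G now has order exactly n
            return G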

Conclusion

So, are you wondering why I went to such lengths to describe all of this? For one thing, I wanted to make some notes for myself, because you can't find all of this information in a single place. For another, the topics we discussed in my previous post and up to this point form the domain parameters of Elliptic Curve Cryptography.

Domain parameters in ECC are the parameters which are publicly known to everyone. The following are the 6 parameters:

  • Prime p, the order of the finite field
  • Coefficients a and b of the curve
  • Base point \(G\), the generator of the subgroup
  • Order n of the subgroup
  • Cofactor h

So in short, the domain parameters of ECC are the tuple \((p, a, b, G, n, h)\).
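For the toy curve in this post, reusing curve_order from the sketch above and with our P = (3,6) serving as the base point, the tuple would look like this:

p, a, b = 97, 2, 3
G = (3, 6)                       # base point generating the subgroup from earlier
n = 5                            # order of that subgroup
h = curve_order(a, b, p) // n    # cofactor
domain = (p, a, b, G, n, h)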

In my next post I will try to talk about the specific curve group used in the SPAKE2 implementation, called a twisted Edwards curve, and give a brief overview of the SPAKE2 protocol.

Planet DebianSteve McIntyre: DebConf in Taiwan!

DebConf 18 logo

So I'm slowly recovering from my yearly dose of full-on Debian! :-) DebConf is always fun, and this year in Hsinchu was no different. After so many years in the project, and so many DebConfs (13, I think!) it has become unmissable for me. It's more like a family gathering than a work meeting. In amongst the great talks and the fun hacking sessions, I love catching up with people. Whether it's Bdale telling me about his fun on-track exploits or Stuart sharing stories of life in an Australian university, it's awesome to meet up with good friends every year, old and new.

DC18 venue

For once, I even managed to find time to work on items from my own TODO list during DebCamp and DebConf. Of course, I also got totally distracted helping people hacking on other things too! In no particular order, stuff I did included:

  • Working with Holger and Wolfgang to get debian-edu netinst/USB images building using normal debian-cd infrastructure;
  • Debugging build issues with our buster OpenStack images, fixing them and also pushing some fixes to Thomas for build-openstack-debian-image;
  • Reviewing secure boot patches for Debian's GRUB packages;
  • As an AM, helping two DD candidates working their way through NM;
  • Monitoring and tweaking an archive rebuild I'm doing, testing building all of our packages for armhf using arm64 machines;
  • Releasing new upstream and Debian versions of abcde, the CD ripping and encoding package;
  • Helping to debug UEFI boot problems with Helen and Enrico;
  • Hacking on MoinMoin, the wiki engine we use for wiki.debian.org;
  • Engaging in lots of discussions about varying things: Arm ports, UEFI Secure Boot, Cloud images and more

I was involved in a lot of sessions this year, as normal. Lots of useful discussion about Ignoring Negativity in Debian, and of course lots of updates from various of the teams I'm working in: Arm porters, web team, Secure Boot. And even an impromptu debian-cd workshop.

Taipei 101 - datrip venue

I loved my time at the first DebConf in Asia (yay!), and I was yet again amazed at how well the DebConf volunteers made this big event work. I loved the genius idea of having a bar in the noisy hacklab, meaning that lubricated hacking continued into the evenings too. And (of course!) just about all of the conference was captured on video by our intrepid video team. That gives me a chance to catch up on the sessions I couldn't make it to, which is priceless.

So, despite all the stuff I got done in the 2 weeks my TODO list has still grown. But I'm continuing to work on stuff, energised again. See you in Curitiba next year!

Planet DebianSam Hartman: Dreaming of a Job to Promote Love, Empathy and sexual Freedom

Debian has always been filled with people who want to make the world a better place. We consider the social implications of our actions. Many are involved in work that focuses on changing the world. I've been hesitant to think too closely about how that applies to me: I fear being powerless to bring about the world in which I would like to live.

Recently though, I've been taking the time to dream. One day my wife came home and told another story of how she’d helped a client reduce their pain and regain mobility. I was envious. Every day she advances her calling and brings happiness into the world, typically by reducing physical suffering. What would it be like for me to find a job where I helped advance my calling and create a world where love could be more celebrated? That seems such a far cry from writing code and working on software design every day. But if I don’t articulate what I want, I'll never find it.

I’ve been working to start this journey by acknowledging the ways in which I already bring love into the world. One of the most important lessons of Venus’s path is that to bring love into the world, you have to start by leading a life of love. At work I do this by being part of a strong team. We’re there helping each other grow, whether it is people trying entirely new jobs or struggling to challenge each other and do the best work we can. We have each other’s back when things outside of work mean we're not at our best. We pitch in together when the big deadlines approach.

I do not shove my personal life or my love and spirituality work in people’s faces, but I do not hide it. I'm there as a symbol and a reminder that different is OK. Because I am open, people have turned to me in some unusual situations, and I have been able to invite compassion and connection into how people thought about the challenges they faced.

This is the most basic—most critical love work. In doing this I’m already succeeding at bringing love into the world. Sometimes it is hard to believe that. Recently I have been daring to dream of a job in which the technology I created also helped bring love into the world.

I'd love to find a company that's approaching the world in a love-positive, sex-positive manner. And of course they need to have IT challenges big enough to hire someone who is world class at networking, security and cloud architecture. While I'd be willing to take a pay cut for the right job, I'd still need to be making a US senior engineer's salary.

Actually saying that is really hard. I feel vulnerable because I’m being honest about what I want. Also, it feels like I’m asking for the impossible.

Yet, the day after I started talking about this on Facebook, OkCupid posted a job for a senior engineer. That particular job would require moving to New York, something I want to avoid. Still, it was reassuring as a reminder that asking for what you want is the first step.

I doubt that will be the only such job. It's reasonable to assume that as we embrace new technologies like blockchains and continue to appreciate what the evolving web platform standards have to offer, there will be new opportunities. Yes, a lot of the adult-focused industries are filled with corruption and companies that use those who they touch. However, there's also room for approaching intimacy in a way that celebrates desire, connection, and all the facets of love.

And yes, I do think sexuality and desire are an important part of how I’d like to promote love. With platforms like Facebook, Amazon and Google, it's easier than ever for people to express themselves, to connect, and if they are willing to give up privacy, to try and reach out and create. Yet all of these platforms have increasingly restrictive rules about adult content. Sometimes it’s not even intentional censorship. My first post about this topic on Facebook was marked as spam probably because some friends suggested some businesses that I might want to look at. Those businesses were adult-focused and apparently even positive discussion of such businesses is now enough to trigger a presumption of spam.

If we aren't careful, we're going to push sex further out of our view and add to an ever-higher wall of shame and fear. Those who wish to abuse and hurt will find their spaces, but if we aren't careful to create spaces where sex can be celebrated alongside love, those seedier corners of the Internet will be all that explores sexuality. Because I'm willing to face the challenge of exploring sexuality in a positive, open way, I think I should: few enough people are.

I have no idea what this sort of work might look like. Perhaps someone will take on the real challenge of creating content platforms that are more decentralized and that let people choose how they want content filtered. Perhaps technology can be used to improve the safety of sex workers or eventually to fight shame associated with sex work. Several people have pointed out the value of cloud platforms in allowing people to host whatever service they would choose. Right now I’m at the stage of asking for what I want. I know I will learn from the exploration and grow stronger by understanding what is possible. And if it turns out that filling my every day life with love is the answer I get, then I’ll take joy in that. Another one of the important Venus lessons is celebrating desires even when they cannot be achieved.

Planet DebianSven Hoexter: iptables with random-fully support in stretch-backports

I've just uploaded iptables 1.6.2 to stretch-backports (thanks Arturo for the swift ACK). The relevant new feature here is the --random-fully support for the MASQUERADE target. This release could be relevant to you if you have to deal with a rather large amount of NATed outbound connections, which is likely if you have to deal with the whale. The engineering team at Xing published a great writeup about this issue in February. So the lesson to learn here is that the nf_conntrack layer probably got a bit more robust during the Bittorrent heydays, but NAT is still evil shit we should get rid of.
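For reference, enabling it looks something like the rule below (the interface name and source range are made-up examples, and you also need a kernel recent enough to support fully randomized port allocation):

iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 -j MASQUERADE --random-fully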

Planet DebianMike Hommey: Announcing git-cinnabar 0.5.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0?

  • git-cinnabar-helper is now mandatory. You can either download one with git cinnabar download on supported platforms or build one with make.
  • Performance and memory consumption improvements.
  • Metadata changes require to run git cinnabar upgrade.
  • Mercurial tags are consolidated in a separate (fake) repository. See the README file.
  • Updated git to 2.18.0 for the helper.
  • Improved memory consumption and performance.
  • Improved experimental support for pushing merges.
  • Support for clonebundles for faster clones when the server provides them.
  • Removed support for the .git/hgrc file for mercurial specific configuration.
  • Support any version of Git (was previously limited to 1.8.5 minimum)
  • Git packs created by git-cinnabar are now smaller.
  • Fixed incompatibilities with Mercurial 3.4 and >= 4.4.
  • Fixed tag cache, which could lead to missing tags.
  • The prebuilt helper for Linux now works across more distributions (as long as libcurl.so.4 is present, it should work)
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

Development process changes

It took about 6 months between version 0.3 and 0.4. It took more than 18 months to reach version 0.5 after that. That’s a long time to wait for a new version, considering all the improvements that have happened under the hood.

From now on, the release branch will point to the last tagged release, which is roughly the same as before, but won’t be the default branch when cloning anymore.

The default branch when cloning will now be master, which will receive changes that are acceptable for dot releases (0.5.x). These include:

  • Changes in behavior that are backwards compatible (e.g. adding new options which default to the current behavior).
  • Changes that improve error handling.
  • Changes to existing experimental features, and additions of new experimental features (that require knobs to be enabled).
  • Changes to Continuous Integration/Tests.
  • Git version upgrades for the helper.

The next branch will receive changes for the next “major” release, which as of writing is planned to be 0.6.0. These include:

  • Changes in behavior.
  • Changes in metadata.
  • Stabilizing experimental features.
  • Remove backwards compatibility with older metadata (< 0.5.0).


Planet DebianShirish Agarwal: Journeys

This would be a long blog post as I would be sharing a lot of journeys, so have your favorite beverage in your hand and prepare for an evening of musing.

Before starting the blog post, I have been surprised that over the last week and the week before, a lot of people have been liking my Debconf 2016 blog post on Diaspora, which is almost two years old. Almost all the names mean nothing to me, and I was left unsure as to the reason for the spike. Were they DebConf newcomers who saw my blog post and whose experience was similar to mine? I don’t know.

About a month and a half back, I started reading Gandhiji’s ‘My Experiments with Truth‘. To be truthful, a good friend had gifted me this book back in 2015, but I had been afraid to touch it. I have read a few autobiographies, and my experience with them has been less than stellar; some exceptions exist, but those are and will remain exceptions. Just like everybody else, I held Gandhiji in high regard and was afraid that reading the autobiography would lower him in my eyes. As it is, he is lovingly regarded as the ‘Father of the Nation‘ and given the honorific title of ‘Mahatma’ (Great Soul), so there was quite a bit of resistance within me to reading the book, as it’s generally felt to be impossible to be like him or even emulate him in any way.

So, with some hesitancy, I started reading his autobiography about a month and a half back. It is a shortish book, topping out at 470-odd pages, and I have read around 350 pages or so. While reading it I could identify with a lot of what was written, and in so many ways it also reflects a lot of faults which are still prevalent in Indian society today.

The book is heavy with layered meanings. I do feel in parts there have been brushes of the RSS. I dunno, maybe that’s the paranoia in me, but I would probably benefit from an older edition (perhaps the 1993 version), if I can find it somewhere, which may be somewhat more accurate. I don’t dare to review it unless I have read and re-read it at least 3-4 times or more. I can, however, share what I have experienced so far. He writes quite a bit about religion, sharing his experience and understanding from reading the various commentaries on the Gita as well as on different religious books like the ‘Koran’ and ‘The Bible’, and so on and so forth. When I was reading it, I felt almost like an unlettered person. I know that sometime in the near future I will have to start reading and listening to various commentaries on Hinduism as well as other religions to gain at least some base understanding.

The book makes him feel more human, as he had the same struggles that we all do: with temptations of flesh, food, medicine, public speaking. The only difference between him and me is that he was able to articulate them, probably far better than most people even today.

Many passages in the book are still highly relevant, or even more so, in today’s ‘life’. It really is a pity it isn’t an essential book for teenagers and young adults to read. At the very least they would start making their own inquiries at a young age.

The other part which was interesting to me is his description of life on the Indian Railways. He traveled a lot by Indian Railways, in both third and first class. I have had the pleasure of traveling in first, second and general (third) class, a military cabin, a guard cabin, a luggage cabin, as well as the cabin for people with disabilities, and once, by mistake, even in a ladies’ cabin. The only one I haven’t tried is the loco pilot’s cabin, and that is more out of fear than anything else. While I know the layout of the cabin more or less and am somewhat familiar with the job they have to do, I still fear it, as I know the enormous responsibilities the loco pilots have, each train carrying anywhere between 2300 and 2800 passengers or more depending on the locomotive/s, rake, terrain, platform length and number of passengers.

The experiences which Gandhiji shared about his travels then, and my own limited traveling experience, seem to indicate that hardly anything has changed on the Indian Railways as far as the traveling experience goes.

A few days back my mum narrated one of her journeys on the Indian Railways from when she was a kid, about five decades or so back. Her experience was similar to what one can experience even today, and probably will be for decades from now until things improve, which I don’t think will happen in the short term; in the medium to long term, who knows.

Anyway, my grandfather (my mother’s father, now no more 😦 ) had a bunch of kids. In those days, having 5-6 kids was considered normal. My mother, her elder sister (who is not with us anymore, god rest her soul) and my grandpa took a train from Delhi/Meerut to Pune. As at that time there was no direct train to Pune, the idea was to travel from Delhi to Bombay (today’s Mumbai), take a break in Bombay, and then take a train to Pune. The journey was supposed to take only a couple of days or a bit more. My grandma had made puris and masala bhaji (basically boiled potatoes mixed with onions, fried a bit).

Puri bhaji image taken from https://www.spiceupthecurry.com/hi/poori-bhaji-recipe-hindi/

You can try making it with a recipe shared by Sanjeev Kapoor, a celebrity chef from India. This is not the only way to make it; Indian cooking is all about improvisation and experimentation, but that’s a story for another day.

This is/was a staple diet for most North Indians traveling by train, and you can find the same even today. She had made enough for 2 days, with some to spare, as she didn’t want my mum or her sister taking any outside food (food hygiene, health concerns etc.). My mum and her sister didn’t have much to do, and they loved my grandma’s cooking, so what was made for 2 days didn’t even last a day. What my mother, her sister and grandpa didn’t know was that it would be one of those ill-fated journeys. Because of some accident which happened down the line, the train was stopped in Bhopal for an indefinite time. This was in the dead of night and there was nothing to eat there. Unfortunately for them, the accident or whatever happened down the line meant that all the food made for travelers was either purchased by travelers on earlier trains or was being diverted to help the injured. The end result was that reaching Mumbai took another one and a half days, meaning around 4 days instead of two were spent traveling. My grandpa tried to get something for the children to eat, but was still unable to find any food for them.

Unfortunately, when they reached Bombay (today’s Mumbai) it was also the dead of night, so grandpa knew that he wouldn’t be able to get anything to eat, as all shops were shut at night; those were very different times.

Fortunately for them, one of my grandfather’s cousins had received a trunk call (a nomenclature from a time in history when calling long-distance was pretty expensive) at his office, placed from Delhi by one of our relatives about some unrelated matter. Landlines were incredibly rare, and it was sheer coincidence that he came to know that my grandpa would be coming to Bombay (Mumbai), so that he could, if possible, receive him. My grandpa’s cousin made inquiries, came to know of the accident, and knew that the train would arrive late, although he had no foreknowledge of how late it would be. Even then, he got meals for the extended family on both days, as he knew that they probably would not be getting any.

On the second night, my grandpa was surprised and relieved to see his cousin, and both my mum and her sister, who had gone without food, finished whatever food had been brought within 10 minutes.

The toilets on the Indian Railways in Gandhiji’s traveling days (the book was written in 1918 while he resided in Pune’s Yerwada Jail [yup, my city], and the accounts he shared were of 1908 and even before that), in the days my mother traveled, and even today are the same: they stink, irrespective of whichever class you travel in. After reading the book, I read up and came to know that Yerwada held a lot of political prisoners.

The only changes which have happened are in terms of ICT, but even those only help when you know the specific tools and sites. There is the IRCTC train enquiry site and the map train tracker. For food you have sites like RailRestro, but all of these amenities are for the well-heeled, or those who can pay for them and know how to use the services. I say this for the reason below.

India is going to have elections come next year. To win in those elections round the corner, the Government started the largest online recruitment drive ever, for exams for loco pilot, junior loco pilot and other sundry posts. For around 90k jobs, something like 28 million (2.8 crore) people applied, out of which around 17 million were selected to sit the ‘online’ exam, with 80-85% of the selected candidates given centers in excess of 1000 km away. At the last moment some special trains were laid on for people who wanted to travel for the exams.

Sadly, due to the nature of conservatism in India, the majority of the women who were selected chose not to travel that far, as the travel was time-consuming and expensive (about INR 5k or a bit more, excluding any lodging, just for the travel and a bit for eating). Most train journeys are, and would be, in excess of 24 hours or more, as the tracks are more or less the same (there has been some small improvement, e.g. wooden sleepers have been replaced by concrete ones) while the traffic has quadrupled from before, and they can’t just build new lines without encroaching on people’s land or wildlife sanctuaries (both are happening, but that’s a different story altogether).

The exams are being held in batches and will continue for another 2-3 days. Most of the candidates are young men from rural areas, for whom getting INR 5k is a huge sum. There are already reports of collusion, cheating etc. After the exams are over, the students fear that some people might go to a court of law alleging cheating, and that the court might declare the whole exam null and void, cheating the students out of their hard-earned money and the suffering of the long journeys they had to take. The dates of the exams were shared only a few days in advance and clashed with some other government exams, so many will miss those, while others wouldn’t have had time to prepare for them.

It’s difficult to imagine the helplessness and stress the students might be feeling.

I just hope that somehow people’s hopes are fulfilled.

Ending the blog post on a hopeful and yet romantic ballad

Planet DebianNorbert Preining: TypeScript’s generics – still a long way to go

I first have to admit I am not a JavaScript or TypeScript expert, but the moment I wanted to implement some generic functionality (a Discrete Interval Encoding Tree carrying additional data that is merged according to a functional type parameter), I immediately stumbled upon lots of things I would like to have but that aren’t there; I guess I am too much used to Scala.

First of all, parametric classes whose type parameters are themselves parametric (higher-kinded types): submitted in 2014, still open.

Then the nice one, type aliases do not work with classes (follow up here):

class A<T> {}
type B = A<number>;
var b = new B(); // <--- Cannot find name 'B'.

In this case the alias is not very useful, but in my case it would have been something like

type B = A<Diet<number>>

in which case this would make a lot of sense.

Time to go back to some Scala ...

Planet DebianAndreas Metzler: You might not be off too bad ...

... if one of the major changes on going back to work after the holidays is that swimming in the morning or the evening in the lake in Lipno nad Vltavou is replaced by going for a swim in Lake Constance during the lunch break.

Don MartiQuestions for agency and publisher workshops

The web advertising game is changing from a hacking contest to a reputation contest. It would have had to happen anyway, but the shift is happening quickly right now because of two trends.

  • Privacy regulation (starting with the European Union, California and India). Some regulations will have impact outside their own jurisdictions when companies choose not to write and enforce separate second-class privacy policies for users not covered by those regulations.

  • New "browser wars" over which browser can best implement widely-held user norms on sharing their personal information. (Web browsers are good at showing you a web page that looks the same as it does on the other web browsers. Why switch browsers? For many users, because one browser does better at implementing your preferences on personal data sharing.)

Right now the web is terrible as a tool for brand building. But the web doesn't have to get better at signaling, or less fraudulent, than print or broadcast. In a lot of places the web just has to be better than Android. Fixing web advertising is not one big coordination problem. People who are interested in web advertising, from the publisher and ad agency point of view, have a lot of opportunities for innovative and remunerative projects.

  • Browser privacy improvements, starting with Apple Safari's Intelligent Tracking Prevention, are half of a powerful anti-fraud system. The better that the browser protects the user's information from leaking from one site to another, the less it looks like a fraudbot. How can publishers and brands build the other half, to shift ad budgets away from fraud?

  • "Conscious choosers" are an increasingly well-understood user segment, thanks to ongoing user research. For some brands and publishers, the best strategy may be to continue to pursue "personalization pioneers", the approximately one-third of users who don't object to having their information collected for ad targeting. Other brands have more appeal to mainstream, vaguely creeped out, users, or to users who more actively defend their personal info. How can "conscious chooser" research inform brands?

  • Regulation and browser privacy improvements are making contextual targeting more important. Where are the opportunities to reach human audiences in the right context? Where does conventional programmatic advertising miss out on high-context, signalful ad placements because of gaps in data?

  • As sharing of user data without permission becomes less common, new platforms are emerging to enable users to share information about themselves by choice. For example, a user who comments on a local news site about traffic may choose to share their neighborhood and the mode of transportation that they take to work. User data sharing platforms are in the early stages, and agencies have an opportunity to understand where publishers and browsers are going. (Hint: it'll be harder to get big-budget eyeballs on low-value or fraudulent sites.) Which brands can benefit from user-permissioned data sharing?

  • (Complementary to data sharing issues) Consent management is still an unsolved problem. While the Transparency and Consent Framework provides a useful foundation to build on, today's consent forms are too annoying for users and also make it difficult and time-consuming to do anything except select a single all-or-nothing choice. This doesn't accurately reflect the user's data sharing choices. The first generation of consent management is getting replaced with a better front end that not only sends a more accurate consent decision, but also takes less time and attention and is less vulnerable to consent string fraud. How will accurate and convenient consent management give advantages to sites and brands that users trust?

Workshops are in progress on all this stuff. (Mail me at work if you want to help organize one.) Clearly it's not all just coming from the browser side—forward-thinking people at ad agencies and publishers are coming up with most of it.

More than 1,000 U.S. news sites are still unavailable in Europe, two months after GDPR took effect

SuperAwesome now offers kids brands an alternative to YouTube

What is a Scandinavian media company’s first-ever director of public policy up against?

Coalition for Better Ads experiences European growing pains

Be Wary Of Ad-Tech Stories That Are Too Good To Be True

Rick Webb on Why Our Assumptions of Digital Advertising are Complete and Total Bunk


CryptogramFriday Squid Blogging: New Tool for Grabbing Squid and other Fragile Sea Creatures

Interesting video of a robot grabber that's delicate enough to capture squid (and even jellyfish) in the ocean.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cory DoctorowTalking the hard questions of privacy and freedom with the Yale Privacy Lab podcast


This week, I sat down for an hour-long interview with the Yale Privacy Lab‘s Sean O’Brien (MP3); Sean is a frequent Boing Boing contributor and I was honored that he invited me to be his guest on the very first episode of the Lab’s new podcast.


As you might imagine, Sean had some sophisticated — and difficult — questions about privacy and freedom online and we delved into some material that I don’t normally get to cover. It was an exciting and challenging conversation!

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.5: Windows is back, and small updates

A brand new release of RQuantLib, now at version 0.4.5, just arrived on CRAN, and will get to Debian shortly. This release re-enables Windows builds thanks to a PR by Jeroen, who now supplies a QuantLib library build in his rwinlib repositories. (Sadly, though, it is already one QuantLib release behind, so it would be awesome if a volunteer could step forward to help Jeroen keep this current.) A few other smaller fixes were made; see below for more.

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.5 (2018-08-10)

  • Changes in RQuantLib code:

    • The old rquantlib.h header is deprecated and moved to a subdirectory. (Some OSes confuse it with RQuantLib.h, which Rcpp Attributes prefers to have the same name as the package.) (Dirk in #100 addressing #99).

    • The files in src/ now include rquantlib_internal.h directly.

    • Several ‘unused variable’ warnings have been taken care of.

    • The Windows build has been updated, and now uses an external QuantLib library from 'rwinlib' (Jeroen Ooms in #105).

    • Three curve-building examples are no longer run by default, as win32 has seen some numerical issues.

    • Two Rcpp::compileAttributes generated files have been updated.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: Specification and Verification of Software with CafeOBJ – Part 4 – Modules

This blog continues Part 1, Part 2, and Part 3 of our series on software specification and verification with CafeOBJ.

In the last part we have made our first steps with CafeOBJ, learned how start and quit the interpreter, how to access the help system, and how to write basic functions. This time we will turn our attention to the most fundamental building blocks of specifications, modules.

Answer to the challenge

But before diving into modules, let us give answers to the challenge posed at the end of the last blog, namely to give definitions of the factorial and Fibonacci functions. Without much discussion, here is one possible solution for both:

-- Factorial
open NAT .
  op _! : Nat -> Nat .
  eq 0 ! = 1 .
  eq ( X:NzNat ! ) = X * ( (p X) ! ) .
  red 10 ! .
close
-- Fibonacci
open NAT .
  op fib : Nat -> Nat .
  eq fib ( 0 ) = 0 .
  eq fib ( 1 ) = 1 .
  ceq fib ( X:Nat ) = fib (p X) + fib (p (p X)) if X > 1 .
  red fib(10) .
close

Note that in the case of factorial we used the standard mathematical notation of postfix bang. We will come to the allowed syntax later on.

Modules

Modules are the basic building blocks of CafeOBJ specifications, and they correspond – or define – order-sorted algebras. Think about modules as a set of elements, operators on these elements, and rules how they behave. A very simple example of such a module is:

module ADDMOD {
  [ Elem ]
  op _+_ : Elem Elem -> Elem .
  op 0 : -> Elem .
  eq 0 + A:Elem = A .
}

This module is introduced using the keyword module and the identifier ADDMOD, and the definition is enclosed in curly braces. The first line [ Elem ] defines the types (or sorts) available in the module. Most algebra one sees is mono-sorted, so there is only one possible sort, but algebras in CafeOBJ are many-sorted, and in fact order-sorted (we come to that later). So the module we are defining contains only elements of one sort, Elem.

Next up are two lines starting with op, which introduce operators. Operators (think “functions”) have a certain arity, that is, they take a certain number of arguments and return a value. In the many-sorted case we not only need to specify how many arguments an operator takes, but also of which sorts these arguments are. In the above case, let us first look at the line op _+_ : Elem Elem -> Elem . Here an infix operator + is introduced that takes two arguments of sort Elem and returns again an Elem.

The next line defines an operator 0. You might ask what a strange operator this is, but looking at the definition we see that there are no arguments to the operator, only a result. This is the standard way to define constants. So we now have an addition operator and a constant operator.

The last line of the definition starts with eq which introduces an equality, or axiom, or rule. It states that the left side and the right side are equal, in mathematical terms, that 0 is a left-neutral element of +.

Even with much more complicated modules, these three blocks will form most of the code one writes in CafeOBJ: sort declarations, operator declarations, and axiom definitions.

A first module

Let us define a bit more involved module extending the ideas of the pet example above:

mod! PNAT {
  [ PNat ]
  op 0 : -> PNat .
  op s : PNat -> PNat .
  op _+_ : PNat PNat -> PNat .
  vars X Y : PNat
  eq 0 + Y = Y .
  eq s(X) + Y = s(X + Y) .
}

You will immediately recognize the same parts as in the first example: a sort declaration, three operator declarations, and two axioms. There are two things that are new: (1) the module starts with mod!: mod is shorthand for the longer module, and the bang after it indicates that we are considering initial models (we won't go into the details of initial versus free models here); (2) the line vars X Y : PNat: we saw an inline variable declaration in the first example (A:Elem), but instead of such inline definitions one can also declare variables beforehand and then use them multiple times.

For those with mathematical background, you might have recognized that the name of the module (PNAT) relates to the fact that we are defining Peano Natural Numbers here. For those with even more mathematical background – yes I know this is not the full definition of Peano numbers 😉

Next, let us see what we can do with this module, in particular what the elements of this algebra are and how we can add them. From the definition we only see that 0 is a constant of the algebra, and that the monadic (unary) operator s builds new elements. That means that terms of the form 0, s(0), s(s(0)), s(s(s(0))), etc., are PNats. So, can we do computations with these kinds of numbers? Let us see:

CafeOBJ> open PNAT .
-- opening module PNAT.. done.
%PNAT> red s(s(s(0))) + s(s(0)) .
-- reduce in %PNAT : (s(s(s(0))) + s(s(0))):PNat
(s(s(s(s(s(0)))))):PNat
(0.0000 sec for parse, 0.0010 sec for 4 rewrites + 7 matches)
%PNAT> close
CafeOBJ>

Oh, CafeOBJ computed something for us. Let us look into the details of the computation. CafeOBJ has a built-in tracing facility that shows each computation step. It is activated using the command set trace whole on:

%PNAT> set trace whole on

%PNAT> red s(s(s(0))) + s(s(0)) .
-- reduce in %PNAT : (s(s(s(0))) + s(s(0))):PNat
[1]: (s(s(s(0))) + s(s(0))):PNat
---> (s((s(s(0)) + s(s(0))))):PNat
[2]: (s((s(s(0)) + s(s(0))))):PNat
---> (s(s((s(0) + s(s(0)))))):PNat
[3]: (s(s((s(0) + s(s(0)))))):PNat
---> (s(s(s((0 + s(s(0))))))):PNat
[4]: (s(s(s((0 + s(s(0))))))):PNat
---> (s(s(s(s(s(0)))))):PNat
(s(s(s(s(s(0)))))):PNat
(0.0000 sec for parse, 0.0000 sec for 4 rewrites + 7 matches)

The above trace shows that the rule (axiom) eq s(X) + Y = s(X + Y) . is applied from left to right until the final term is obtained.

Computational model of CafeOBJ

The underlying computation mechanism of CafeOBJ is term rewriting, in particular order-sorted conditional term rewriting with associativity and commutativity (AC). Just as a side note, CafeOBJ is one of the very few systems (if not the only one still in use) that implements conditional AC rewriting, on top of order-sorted terms.

We won't go into the intricacies of term rewriting, but let us mention one thing that is important for understanding the computational model of the CafeOBJ interpreter: while axioms (equations) do not have a direction, being just statements of equality between two terms, the evaluation carried out by red uses the axioms only in the direction from left to right. It is thus important how an equation is written down. The general rule is to write the more complex term on the left side and the simpler term on the right (when it is easy to decide which is the more complex one).

Let us exhibit this directional usage on a simple example. Look at the following code, what will happen?

mod! FOO {
  [ Elem ]
  op f : Elem -> Elem .
  op a : -> Elem .
  var x : Elem
  eq f(x) = f(f(x)) .
}
set trace whole on
open FOO .
red f(a) .

I hope you guessed it, we will send CafeOBJ into an infinite loop:

%FOO> red f(a) .
-- reduce in %FOO : (f(a)):Elem
[1]: (f(a)):Elem
---> (f(f(a))):Elem
[2]: (f(f(a))):Elem
---> (f(f(f(a)))):Elem
[3]: (f(f(f(a)))):Elem
---> (f(f(f(f(a))))):Elem
[4]: (f(f(f(f(a))))):Elem
...

This is because the rules are strictly applied from left to right, and this can be done again and again.

Defining lists

To close this blog, let us use modules to define a very simple list of natural numbers: A list can either be empty, or it is a natural number followed by the symbol | and another list. In BNF this would look like

  L ::= nil  |   x|L

Some examples of lists, and things that are not proper lists:

  • nil – this is a list, the most simple one
  • 1 | ( 3 | ( 2 | nil ) ) – again a list, with all parenthesis
  • 1 | 3 | 2 | nil – again a list, if we assume right associativity of |
  • 1 | 3 | 2 – not a list, because the last element is a natural number and not a list
  • (1 | 3) | 2 | nil – again not a list, because the first element is not a natural number

Now let us reformulate this in CafeOBJ language:

mod! NATLIST {
  pr(NAT)
  [ NatList ]
  op nil : -> NatList .
  op _|_ : Nat NatList -> NatList .
}

Here one new syntax element appears: the line pr(NAT), which is used to pull in the natural numbers module NAT in a protected way (pr). This is the general method to include other modules and build up more complex entities. We will discuss import methods later on.

We also see that within the module NATLIST we now have multiple sorts (Nat and NatList), and operators that take arguments of different sorts.

As a final step, let us check whether the above definition is consistent with our list intuition from above, i.e., that the CafeOBJ parser accepts exactly these terms. We could use the already familiar red here, but if we only want to check whether an expression parses correctly, CafeOBJ also offers the parse command. Let us use it to check the lists above:

CafeOBJ> open NATLIST .
%NATLIST> parse nil .
(nil):NatList
%NATLIST> parse 1 | ( 3 | ( 2 | nil ) ) .
(1 | (3 | (2 | nil))):NatList
%NATLIST> parse 1 | 3 | 2 | nil .
(1 | (3 | (2 | nil))):NatList
%NATLIST> 

Up to here, all fine: all the terms we hoped were lists are properly parsed as NatList. Let us try the same with the two terms which should not parse correctly:

%NATLIST> parse 1 | 3 | 2 .
[Error]: no successful parse
(parsed:[ 1 ], rest:[ (| 3 | 2) ]):SyntaxErr
%NATLIST> parse (1 | 3) | 2 | nil .
[Error]: no successful parse
((( 1 | 3 ) | 2 | nil)):SyntaxErr

As we see, parsing did not succeed, which is what we expected. CafeOBJ also tells us up to which point the parsing worked, and where the first syntax error occurred.

This concludes the introduction to modules. Let us finish with a challenge for the interested reader.

Challenge

Enrich the NatList module with an operator len that computes the length of a list. Here is the basic skeleton to be filled in:

op len : NatList -> Nat .
eq len(nil) = ??? .
eq len(E:Nat | L:NatList) = ??? .

Test your code by looking at the rewriting trace.

CryptogramDon't Fear the TSA Cutting Airport Security. Be Glad That They're Talking about It.

Last week, CNN reported that the Transportation Security Administration is considering eliminating security at U.S. airports that fly only smaller planes -- 60 seats or fewer. Passengers connecting to larger planes would clear security at their destinations.

To be clear, the TSA has put forth no concrete proposal. The internal agency working group's report obtained by CNN contains no recommendations. It's nothing more than 20 people examining the potential security risks of the policy change. It's not even new: The TSA considered this back in 2011, and the agency reviews its security policies every year. But commentary around the news has been strongly negative. Regardless of the idea's merit, it will almost certainly not happen. That's the result of politics, not security: Sen. Charles E. Schumer (D-N.Y.), one of numerous outraged lawmakers, has already penned a letter to the agency saying that "TSA documents proposing to scrap critical passenger security screenings, without so much as a metal detector in place in some airports, would effectively clear the runway for potential terrorist attacks." He continued, "It simply boggles the mind to even think that the TSA has plans like this on paper in the first place."

We don't know enough to conclude whether this is a good idea, but it shouldn't be dismissed out of hand. We need to evaluate airport security based on concrete costs and benefits, and not continue to implement security theater based on fear. And we should applaud the agency's willingness to explore changes in the screening process.

There is already a tiered system for airport security, varying for both airports and passengers. Many people are enrolled in TSA PreCheck, allowing them to go through checkpoints faster and with less screening. Smaller airports don't have modern screening equipment like full-body scanners or CT baggage screeners, making it impossible for them to detect some plastic explosives. Any would-be terrorist is already able to pick and choose his flight conditions to suit his plot.

Over the years, I have written many essays critical of the TSA and airport security, in general. Most of it is security theater -- measures that make us feel safer without improving security. For example, the liquids ban makes no sense as implemented, because there's no penalty for repeatedly trying to evade the scanners. The full-body scanners are terrible at detecting the explosive material PETN if it is well concealed -- which is their whole point.

There are two basic kinds of terrorists. The amateurs will be deterred or detected by even basic security measures. The professionals will figure out how to evade even the most stringent measures. I've repeatedly said that the two things that have made flying safer since 9/11 are reinforcing the cockpit doors and persuading passengers that they need to fight back. Everything beyond that isn't worth it.

It's always possible to increase security by adding more onerous -- and expensive -- procedures. If that were the only concern, we would all be strip-searched and prohibited from traveling with luggage. Realistically, we need to analyze whether the increased security of any measure is worth the cost, in money, time and convenience. We spend $8 billion a year on the TSA, and we'd like to get the most security possible for that money.

This is exactly what that TSA working group was doing. CNN reported that the group specifically evaluated the costs and benefits of eliminating security at minor airports, saving $115 million a year with a "small (nonzero) undesirable increase in risk related to additional adversary opportunity." That money could be used to bolster security at larger airports or to reduce threats totally removed from airports.

We need more of this kind of thinking, not less. In 2017, political scientists Mark Stewart and John Mueller published a detailed evaluation of airport security measures based on the cost to implement and the benefit in terms of lives saved. They concluded that most of what our government does either isn't effective at preventing terrorism or is simply too expensive to justify the security it does provide. Others might disagree with their conclusions, but their analysis provides enough detailed information to have a meaningful argument.

The more we politicize security, the worse we are. People are generally terrible judges of risk. We fear threats in the news out of proportion with the actual dangers. We overestimate rare and spectacular risks, and underestimate commonplace ones. We fear specific "movie-plot threats" that we can bring to mind. That's why we fear flying over driving, even though the latter kills about 35,000 people each year -- about a 9/11's worth of deaths each month. And it's why the idea of the TSA eliminating security at minor airports fills us with fear. We can imagine the plot unfolding, only without Bruce Willis saving the day.

Very little today is immune to politics, including the TSA. It drove most of the agency's decisions in the early years after the 9/11 terrorist attacks. That the TSA is willing to consider politically unpopular ideas is a credit to the organization. Let's let them perform their analyses in peace.

This essay originally appeared in the Washington Post.

Worse Than FailureError'd: Truth in Errors

Jakub writes, "I'm not sure restarting will make IE 'normal', but yeah, I guess it's worth a shot."

 

"What else can I say? Honest features are honest," wrote Matt H.

 

"This was the sign outside the speaker ready-room for the duration of American Mensa's 2018 Annual Gathering at the JW Marriott in Indianapolis," Dave A., "Of course, we weren't in control of the signs...or the fans...or the fan speeds."

 

"Well, Cisco made an attempt to personalize this email," wrote Pascal.

 

Dave A. wrote, "Skynet is coming (to the Chubby Squirrel Brewery, opened that very day), but we can do anything we want with it because there are no Term(s) of Use."

 

"Yeah, GMail, I agree, there's no way that number of fireworks all at once is safe," Ranuka wrote.

 


Don MartiICYMI

Inner procrastinator: HEY LET'S FIND SOME K3WL ARTICLES TO READ ON THE INTERNET

Sense of duty: No, must update project status. (Ctrl-T to open new tab)

Web browser: HEY WEREN'T YOU LISTENING TO INNER PROCRASTINATOR JUST NOW? HERE IS SOME RECOMMENDED CONTENT

Me: Preferences → Home → Firefox Home Content. Uncheck everything except "Web Search" and "Bookmarks".

Anyway, happy Friday. Since you're already reading blogs, you might as well read something good, so here is some stuff that the RSS reader dragged in. (My linklog is no longer getting posted to Facebook because reasons, so if you were clicking on links from me there you will have to figure something else out. The raw linklog is: feed. Ideas?)

The Segway patent expires next June. If you thought the scooters of San Francisco were annoying this year, just wait for the summer of generic-Segway-on-demand startups.

xkcd: Voting Software

Why open source failed

The Google Funded Astroturf Group that Hacked The EU Copyright Vote (In Pictures)

Juul & its House of Smoke & Horrors

Parking Has Eaten American Cities

Selling a Good Time: Inside the Wild, Wacky World of Minor League Baseball Marketing

How to Stop Your Smart TV From Tracking What You Watch

How to Pull Off a Professional Video Call From Home

Architects Ask: Where Are the Spaces for Teen Girls?

Open Offices Make You Less Open

The AudioKit Synth One is a pro-level iPad synth that’s completely free

The advantages of an email-driven git workflow

Here’s One Union That Can’t Be Touched by ‘Right to Work’ Laws

The Innovation Stack: How to make innovation programs deliver more than coffee cups

I Delivered Packages for Amazon and It Was a Nightmare

New US Tariffs are Anti-Maker and Will Encourage Offshoring

How to spot a perfect fake: the world’s top art forgery detective

Help The Stranger and ProPublica Track Online Ads About the Seattle Head Tax, the Midterms, and More

Should Bankers Be Forced to Put Some Skin in the Game?

San Francisco is losing more residents than any other city in the US, creating a shortage of U-Hauls that puts a rental at $2,000 just to move to Las Vegas

Containers, Security, and Echo Chambers

Planet Linux AustraliaMatthew Oliver: Keystone Federated Swift – Final post coming

This is a quick post to say the final topology post is coming. It's currently in draft form and I hope to post it soon. I just realised it's been a while, so I thought I'd better give an update.

 

The last post goes into what auth does, what is happening in Keystone, and what needs to happen to really make this topology work, and then talks about the brittle POC I created to have something to demo. I'll be discussing other, better options/alternatives. All this means it has become much more detailed than I originally expected. I hope to get it up by the middle of next week.

 

Thanks for waiting.

Planet DebianJacob Adams: PGP Clean Room 1.0 Release

After several months of work, I am proud to announce that my GSoC 2018 project, the PGP/PKI Clean Room, has arrived at a stable (1.0) release!

PGP/PKI Clean Room

PGP is still used heavily by many open source communities, especially Debian. Debian’s web of trust is the last line of defense between a Debian user and a malicious software update. Given the availability of GPG subkeys, the safest thing would be to store one’s private GPG master key offline and use subkeys regularly. However, many do not do this as it can be a complex and arcane process.

The PGP Clean Room simplifies this by allowing a user to safely create and store their master key offline while exporting their subkeys, either to a USB drive for importing on their computer, or to a smartcard, where they can be safely used without ever directly exposing one’s private keys to an Internet-connected computer.

The PGP Clean Room also supports PKI, allowing one to create a Certificate Authority and issue certificates from Certificate Signing Requests.

Screenshots

Main Menu

Progress Bar

Setting up a Smartcard

Usage

You’ll probably want to read the README to understand how to build and use this project.

It contains instructions on how to obtain the latest build, as well as how to verify it, use it and build it from source.

Translators Wanted

The application has gettext support and a partial German translation, but now that strings are final I would love to support more languages than just English! See the PGPCR README to get started, and thank you for your help!

Source Code

pgpcr: This repository contains the source code of the PGP Clean Room application. It allows one to manage PGP and PKI/CA keys, and export PGP subkeys for day-to-day usage.

make-pgp-clean-room: This repository holds all of the configuration required to build a live CD for the PGP Clean Room application. This is the recommended way to run the application and allows for easy offline key pair management. Everything from commit a50e2aae forward was part of GSoC 2018.

Development Log

The project changelog, which was a day-by-day log of my activities, can be found here.

You can find links to all my weekly reports on the project wiki page.

Bugs Filed

Over the course of this project I also filed a few bugs with other projects.

  • Debian #903681, about psf-unifont’s unneeded dependency on bdf2psf.
  • GNUPG T4001, about exposing import and export functions in the GPGME python bindings.
  • GNUPG T4052, about GPG’s inability to guess an algorithm for P-curve signing and authentication keys.

More Screenshots

Generating GPG Key

Generating GPG Signing Subkey

GPG Key Backup

Loading a GPG Key from a Backup

CA Creation

Advanced Menu


Cory DoctorowCome see me at the Edinburgh Festival and/or Worldcon!

I’m heading to Scotland for the Edinburgh Festival where I’m appearing with the wonderful Ada Palmer on August 12th at 845PM (we’re talking about the apocalypse, science fiction and hopefulness); from there, I’m heading to the 76th World Science Fiction Convention in San Jose, California, where I’ll be doing a bunch of panels, signings and a reading.


Here’s my Worldcon schedule:

Friday August 17:

11AM, Signing, The Book Bin table, dealer’s room R4

12PM, Borderlines, SJCC 210B, with Joanna Mead, Christopher Brown, Pepe Rojo and Kelly Robson: “National borders are historically recent. Borders between races and languages are arbitrary and arguable. Is it even possible or useful to have ‘borders’ between spacefaring civilizations? In cyberspace? How about between Science Fiction and Fantasy and Horror, or between cyberpunk and MilSF and ‘hard’ SF?”

130PM, Reading, SJCC211A

4PM Tachyon booth signing

Saturday, August 18:

12PM, 50 Years of Gratitude: The Clarion Workshop, SJCC210B, with Karen Joy Fowler, Christian Coleman, Nancy Etchemendy, James Patrick Kelly, Pat Murphy and Lilliam Rivera: “Join Karen Joy Fowler and other authors who have all been involved with The Clarion Workshop. Hear the history, learn about the participants, and see what to expect from the next 50 years!”

Sunday, August 19:

3PM, So You Want to Build a Science Fictional Device, SJCC210G, with SB Divya, Sydney Thomson, and Bill Higgins: “Join us for an improv-technology panel – where the audience asks us to design a SFnal device, and the panelists have 5 minutes to come up with our best ‘non-handwavium’ answers.”

4PM, The Dark Side of the Digital Frontier — Facing Our Addictions, SJCC211C, with Rick Canfield and Brad Templeton: “From ‘1984’ to video games, what remains science fiction and science fact? A conversation with internet pioneers, digital rights activists, and emerging technologists on the ethics of our digital addictions in an age without Net Neutrality.”

5PM: The Working Class in Science Fiction, SJCC210F, with Olav Rokne, Eric Flint and Eileen Gunn: “Labor unions are an important part of the everyday life for millions of American workers, yet labor unions seem to be largely absent from our science fictional narratives, as compared to the presence of corporate businesses. This panel will explore whether there’s an underlying assumption in science fiction that workers will not organize themselves, or whether there are alternative social models that are being explored. In the process, panelists will attempt to identify and analyze a very small but diverse body of SF works that do include images of unions, in ways that range from the symptomatic to the radically suggestive.”

Cory DoctorowMy closing keynote from the second Decentralized Web Summit

Two years ago, I delivered the closing keynote at the Internet Archive’s inaugural Decentralized Web event; last week, we had the second of these, and once again, I gave the closing keynote, entitled Big Tech’s problem is Big, not Tech. Here’s the abstract:

For decades, we fought complacency over Big Tech. That’s over. The techlash is here, and with it, a new and scarier problem: that we’ll tame big tech by regulating it with expensive compliance rules that no startup could match, enthroning Big Tech as permanent (regulated) monarchs of the digital age.

In this keynote address, author and advocate Cory Doctorow argues that Big Tech is a problem, but the problem isn’t “Tech,” it’s “BIG.” Giants get to bend policy to suit their ends, they get to strangle potential competitors in their infancy, and they are the only game in town, so they can put the squeeze on users and suppliers alike.

Surrender is premature.

Nerds don’t take it or leave it: they take the parts that work and block the parts that don’t. That’s what we have to offer to everyone else: the training and tools to decide what tech can do with us, our data, and our communications. The MOST democratic future is one where everyone gets to hack, where we seize the means of computation and distribute it to everyone.

Worse Than FailureCodeSOD: Knowledge Transfer

Lucio Crusca is a consultant with a nice little portfolio of customers he works with. One of those customers was also a consultancy, and their end customer had a problem. The end customer's only in-house developer, Tyrell, was leaving. He’d worked there for 8 years, and nobody else knew anything about his job, his code, or really what exactly he’d been doing for 8 years.

They had two weeks to do a knowledge transfer before Tyrell was out the door. There was no chance of on-boarding someone in that time, so they wanted a consultant who could essentially act as a walking, talking USB drive, simply holding all of Tyrell’s knowledge until they could hire a full-time developer.

As you can imagine, the two week brain-dump turned into a two week “documentation crunch”, as pretty much nothing had any real documentation. That led to comments like:

  /**
   * Parses a log file. This function looks for the string "ERR-001" in the log file
   * produced by the parser daemon (see project NetParserDaemon).
   * If found it returns the file contents with the string. Otherwise it
   * returns the file contents without it. 
   * Known bug: it needs more optimizations to handle very big files. In the 
   * meantime we manually restart the parser daemon from time to time, so the
   * log file doesn't grow too much.
   * @param log_file_name File name
   * @return ArrayList
   */
  public static ArrayList parseLogFile(String log_file_name) {
    …
  }

Read that comment, read the signature, and tell me: do you have any idea what that method does? Because trust me, after reading the implementation, it’s not going to get any clearer.

  public static ArrayList parseLogFile(String log_file_name)
  {
    try
    {
      ArrayList result = new ArrayList();
      File f = new File(log_file_name);
      FileInputStream s = new FileInputStream(log_file_name);
      InputStreamReader r = new InputStreamReader(s);
      BufferedReader r2 = new BufferedReader(r);
      String line = null;
      int retry = 1;
      while (r2.readLine() != null)
      {
        try
        {
          for (int i = 0; i < retry; i++)
          {
            line = r2.readLine();
            result.add(line);
          }
          if (line.contains("ERR-001"))
            return result;
        }
        catch (OutOfMemoryError e)
        {
          System.gc();
          line = "Retry";
          retry = (int)(Math.random() * 10 + 10);
        }
      }
    }
    catch(Exception e)
    {
      ArrayList result = new ArrayList();
      result.add("ERR-001: File not found");
      return result;
    }
    finally
    {
      // always return the file contents, even when there are exceptions
      return parseLogFile(log_file_name);
    }
  }

There’s just… so much going on here. First, this method must be dead code. The return in the finally block always trumps any other return in Java, which means it would call itself recursively until a stack overflow. Also, don’t put returns in your finally block.

So it doesn’t work, clearly hasn’t been tested, and certainly can’t be invoked.

But the core loop demonstrates its own bizarre logic. We retry reading within a for loop; by default, though, we only do one retry. If and only if we encounter an out of memory error, then we set the retry count to a random value and repeat the loop. Oh, and don’t forget to garbage collect first. If any other exception happens, we’ll catch that and just return an ArrayList with “ERR-001: File not found” in it, which raises some questions about what on earth ERR-001 actually means to this application.
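
For contrast, here is a sketch of what the method apparently intends (a guess on my part; the real requirements were never documented): read the file line by line and return its contents up to and including the “ERR-001” marker, or the whole file if the marker never appears. It assumes the usual java.io and java.util imports from the surrounding class.

  public static ArrayList<String> parseLogFile(String logFileName) throws IOException {
    ArrayList<String> result = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(logFileName))) {
      String line;
      while ((line = reader.readLine()) != null) {
        result.add(line);
        // Per the original comment: stop once the marker line has been included.
        if (line.contains("ERR-001"))
          break;
      }
    }
    return result;
  }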

By the time the company actually hired on a full-time developer, Lucio had already forgotten most of the knowledge dump, and the rushed documentation and broken code meant that there really wasn’t much knowledge to transfer to the new developer, beyond, “Delete it, destroy the backups, and start from scratch.”

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

CryptogramSpiderOak's Warrant Canary Died

BoingBoing has the story.

I have never quite trusted the idea of a warrant canary. But here it seems to have worked. (Presumably, if SpiderOak wanted to replace the warrant canary with a transparency report, they would have written something explaining their decision. To have it simply disappear is what we would expect if SpiderOak were being forced to comply with a US government request for personal data.)

EDITED TO ADD (8/9): SpiderOak has posted an explanation claiming that the warrant canary did not die -- it just changed.

That's obviously false, because it did die. And a change is the functional equivalent -- that's how they work. So either they have received a National Security Letter and now have to pretend they did not, or they completely misunderstood what a warrant canary is and how it works. No one knows.

I have never fully trusted warrant canaries -- this EFF post explains why -- and this is an illustration.

Planet DebianNorbert Preining: DebConf 18 – Day 2

Although I have already returned from this year’s DebConf, I will try to continue writing up my comments on the talks I attended. The first one was DebConf 18 – Day 1; here we go for Day 2.

The day started with me sleeping in – having the peace of getting up at one’s own pace without a small daughter waking you is a precious experience 😉 I spent my morning working on my second presentation and joined the conference for lunch.

After the lunch came a Plenary Talk/Discussion/Round Table on Ignoring Negativity. I tried to follow the discussion but somehow couldn’t keep myself awake for more than 30 minutes. For me this was the best sleeping pill ever encountered. The time I could listen to was mostly filled with voluptuous and elaborate verbiage I couldn’t digest. Missing IQ I guess on my side.

Next was a very interesting presentation on git-debrebase, a new tool for managing Debian packaging in git. I was very much impressed by the very tricky usage of git in areas I have never touched. Unfortunately one sour point did remain through to the end – for now it does not fully support collaboration, in the sense that it cannot deal with diverging histories, one of the big features of git. Fortunately, as I learned after asking in the Q&A session, problems only arise when certain very restricted branches diverge, not in normal operation.

After the coffee break I attended Autodeploy from salsa, which was technically interesting but not directly usable for my own development, so I somehow dreamed through the talk.

The last talk for today was Server freedom: why choosing the cloud, OpenStack and Debian: With more and more services moving into the cloud, the question of lock-in is getting more and more pronounced. In my work environment I am dealing with this and we hope by using Kubernetes Cluster Federation and multi-cloud setups we can avoid the lock-in. Thomas gave a very interesting presentation on his work on OpenStack and the tools around it. Very promising and technically on a high level.

But the highlight of the day came after the dinner – the Cheese and Wine party. I cannot express my gratitude to all those who brought excellent cheese from their home countries. Life in Japan, where micro-slices of good cheese cost up to 10USD and more, is somehow a life of cheese deprivation. Enjoying this huge variety from all over the world was like heaven for me. I myself brought some sake and dried fish and burdock to contribute what I could.

During the Cheese and Wine party we were also treated to a Kavalan Whiskey which has won some of the most prestigious prizes for Whiskey making just this year in May. On my way back home to Japan I was sure to get two bottles for my own collection.

After having tasted countless wines and cheeses, plus a bit of this wonderful Kavalan, and having enjoyed chatting with many of the participants mostly on matters unrelated to Debian, I returned late to my hotel off campus.

Thanks go to all the organizers of the conference, and in particular of the Cheese and Wine party, for this spectacular event!

,

Planet DebianAthos Ribeiro: Building Debian packages with clang replacing gcc with Open Build Service

The production instance of our Open Build Service can be found here.

This is the seventh post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Triggering Debian Unstable builds on Open Build Service

As I have been reporting in previous posts, whenever one tried to build packages against Debian unstable with the Debian Stretch OBS packages, the OBS scheduler would download the package dependencies and clean up the cache before the downloads were completed. This would cause builds to never get triggered: OBS would keep downloading dependencies forever.

This would only happen for builds on debian sid (unstable) and buster (testing).

After some investigation (as already reported before), we realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support debian source packages built with dpkg >= 1.19.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we would still get the same issue with unstable and testing builds.

We spent a lot of time stuck on this issue, and I ended up proposing to host the whole archive in our OBS instance so we could use it to satisfy dependencies for new packages. It turned out to be a terrible idea, since it would be unfeasible to test locally and we would have to update the packages on each accepted upload into debian unstable (note that several packages get updated every day). Hence, I kept studying OBS code to understand how builds get triggered and why it would not happen for unstable/testing packages.

After diving into OBS code, we realized that it uses libsolv to read repositories. In libsolv’s change history, we found that there was an arbitrary size limit on the debian control file, which was fixed in a newer version of the package.

After updating the libsolv0, libsolv-perl and libsolvext0 packages (using Debian Buster versions), we could finally trigger Debian unstable builds.
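
On a Stretch host that already has buster in its apt sources, that update amounts to something like the following sketch (exact pinning details depend on the setup):

apt-get install -t buster libsolv0 libsolv-perl libsolvext0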

Substituting clang for gcc at build time

With OBS able to trigger builds for Debian unstable, we now needed to replace builds with gcc by builds with the clang compiler. As suggested by my mentor, Sylvestre, we wanted to substitute clang binaries for the gcc ones. For that, we needed to find a way to run a script suggested by Sylvestre in our build environments prior to triggering the builds.

Fortunately, Open Build Service has a Source Services feature that can do exactly that: change source code before builds or run specific tasks in the build environment at build time. For the latter (which is what we needed to substitute the gcc binaries), OBS requires one to build an obs-service-SERVICE_NAME package, which will be injected into the build environment as a dependency and will perform whatever task that package specifies.

We built our obs-service-clang-build package with the needed scripts to perform the gcc/clang substitutions in build time.

To activate the substitution task for a package, it should contain a _service file in its root path (along with the debian and pristine sources) with the following content:

<services>
  <service name="clang_build" mode="buildtime" />
</services>

If everything is correct, we should see

[  TIME in seconds] Running build time source services...

in the log file, followed by the gcc substitution commands.

Automating clang builds for new Debian unstable uploads

Now, we must proceed to trigger Debian unstable builds for newly accepted uploads. To do so, we wrote a cron job to monitor the debian-devel-changes mailing list. We check for new source uploads to debian unstable and trigger a new build for those new packages (we added a 4 hour cool-down before triggering the build to allow the package to propagate through the mirrors).

The results can be seen at https://irill8.siege.inria.fr/project/show/Debian:Unstable:Clang

Updating links in Debian distro-tracker

Debian distro-tracker has links to clang builds, which would point to the former service at clang.debian.net. We now want to change those links to point to our new OBS instance. Thus, we opened a new pull request to perform that change.

Next steps

Now that we have an instance of our project up and running, there are a few tasks left for our final GSoC submission.

  • Migrate salt scripts from my personal github namespace to https://github.com/opencollab/llvm-slave-salt. A few adjustments may be needed due to the environment differences.
  • Write some documentation for the project and a guide on how to add new workers
  • Create a separate github project for the cron job which analyzes the debian-devel-changes mailing list
  • Create a separate github project for the obs-service-clang-build package

Planet Debianbisco: Final GSOC 2018 Report

This is the final report of my 2018 Google Summer of Code project. It also serves as my final code submission.

Short overview:

Description

The main project was nacho, the web frontend for the guest accounts of the Debian project. The software is now in a state where it can be used in a production environment and there is already work being done to deploy the application on Debian infrastructure. It was a lot of fun programming that software and I learned a lot about Python and Django. My mentors gave me valuable feedback and pointed me in the right direction whenever I had questions. There are still some ideas or features that can be implemented and I’m sure some feature requests will come up in the future. Those can be tracked in the issue tracker in the salsa repository. An overview of the activity in the project, including both commits and issues, can be seen in the activity list.

The SSO evaluations I carried out give an overview of existing solutions and will help in the decision-making process. The README in the evaluation repository has a table that summarizes the findings of the evaluations.

The branch of nm.debian.org that implements oauth2 authentication against an oauth2 provider serves as a proof of concept of how the authentication can be implemented, and it can be used to integrate the functionality into other services.

I’ve learned a lot in the last few months and it was a pleasure to work with babelouest and formorer. Debian is an interesting project and I plan to keep on contributing, or maybe even intensify my contributions. Maybe I can use the oauth2 authentication on nm.debian.org for my own application soon ;)

Reports

The list of reports in chronological order from top to bottom:

Worse Than FailureCodeSOD: CDADA

If there’s one big problem with XML, it’s arguably that XML is overspecified. That’s not all bad- it means that every behavior, every option, every approach is documented, schematized, and defined. That might result in something like SOAP, which creates huge, bloated payloads, involves multiple layers of wrapping tags, integrates with discovery schemas, and has additional federation and in-built security mechanisms, each of which is itself defined in XML. And let’s not even start on XSLT and XQuery.

It also means that if you have a common task, like embedding arbitrary content in a safe fashion, there’s a well-specified and well-documented way to do it. If you did want to embed arbitrary content in a safe fashion, you could use the <![CDATA[Here is some arbitrary content]]> directive. It’s not a pretty way of doing it, but it means you don’t have to escape anything but ]]>, which is only a problem in certain esoteric programming languages with rude names.

So, there’s an ugly, but perfectly well specified and simple to use method of safely escaping content to store in XML. You know why we’re here. Carl W was going through some of the many, many gigs of XML data files his organization uses, and found:

<CommandLine>&amp;lt%3bPATH&amp;gt%3bSOME_VALUE_HERE&amp;lt%3b/PATH&amp;gt%3b</CommandLine>

The specific sequence of mangling operations performed isn’t documented anywhere, but you can figure it out. To decode this, you first have to convert the character entities back into actual characters- which really is just the ampersands.

Now you have: &lt%3bPATH&gt%3bSOME_VALUE_HERE&lt%3b/PATH&gt%3b.

This is obviously URL encoded. So we can reverse that, yielding &lt;PATH&gt;SOME_VALUE_HERE&lt;/PATH&gt;.

Now, we can decode the character entities here.

<PATH>SOME_VALUE_HERE</PATH>

XML documents nest quite neatly, so why even do this escaping rigamarole? If you don't want it as XML, why not use CDATA? Why URL encode any of this? Carl had neither the time nor the documentation to figure it out. He simply changed SOME_VALUE_HERE to NEW_VALUE_HERE, and moved on to the next problem.
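
For comparison, either of the two sane encodings the paragraph above gestures at would have worked:

<CommandLine><PATH>SOME_VALUE_HERE</PATH></CommandLine>
<CommandLine><![CDATA[<PATH>SOME_VALUE_HERE</PATH>]]></CommandLine>

Both round-trip through any conforming XML parser with no hand-rolled escaping at all.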

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianLouis-Philippe Véronneau: I am Tomu!

While I was away for DebConf18, I received the Tomu boards I ordered on Crowdsupply back when the project was still going through crowdfunding.

A Tomu board next to a US cent for size comparison

For those of you who don't know what the Tomu is, it's a tiny ARM microprocessor board which fits in your USB port. There are a bunch of neat things you can do with it, but I use it as a U2F token.

The design is less sleek than a YubiKey nano and it can't be used as a GPG smartcard (yet!), but it runs free software on open hardware and everything can be built using a free software toolchain.

It also cost me a fraction of the price of a Yubico device (14 CAD with shipping vs 70+ CAD for the YubiKey nano), so I could literally keep one for myself and give away four Tomus to my friends and family for the price of a single YubiKey nano.

But yeah, the deal breaker really is the openness of the device. I don't see how I could trust a proprietary device that tells me it's very secure when I can't see what it's doing with my U2F private key...

Flashing the board

The Tomu can be used as a U2F token by flashing chopstx on it, the same software used in the gnuk project led by the awesome Niibe-san.

I had a gnuk token a while ago, but I ended up giving it away since I found the flashing process painful and I didn't really have a use case for a GPG smartcard at the time.

The Tomu board in the bootloader

On the contrary, flashing the Tomu was a walk in the park. The Tomu's bootloader supports dfu-util so it was only a matter of installing it on my computer, building the software and pushing it on the board.
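
The push step itself is a one-liner; here is a sketch, assuming Debian and a firmware image u2f.bin from your chopstx build (the file name will vary):

sudo apt install dfu-util
sudo dfu-util --download u2f.bin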

I did encounter a few small problems during the process, but I sent a series of patches upstream to try to fix that and make the whole experience smoother.

Here's a few things you should look out for while flashing a Tomu for to be used as a U2F token.

  • Make sure you are running the latest version of the bootloader. You can find it here.
  • Your U2F private key will be erased if you update the firmware. Be sure to generate it on your host computer and keep an encrypted copy of it somewhere.
  • For now, the readout protection is not enabled by default. Be sure to use make ENFORCE_DEBUG_LOCK=1 when building the chopstx binary.
  • Firefox doesn't support U2F out of the box on Debian. You have to enable a few options in about:config and use a plugin for it to work properly.
  • You need to add a new udev rule for the Tomu to be seen as a U2F device by your system (see the sketch below).
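
As an illustration of that last point, a rule along these lines in /etc/udev/rules.d/ should do it; the vendor and product IDs here are placeholders, so check lsusb for what your board actually reports:

# /etc/udev/rules.d/70-tomu-u2f.rules (IDs are hypothetical -- verify with lsusb)
SUBSYSTEM=="usb", ATTRS{idVendor}=="1209", ATTRS{idProduct}=="70b1", TAG+="uaccess"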

,

Planet DebianBenjamin Mako Hill: Lookalikes

Am I leading a double life as an actor in several critically acclaimed television series?

I ask because I was recently accused of being Paul Sparks—the actor who played gangster Mickey Doyle on Boardwalk Empire and writer Thomas Yates in the Netflix version of House of Cards. My accuser reacted to my protestations with incredulity. Confronted with the evidence, I’m a little incredulous myself.


Previous lookalikes are here.

Rondam RamblingsTrump fiddles while the West burns

Fire officials in California first started keeping records in 1932, when the Matilija fire burned 220,000 acres.  It would be 71 years before that record was broken by the Cedar fire (273,000 acres) in 2003.  That record was very nearly broken 9 years later, by the Rush fire (272,000 acres) in 2012, then it was broken 5 years later by the Thomas fire in 2017 (282,000 acres).  Now, less than one

Planet DebianLucas Kanashiro: DebCamp and DebConf 18 summary

As should come as no surprise, DebCamp and DebConf 18 were amazing! I worked on many things that I had not had enough time to accomplish before; I also had the opportunity to meet old friends and new people. Finally, I engaged in important discussions regarding the Debian project.

Following up on my last blog post, here is what I did in Hsinchu during those days.

  • The Debconf 19 website has an initial version running \o/ I want to thank Valessio Brito and Arthur Del Esposte for helping me build this first version, and also thank tumblingweed for the explanation of how wafer works.

  • The Perl Team Rolling Sprint was really nice! Four people participated, and we were able to get a bunch of things done, you can see the full report here.

  • Arthur Del Esposte (my GSoC intern) made some improvements to his work, and also collected some feedback from other developers. I hope he will blog about these things soon. You can find his presentation about his GSoC project here; he is the first student in the video :)

  • I worked on some Ruby packages. I uploaded some new dependencies of Rails 5 to unstable (which Praveen et al. were already working on). I hope we can make the Rails 5 package available in experimental soon, and ship it in the next Debian stable release. I also discussed the Redmine package with Duck (Redmine’s co-maintainer) but did not manage to work on it.

Besides the technical part, this was my first time in Asia! I loved the architecture (despite the tight streets), the night markets, the temples and so on. Some pictures that I took are below:

And in order to provide a great experience for the Debian community next year in Curitiba - Brazil, we already started to prepare the ground for you :)

See you all next year in Curitiba!

Krebs on SecurityFlorida Man Arrested in SIM Swap Conspiracy

Police in Florida have arrested a 25-year-old man accused of being part of a multi-state cyber fraud ring that hijacked mobile phone numbers in online attacks that siphoned hundreds of thousands of dollars worth of bitcoin and other cryptocurrencies from victims.

On July 18, 2018, Pasco County authorities arrested Ricky Joseph Handschumacher, an employee of the city of Port Richey, Fla., charging him with grand theft and money laundering. Investigators allege Handschumacher was part of a group of at least nine individuals scattered across multiple states who for the past two years have drained bank accounts via an increasingly common scheme involving mobile phone “SIM swaps.”

A SIM card is the tiny, removable chip in a mobile device that allows it to connect to the provider’s network. Customers can legitimately request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

But SIM swaps are frequently abused by scam artists who trick mobile providers into tying a target’s service to a new SIM card and mobile phone that the attackers control. Unauthorized SIM swaps often are perpetrated by fraudsters who have already stolen or phished a target’s password, as many banks and online services rely on text messages to send users a one-time code that needs to be entered in addition to a password for online authentication.

In some cases, fraudulent SIM swaps succeed thanks to lax authentication procedures at mobile phone stores. In other instances, mobile store employees work directly with cyber criminals to help conduct unauthorized SIM swaps, as appears to be the case with the crime gang that allegedly included Handschumacher.

A WORRIED MOM

According to court documents, investigators first learned of the group’s activities in February 2018, when a Michigan woman called police after she overheard her son talking on the phone and pretending to be an AT&T employee. Officers responding to the report searched the residence and found multiple cell phones and SIM cards, as well as files on the kid’s computer that included “an extensive list of names and phone numbers of people from around the world.”

The following month, Michigan authorities found the same individual accessing personal consumer data via public Wi-Fi at a local library, and seized 45 SIM cards, a laptop and a Trezor wallet — a hardware device designed to store cryptocurrency account data. In April 2018, the mom again called the cops on her son — identified only as confidential source #1 (“CS1”) in the criminal complaint — saying he’d obtained yet another mobile phone.

Once again, law enforcement officers were invited to search the kid’s residence, and this time found two bags of SIM cards and numerous driver’s licenses and passports. Investigators said they used those phony documents to locate and contact several victims; two of the victims each reported losing approximately $150,000 in cryptocurrencies after their phones were cloned; the third told investigators her account was drained of $50,000.

CS1 later told investigators he routinely conducted the phone cloning and cashouts in conjunction with eight other individuals, including Handschumacher, who allegedly used the handle “coinmission” in the group’s daily chats via Discord and Telegram. Search warrants revealed that in mid-May 2018 the group worked in tandem to steal 57 bitcoins from one victim — then valued at almost $470,000 — and agreed to divide the spoils among members.

GRAND PLANS

Investigators soon obtained search warrants to monitor the group’s Discord server chat conversations, and observed Handschumacher allegedly bragging in these chats about using the proceeds of his alleged crimes to purchase land, a house, a vehicle and a “quad vehicle.” Interestingly, Handschumacher’s Facebook page remains public, and is replete with pictures that he posted of recent new vehicle acquisitions, including a pickup truck and multiple all-terrain vehicles and jet skis.

The Pasco County Sheriff’s office says their surveillance of the Discord server revealed that the group routinely paid employees at cellular phone companies to assist in their attacks, and that they even discussed a plan to hack accounts belonging to the CEO of cryptocurrency exchange Gemini Trust Company. The complaint doesn’t mention the CEO by name, but the current CEO is bitcoin billionaire Tyler Winklevoss, who co-founded the exchange along with his twin brother Cameron.

“Handschumacher and another co-conspirator talk about compromising the CEO of Gemini and posted his name, date of birth, Skype username and email address into the conversation,” the complaint reads. “Handschumacher and the co-conspirators discuss compromising the CEO’s Skype account and T-Mobile account. The co-conspirator states he will call his ‘guy’ at T-Mobile to ask about the CEO’s account.”

Court documents state that the group used Coinbase.com and multiple other cryptocurrency exchanges to launder the proceeds of their thefts in a bid to obfuscate the source of the stolen funds. Subpoenas to Coinbase revealed Handschumacher had a total of 82 bitcoins sold from or sent to his account, and that virtually all of the funds were received via outside sources (as opposed to being purchased through Coinbase).

Neither Handschumacher nor his attorney responded to requests for comment. The complaint against Handschumacher says that following his arrest he confessed to his involvement in the group, and that he admitted to using his cell phone to launder cryptocurrency in amounts greater than $100,000.

But on July 23, Handschumacher’s attorney entered a plea of “not guilty” on behalf of his client, who is now facing charges of grand larceny, money laundering, and accessing a computer or electronic device without authorization.

Handschumacher’s arrest comes on the heels of an apparent law enforcement crackdown on individuals involved in SIM swap schemes. As first reported by Motherboard.com earlier this month, on July 12, police in California arrested Joel Ortiz — a 20-year-old college student accused of being part of a group of criminals who hacked dozens of cellphone numbers to steal more than $5 million in cryptocurrency.

The Motherboard story notes that Ortiz allegedly was an active member of OGusers[dot]com, a marketplace for Twitter and Instagram usernames that SIM swapping hackers use to sell stolen accounts — usually one- to six-letter usernames. Short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them.

Sources familiar with the investigation tell KrebsOnSecurity that Handschumacher also was a member of OGUsers, although it remains unclear how active he may have been there.

WHAT YOU CAN DO

All four major U.S. mobile phone companies allow customers to set personal identification numbers (PINs) on their accounts to help combat SIM swaps, as well as another type of phone hijacking known as a number port-out scam. But these precautions may serve as little protection against crooked insiders working at mobile phone retail locations. On May 18, KrebsOnSecurity published a story about a Boston man who had his three-letter Instagram username hijacked after attackers executed a SIM swap against his T-Mobile account. According to T-Mobile, that attack was carried out with the help of a rogue company employee.

SIM swap scams illustrate a crucial weak point of multi-factor authentication methods that rely on a one-time code sent either via text message or an automated phone call. If an online account that you value offers more robust forms of multi-factor authentication — such as one-time codes generated by an app, or better yet hardware-based security keys — please consider taking full advantage of those options.

If, however, SMS-based authentication is the only option available, this is still far better than simply relying on a username and password to protect the account. If you haven’t done so lately, head on over to twofactorauth.org, which maintains probably the most comprehensive list of which sites support multi-factor authentication, indexing each by type of site (email, gaming, finance, etc) and the type of added authentication offered (SMS, phone call, software/hardware token, etc.).

Rondam RamblingsMore from the Republican hypocrisy files

Republicans are oddly selective about which parts of the Constitution they pay attention to.  A new poll shows that 43% of Republicans approve of giving the president the power to shut down the media, a clear violation of the First Amendment. So... Republicans go absolutely apoplectic when the government threatens to take their guns, but have absolutely no problem with the government taking away

Planet DebianVincent Sanders: The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office.

I fear that I may have worked in a similar office environment to Robert Frost. Certainly his description is familiar to those of us who have been subjected to modern "open plan" offices. Such settings may work for some types of job but for myself, as a programmer, it has a huge negative effect.

My old basement office

When I decided to move on from my previous job my new position allowed me to work remotely. I have worked from home before so knew what to expect. My experience led me to believe the main aspects to address when home working were:
Isolation
This is difficult to mitigate but frequent face-to-face meetings and video calls with colleagues can address it, provided you are aware that some managers have a terrible habit of "out of sight, out of mind" management.
Motivation
You are on your own a lot of the time which means you must motivate yourself to work. Mainly this is achieved through a routine. I get dressed properly, start work the same time every day and ensure I take breaks at regular times.
Work life balance
This is often more of a problem than you might expect and not in the way most managers assume. A well-motivated software engineer can have a terrible habit of suddenly discovering it is long past when they should have finished work. It is important to be strict with yourself and finish at a set time.
Distractions
In my previous office, testers, managers, production and support staff were all mixed in with the developers, resulting in a lot of distractions. However, when you are at home there are also a great number of possible distractions. It can be difficult to avoid friends and family assuming you are available during working hours to run errands. I find I need to carefully budget time for such tasks and take it out of my working time as if I were actually in an office.
Environment
My previous office had "tired" furniture and decoration in an open plan which often had a negative impact on my productivity. When working from home I find it beneficial to partition my working space from the rest of my life and to ensure family know that when I am in that space I am unavailable. You inevitably end up spending a great deal of time in this workspace and it can have a surprisingly large effect on your productivity.
Confident that I knew what I was letting myself in for, I knew I required a suitable place to work. In our previous home the only space available for my office was a four by ten foot cellar room with artificial lighting. Despite its size I was generally productive there, as there were few distractions and the door let me "leave work" at the end of the day.

Garden office was assembled June 2017
This time my resources to create the space are larger and I wanted a place I would be comfortable to spend a lot of time in. Initially I considered using the spare bedroom which my wife was already using as a study. This was quickly discounted as it would be difficult to maintain the necessary separation of work and home.

Instead we decided to replace the garden shed with a garden office. The contractor ensured the structure selected met all the local planning requirements while remaining within our budget. The actual construction was surprisingly rapid. The previous structure was removed and a concrete slab base was placed in a few hours on one day and the timber building erected in an afternoon the next.

Completed office in August 2018
The building arrived separated into large sections on a truck which the workmen assembled rapidly. They then installed wall insulation, glazing and roof coverings. I had chosen to have the interior finished in a hardwood plywood, which is hard wearing and easy to apply a finish to as required.

Work desk in July 2017
Although the structure could have been painted at the factory, Melodie and I applied this ourselves to keep the project in budget. I laid a laminate floor suitable for high-moisture areas (the UK is not generally known as a dry country) and Steve McIntyre and Andy Simpkins assisted me with various additional tasks to turn it into a usable space.

To begin with I filled the space with furniture I already had, for example the desk was my old IKEA Jerker which I have had for over twenty years.

Work desk in August 2018
Since then I have changed the layout a couple of times but have finally returned to having my work desk in the corner looking out over the garden. I replaced the Jerker with a new IKEA Skarsta standing desk, PEXIP bought me a nice work laptop and I acquired a nice print from Lesley Mitchell, but overall little has changed in my professional work area in the last year and I have a comfortable environment.

Cluttered personal work area
In addition the building is large enough that there is space for my electronics bench. The bench itself was given to me by Andy. I purchased some inexpensive kitchen cabinets and worktop (white is cheapest) to obtain a little more bench space and storage. Unfortunately all those flat surfaces seem to accumulate stuff at an alarming rate and it looks like I need a clear out again.

In conclusion I have a great work area which was created at a reasonable cost.

There are a couple of minor things I would do differently next time:
  • Position the building better with respect to the boundary fence. I allowed too much distance on one side of the structure which has resulted in an unnecessary two foot wide strip of unusable space.
  • Ensure the door was made from better materials. The first winter in the space showed that the door was a poor fit as it was not constructed to the same standard as the rest of the building.
  • The door should have been positioned on the end wall instead of the front. Use of the building showed moving the door would make the internal space more flexible.
  • Planned the layout more effectively ahead of time, ensuring I knew where services (electricity) would enter and where outlets would be placed.
  • Ensure I have an electrician on site for the first fix so electrical cables could be run inside the walls instead of surface trunking.
  • Budget for air conditioning as so far the building has needed heating in winter and cooling in summer.
In essence, my main observation is that better planning of the details matters. If I had been more aware of this a year ago, perhaps I would not now be budgeting to replace the door and fit air conditioning.

Planet DebianJoachim Breitner: Swing Dancer Profile

During my two years in Philadelphia (I was a post-doc with Stephanie Weirich at the University of Pennsylvania) I danced a lot of Swing, in particular at the weekly social “Jazz Attack”.

They run a blog, that blog features dancers, and this week – just in time for my last dance – they featured me with a short interview.

Cory DoctorowTalking copyright, internet freedom, artistic business models, and antitrust with Steal This Show

I’m on the latest episode of Torrentfreak’s Steal This Show podcast (MP3), where I talk with host Jamie King about “Whether file-sharing & P2P communities have lost the battle to streaming services like Netflix and Spotify, and why the ‘copyfight’ is still important; how the European Copyright Directive eats at the fabric of the Web, making it even harder to compete with content giants; and why breaking up companies like Google and Facebook might be the only way to restore an internet — and a society — we can all live with.”

Sociological ImagesWhat’s Trending? Deep State Searches

This week a host of digital platforms gave Alex Jones’ programming the boot. Conspiracy theories have big consequences in a polarized political world, because they can amplify basic human skepticism about political institutions into absurd, and sometimes violent, belief systems.

But the language of mainstream politics can often work the same way when leaders use short, pithy phrases to signal all kinds of beliefs. From “mistakes were made” to the “food stamp president” slur, careful choices about framing can cover up an issue or conjure up stereotypes to swing voters.

In the past two years, you may have noticed a new term entering the American political lexicon: the “deep state.” Used to refer to insider groups of political specialists (especially in government agencies like the FBI or in the media), “deep state” conjures up images of a shadowy network of political power brokers who operate outside of elected office. The term has really caught on—search data from Google Trends shows a huge spike in “deep state” searches since 2016.

Deep state talk catches my interest because I have heard it before. For years now, politicians in Turkey have raised allegations about secretive “deep state” organizations plotting to overthrow the government. While Turkey has had coups in the past, these kinds of accusations are also one way that leadership has been able to justify cracking down on political opposition. Sure enough, trends also show deep state searches spiked in Turkey about ten years before the US (I also added the global trend for context).

This data doesn’t show a direct connection between the Turkish and American cases. It does show us that new political ideas don’t necessarily spring out from nowhere. For example, work by sociologists like Chris Bail shows how ideas from the fringes of the political world can make their way into the mainstream, especially if they rely on emotionally-charged messaging. As political consulting and strategy goes global, it is important to pay attention to how these ideas play out in other times and places when we see them emerging in the United States.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianPetter Reinholdtsen: Privacy respecting health monitor / fitness tracker?

Dear lazyweb,

I wonder, is there a fitness tracker / health monitor available for sale today that respects the user's privacy? By this I mean a watch/bracelet capable of measuring pulse rate and other fitness/health related values (and by all means, also the correct time and location if possible), where the measurements are only available for me to extract/read from the unit with a computer, without a radio beacon and Internet connection. In other words, it does not depend on a cell phone app, and does not make the measurements available via other people's computers (aka "the cloud"). The collected data should be available using only free software. I'm not interested in depending on some non-free software that will leave me high and dry some time in the future. I've been unable to find any such unit. I would like to buy it. The ones I have seen for sale here in Norway are proud to report that they share my health data with strangers (aka "cloud enabled"). Is there an alternative? I'm not interested in giving money to people requiring me to accept "privacy terms" to allow myself to measure my own health.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

CryptogramMeasuring the Rationality of Security Decisions

Interesting research: "Dancing Pigs or Externalities? Measuring the Rationality of
Security Decisions
":

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R^2 = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.

Worse Than FailureCodeSOD: This Interview Doesn't Count

There are merits and disadvantages to including any sort of programming challenge in your interview process. The argument for something like a FizzBuzz challenge is that a surprising number of programmers can’t actually do that, and it weeds out the worst candidates and the liars.

Gareth was interviewing someone who purported to be a senior developer with loads of Java experience. As a standard part of their interview process, they do a little TDD-based exercise: “here’s a test, here’s how to run it, now write some code which passes the test.”

The candidate had no idea what to make of this exercise. After about 45 minutes which resulted in three lines of code (one of which was just a closing curly bracket) Gareth gave the candidate some mercy. Interviews are stressful, the candidate might not be comfortable with the tools, everybody has a bad brainfart from time to time. He offered a different, simpler task.

“Here’s some code which generates a list of numbers. I’d like you to write a method which finds the number which appears in the list most frequently.”

import java.io.*;
import java.util.*;
class Solution {
  public static void main(String[] args) {
    List<Integer> numbers = new Vector<Integer>();
    numbers.add(5);
    numbers.add(14);
    numbers.add(6);
    numbers.add(7);
    numbers.add(7);
    numbers.add(7);
    numbers.add(20);
    numbers.add(10);
    numbers.add(10);

    // find most common item
    for(Integer num : numbers){
      if(num == 5 ){
        int five += 1;
      }
      else if(num == 14 ) [
        int foue=rteen += !:
        }
   }
}

Gareth brought the interview to a close. After this, he didn’t want to spend another foue=rteen minutes trying to find a test the candidate could pass.
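
For the record, the exercise has a textbook answer; a minimal sketch (my code, not anything produced in the interview) is a single pass over the list with a count map:

  import java.util.*;

  class Solution {
    // Count occurrences in one pass, tracking the running winner.
    // Assumes the list is non-empty.
    static int mostFrequent(List<Integer> numbers) {
      Map<Integer, Integer> counts = new HashMap<>();
      int best = numbers.get(0), bestCount = 0;
      for (int num : numbers) {
        int count = counts.merge(num, 1, Integer::sum);
        if (count > bestCount) {
          bestCount = count;
          best = num;
        }
      }
      return best;
    }
  }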

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianPaul Wise: FLOSS Activities July 2018

Changes

Issues

Review

Administration

  • myrepos: merge patches, release
  • foxtrotgps: merge patch
  • whohas: merge pull request
  • fossjobs: forward some job advertisements
  • Debian: quiet buildd cron mail, redirect potential contributor, discuss backup hosts for some arches, discussions at DebConf18
  • Debian wiki: unblacklist networks, whitelist domains, whitelist email addresses, reject possibly inappropriate signup attempt
  • Debian website: remove lingering file

Communication

Sponsors

All work was done on a volunteer basis.

Planet Linux AustraliaDavid Rowe: Testing a RTL-SDR with FSK on HF

There’s a lot of discussion about ADC resolution and SDRs. I’m trying to develop a simple HF data system that uses RTL-SDRs in “Direct Sample” mode. This blog post describes how I measured the Minimum Detectable Signal (MDS) of my 100 bit/s 2FSK receiver, and a spreadsheet model of the receiver that explains my results.

Noise in a receiver comes from all sorts of places. There are two sources of concern for this project – HF band noise and ADC quantisation noise. On lower HF frequencies (7MHz and below) I’m guess-timating signals weaker than -100dBm will be swamped by HF band noise. So I’d like a receiver that has a MDS anywhere under that. The big question is, can we build such a receiver using a low cost SDR?

Experimental Set Up

So I hooked up the experimental setup in the figure below:

The photo shows the actual hardware. It was spaced apart a bit further for the actual test:

Rpitx is sending 2FSK at 100 bit/s and about 14dBm Tx power. It then gets attenuated by some fixed and variable attenuators to beneath -100dBm. I fed the signal into a RTL-SDR plugged into my laptop, demodulated the 2FSK signal, and measured the Bit Error Rate (BER).

I tried a command line receiver:

rtl_sdr -s 1200000 -f 7000000 -D 2 - | csdr convert_u8_f | csdr shift_addition_cc `python -c "print float(7000000-7177000)/1200000"` | csdr fir_decimate_cc 25 0.05 HAMMING | csdr bandpass_fir_fft_cc 0 0.1 0.05 | csdr realpart_cf | csdr convert_f_s16 | ~/codec2-dev/build_linux/src/fsk_demod 2 48000 100 - - | ~/codec2-dev/build_linux/src/fsk_put_test_bits -

and also gqrx, using this configuration:

with the very handy UDP output option sending samples to the FSK demodulator:

$ nc -ul 7355 | ./fsk_demod 2 48000 100 - - | ./fsk_put_test_bits -

Both versions demodulate the FSK signal and print the bit error rate in real time. I really love the csdr tools, and gqrx is also great for a more visual look at the signal and the ability to monitor the audio.

For these tests the gqrx receiver worked best. It attenuated nearby interferers better (i.e. better sideband rejection) and worked at lower Rx signal levels. It also has a “hardware AGC” option that I haven’t worked out how to enable in the command line tools. However for my target use case I’ll eventually need a command line version, so I’ll have to improve the command line version some time.

The RF Gods are smiling on me today. This experimental set up actually works better than previous bench tests where we needed to put the Tx in another room to get enough isolation. I can still get 10dB steps from the attenuator at -120dBm (ish) with the Tx a few feet from the Rx. It might be the ferrites on the cable to the switched attenuator.

I tested the ability to get solid 10dB steps using a CW (continuous sine wave) signal using the “test” utility in rpitx. FSK bounces about too much, especially with the narrow spectrum analyser settings I need to measure weak signals. The configuration of the Rigol DSA815 I used to see the weak signals is described at the end of this post on the SM2000.

The switched attenuator just has 10dB steps. I am getting zero bit errors at -115dBm, and the modem fell over on the next step (-125dBm). So the MDS is somewhere in between.

Model

This spreadsheet (click for the file) models the receiver:

By poking the RTL-SDR with my signal generator, and plotting the output waveforms, I worked out that it clips at around -30dBm (a respectable S9+40dB). So that’s the strongest signal it can handle, at least using the rtl_sdr command line options I can find. Even though it’s an 8 bit ADC I figure there are 7 magnitude bits (the samples are unsigned chars). So we get 6dB per bit or 42dB dynamic range.

This lets us work out the power of the quantisation noise (42dB beneath -30dBm). This noise power is effectively spread across the entire bandwidth of the ADC, a little bit of noise power for each Hz of bandwidth. The bandwidth is set by the sample rate of the RTL-SDR’s internal ADC (28.8 MHz). So now we can work out No (N-nought), the noise power per unit Hz of bandwidth. It’s like a normalised version of the receiver “noise floor”. An ADC with more bits would have less quantisation noise.

There follows some modem working which gives us an estimate of the MDS for the modem. The MDS of -117.6dBm is between my two measurements above, so we have a good agreement between this model and the experimental results. Cool!
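
If you want to check the numbers, the spreadsheet arithmetic is easy to reproduce. Here is a short Python sketch; note the 9dB Eb/No requirement is back-derived from the -146.6dBm/Hz and -117.6dBm figures rather than read from the spreadsheet itself:

import math

clip_dbm = -30.0           # strongest signal before the RTL-SDR clips
dynamic_range_db = 7 * 6   # 7 magnitude bits at ~6 dB per bit
fs_hz = 28.8e6             # internal ADC sample rate
bit_rate = 100             # 2FSK bit rate
ebno_db = 9.0              # assumed modem Eb/No requirement (back-derived)

noise_dbm = clip_dbm - dynamic_range_db          # total quantisation noise: -72 dBm
no_dbm_hz = noise_dbm - 10 * math.log10(fs_hz)   # about -146.6 dBm/Hz
mds_dbm = no_dbm_hz + 10 * math.log10(bit_rate) + ebno_db
print(round(no_dbm_hz, 1), round(mds_dbm, 1))    # -146.6 -117.6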

Falling through the Noise Floor

The “noise floor” depends on what you are trying to receive. If you are listening to wide 10kHz wide AM signal, you will be slurping up 10kHz of noise, and get a noise power of:

-146.6+10*log10(10000) = -106.6 dBm

So if you want that AM signal to have a SNR of 20dB, you need a received signal level of -86.6dBm to get over the quantisation noise of the receiver.

I’m trying to receive low bit rate FSK which can handle a lot more noise before it falls over, as indicated in the spreadsheet above. So it’s more robust to the quantisation noise and we can have a much lower MDS.

The “noise floor” is not some impenetrable barrier. It’s just a convention, and needs to be considered relative to the bandwidth and type of the signal you want to receive.

One area I get confused about is noise bandwidth. In the model above I assume the noise bandwidth is the same as the ADC sample rate. Please feel free to correct me if that assumption is wrong! With IQ signals we have to consider negative frequencies and complex to real conversions, which all affect noise power. I muddle through this when I play with modems but if anyone has a straightforward explanation of the noise bandwidth I’d love to hear it!

Blocking Tests

At the suggestion of Mark, I repeated the MDS tests with a strong CW interferer from my signal generator. I adjusted the Sig Gen and Rx levels until I could just detect the FSK signal. Here are the results, all in dBm:

Sig Gen (dBm)   2FSK Rx MDS (dBm)   Difference (dB)
-51             -116                65
-30             -96                 66

The FSK signal was at 7.177MHz. I tried the interferer at 7MHz (177 kHz away) and 7.170MHz (just 7 kHz away) with the same blocking results. I’m pretty impressed that the system can continue to work with a 65dB stronger signal just 7kHz away.

So the interferer desensitises the receiver somewhat. When listening to the signal on gqrx, I can hear the FSK signal get much weaker when I switch the Sig Gen on. However it often keeps demodulating just fine – FSK is not sensitive to amplitude.

I can also hear spurious tones appearing; the quantisation noise isn’t really white noise any more when a strong signal is present. Makes me feel like I still have a lot to learn about this SDR caper, especially direct sampling receivers!

As with the MDS results – my blocking results are likely to depend on the nature of the signal I am trying to receive. For example an SSB signal or a higher data rate might have different blocking results.

Still, 65dB rejection on a $27 radio (at least for my test modem signal) is not too shabby. I can get away with a S9+40dB (-30dBm) interferer just 7kHz away with my rx signal near the limits of where I want to detect (-96dBm).

Conclusions

So I figure for the lower HF bands this receiver’s performance is OK – the ADC quantisation noise isn’t likely to impact performance and the strong signal performance is good enough. An overload level of -30dBm (S9+40dB) is also acceptable given the use case is remote communications, where there is unlikely to be any nearby transmitters in the input filter passband.

The 100 bit/s signal is just a starting point. I can use that as a reference to help me understand how different modems and bit rates will perform. For example I can increase the bit rate to say 1000 bit/s 2FSK, increasing the MDS by 10dB, and still be well beneath my -100dBm MDS target. Good.

If it does fall over in the real world due to MDS performance, overload or blocking, I now have a good understanding of how it works, so it will be possible to engineer a solution.

For example a pre-amp with X dB gain would lower the quantisation noise power, referred to the antenna, by X dB and allow us to detect weaker signals, but then the Rx would overload at -30-X dBm. If we have strong signal problems but our target signal is also pretty strong, we can insert an attenuator. If we drop in another SDR I can recompute the quantisation noise from its specs, and estimate how well it will perform.
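As a toy sketch of that trade-off (the -117.6dBm MDS and -30dBm overload figures are the ones from this post, and the helper function is just illustrative):

def rx_window(gain_db, mds_dbm=-117.6, overload_dbm=-30.0):
    # gain_db > 0 models a pre-amp, gain_db < 0 an attenuator; both the MDS
    # and the overload level shift by the gain, referred to the antenna
    return mds_dbm - gain_db, overload_dbm - gain_db

print(rx_window(20))    # 20dB pre-amp: MDS -137.6dBm, overload -50dBm
print(rx_window(-10))   # 10dB attenuator: MDS -107.6dBm, overload -20dBm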

Reading Further

Rpitx and 2FSK, first part in this series.
Spreadsheet used to do the working for the quantisation noise.

,

Planet DebianAndreas Bombe: GHDL Back in Debian

As I have noted, I have been working on packaging the VHDL simulator GHDL for Debian after it dropped out of the archive a few years ago. This work has been on the slow burner for a while, and last week I used some time at DebConf 18 to finally push it to completion and upload it. The ftpmasters were also working fast, so yesterday the package got accepted and is now available from Debian unstable.

The package you get supports up to VHDL-93, which is entirely down to VHDL library issues. The libraries published by IEEE along with the VHDL standard are not free enough to be suitable for Debian main. Instead, the package uses the openieee libraries developed as part of GHDL, which are GPL’ed from-scratch implementations of the libraries required by the VHDL standard. Currently these only implement VHDL-87 and VHDL-93, hence the limitation.

I intend to package the IEEE libraries in a separate package that will go into non-free. The new license under which the libraries are distributed is frustratingly close to free except in the case of modifications, where only specific changes are allowed. No foreseeable problems for the non-free section though. This package should integrate itself into the GHDL package installations, so installing it will make the GHDL packages support VHDL-2008 — at least as far as GHDL itself supports VHDL-2008.

Planet DebianJonathan McDowell: DebConf18 writeup

I’m just back from DebConf18, which was held in Hsinchu, Taiwan. I went without any real concrete plans about what I wanted to work on - I had some options if I found myself at a loose end, but no preconceptions about what would pan out. In the end I felt I had a very productive conference and I did bits on all of the following:

  • Worked on trying to fix my corrupted Lenovo Ideacentre Stick 300 BIOS (testing of current attempt has been waiting until I’m back home and have access to the device again, so hopefully within the next few days)
  • NMUed sdcc to fix FTBFS with GCC 8
  • Prepared Pulseview upload to fix FTBFS with GCC 8, upload stalled on libsigc++2.0 (Bug#897895)
  • Caught up with Gunnar re keyring stuff
  • Convinced John Sullivan to come and help out keyring-maint
  • New Member / Front Desk conversations
  • Worked on gcc toolchain packages for ESP8266 (xtensa-lx106) (Bug#868895). Not sure if these are useful enough to others to upload or not, but so far I’ve moved from 4.8.5 to 7.3 and things seem happy.
  • Worked on porting latest newlib to xtensa with help from Keith Packard (in particular his nano variant with much smaller stdio code)
  • Helped present the state of Debian + the GDPR
  • Sat on the New Members BoF panel
  • Went to a whole bunch of interesting talks + BoFs.
  • Put faces to a number of names, as well as doing the usual catchup with the familiar faces.

I managed to catch the DebConf bug towards the end of the conference, which was unfortunate - I had been eating the venue food at the start of the week and it would have been nice to explore the options in Hsinchu itself for dinner, but a dodgy tummy makes that an unwise idea. Thanks to Stuart Prescott I squeezed in a short daytrip to Taipei yesterday as my flight was in the evening and I was going to have to miss all the closing sessions anyway. So at least I didn’t completely avoid seeing some of Taiwan when I was there.

As usual thanks to all the organisers for their hard work, and looking forward to DebConf19 in Curitiba, Brazil!

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #171

Here’s what happened in the Reproducible Builds effort between Sunday July 29 and Saturday August 4 2018:

Upstream work

Bernhard M. Wiedemann proposed toolchain patches to:

  • rpm to have determinism in the process of stripping debuginfo into separate packages
  • gzip to make tar -cz output reproducible on the gzip side. This might also help with compressed man-pages, and was merged by gzip upstream.

In addition, Bernhard M. Wiedemann worked on:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaGary Pendergast: WordPress’ Gutenberg: The Long View

WordPress has been around for 15 years. 31.5% of sites use it, and that figure continues to climb. We’re here for the long term, so we need to plan for the long term.

Gutenberg is being built as the base for the next 15 years of WordPress. The first phase, replacing the post editing screen with the new block editor, is getting close to completion. That’s not to say the block editor will stop iterating and improving with WordPress 5.0, rather, this is where we feel confident that we’ve created a foundation that we can build everything else upon.

Let’s chat about the long-term vision and benefit of the Gutenberg project. 🙂

As the WordPress community, we have an extraordinary opportunity to shape the future of web development. By drawing on the past experiences of WordPress, the boundless variety and creativity found in the WordPress ecosystem, and modern practices that we can adopt from many different places in the wider software world, we can create a future defined by its simplicity, its user friendliness, and its diversity.

If we’re looking to create this future, what are the key ingredients?

Interface unity. Today, the two primary methods of embedding structured data into a WordPress post are through shortcodes, and meta boxes. Both of these have advantages, and drawbacks. With shortcodes, authors can see exactly where the shortcode will be rendered, in relation to the rest of the content. With meta boxes, the site creator can ensure the author enters data correctly, and doesn’t get it out of order. Additionally, meta boxes allow storing data outside of post_content, making it easily queryable. With blocks, we can combine these strengths: blocks render where they’ll be in the final content, they can be configured to only allow certain data to be entered, and to save that data wherever you want it. With block templates, you can lock the post to only allow certain blocks, or to even lay out the blocks exactly as they need to be, ensuring the post is saved and rendered exactly as intended.

Platform agnosticism. There’s never been a nice way for plugins to provide custom UI for the WordPress mobile apps (or any other apps that talk to WordPress, for that matter) to render. Thanks to the magic of React Native, this is a very real possibility in the near future. The mobile team are working hard on compiling Gutenberg into the mobile apps and getting the core blocks working, which will guide the way for any sort of custom block to just… work. 🤯

Concept simplification. Even vanilla WordPress has masses of similar-but-subtly-different concepts to learn. Within the post, there are shortcodes, meta boxes, and embeds. Outside of that, we have menus and widgets (and widget areas, of course). The first phase of Gutenberg is focussed on the post, but ultimately, we can imagine a world where the entire site creation process is just blocks. Blocks that can fit together, that can be easily rearranged, and can each take care of important individual things, like their own responsive behaviour.

A common base. Gutenberg isn’t going to replace page builders, or custom field plugins like ACF. Instead, it gives them a common framework to build themselves upon. Instead of every page builder having to spend a huge amount of time maintaining their own framework, they can use the one that Gutenberg provides, and focus on providing the advanced functionality that their customers rely on.

A design language. I don’t know about y’all, but as a developer, I find it challenging to create quality interfaces in the WordPress of today. I’d really love if there was a simple library for me to refer to whenever I wanted to create something. Desktop and mobile environments have had this for decades, but the web is only just starting to catch on. The WordPress design team have some really interesting ideas on this that’ll help both core and plugin developers to put together high quality interfaces, quickly.

There are side benefits that come along for the ride, too. Encouraging client-side rendering gives a smoother UX. Using modern JS practices encourages a new generation of folks to start contributing to WordPress, helping ensure WordPress’ long term viability. Because it’s Open Source, anyone can use and adapt it. This benefits the Open Source world, and it also benefits you: you should never feel locked into using WordPress.


What’s next?

Naturally, there’s going to be a transition period. WordPress 5.0 is just the start, it’s going to take some time for everyone to adjust to this brave new world, there will be bugs to fix, kinks to iron out, flows to smooth. The tools that plugin and theme developers need are starting to appear, but there’s still a lot of work to be done. There’s a long tail of plugins that may never be updated to support Gutenberg, the folks using them need an upgrade route.

If you feel that your site or business isn’t quite ready to start this transition, please install the Classic Editor plugin now. Gutenberg is very much a long term project, I’m certainly not expecting everyone to jump on board overnight. Much like it took years for the customiser to get to the level of adoption it has now, every site, plugin, and agency will need to consider how they’re going to make the transition, and what kind of timeline they’re comfortable with.

Ultimately, the WordPress experience, community, and ecosystem will grow stronger through this evolution.

I’ve been working on WordPress for years, and I plan on doing it for many years to come. I want to help everyone make it through this transition smoothly, so we can keep building our free and open internet, together.

CryptogramHacking the McDonald's Monopoly Sweepstakes

Long and interesting story -- now two decades old -- of massive fraud perpetrated against the McDonald's Monopoly sweepstakes. The central fraudster was the person in charge of securing the winning tickets.

Planet DebianNiels Thykier: Buster is headed for a long hard freeze

We are getting better and better at accumulating RC bugs in testing. This is unfortunate because the length of the freeze is strongly correlated with the number of open RC bugs affecting testing. If you believe that Debian should have short freezes, then it will require putting effort behind that belief and fixing some RC bugs – even in packages that are not maintained directly by you or your team and especially in key packages.

The introduction of key packages has been interesting. On the plus side, we can use it to auto-remove RC buggy non-key packages from testing, which has been very helpful. On the flip-side, it also makes it painfully obvious that over 50% of all RC bugs in testing are now filed against key packages (for the lazy; we are talking about 475 RC bugs in testing filed against key packages; about 25 of these appear to be fixed in unstable).

Below are some observations from the list of RC bugs in key packages (affecting both testing and unstable – based on a glance over all of the titles).

  • About 85 RC bugs related to (now) defunct maintainer addresses caused by the shutdown of Alioth. From a quick glance, it appears that the Debian Xfce Maintainers has the largest backlog – maybe they could use another team member.  Note they are certainly not the only team with this issue.
  • Over 100 RC bugs are FTBFS for various reasons.  Some of these are related to transitions (e.g. new major versions of GCC, LLVM and OpenJDK).

Those three points alone account for 40% of the RC bugs affecting both testing and unstable.

We also have several contributors that want to remove unmaintained, obsolete or old versions of packages (older versions of compilers such as GCC and LLVM, flash-players/tooling, etc.).  If you are working on this kind of removal, please remember to follow through on it (even if it means NMUing packages).  The freeze is not the right time to remove obsolete key packages as it tends to involve non-trivial changes of features or produced binaries.  As much of this as possible ought to be fixed before 2019-01-12 (transition freeze).

 

In summary: If you want Debian Buster released in early 2019 or short Debian freezes in general, then put your effort where your wish/belief is and fix RC bugs today.  Props for fixes to FTBFS bugs, things that hold back transitions or keep old/unmaintained/unsupportable key packages in Buster (testing).

Worse Than FailureRepresentative Line: Constantly True

An anonymous reader had something to share.

"I came across this code in a 13,000 line file called Constants.cs."

public const string TRUE = "TRUE";
public const string True = "True";
public const string FALSE = "FALSE";

"I'm submitting my resignation letter on Monday."

There's not much more to say, really. Good job hunting.


,

Planet DebianBits from Debian: DebConf18 closes in Hsinchu and DebConf19 dates announced

DebConf18 group photo - click to enlarge

Today, Sunday 5 August 2018, the annual Debian Developers and Contributors Conference came to a close. With 306 people attending from all over the world, and 137 events including 100 talks, 25 discussion sessions or BoFs, 5 workshops and 7 other activities, DebConf18 has been hailed as a success.

Highlights included DebCamp with more than 90 participants, the Open Day, where events of interest to a broader audience were offered, plenaries like the traditional Bits from the DPL, a Questions and Answers session with Minister Audrey Tang, a panel discussion about "Ignoring negativity" with Bdale Garbee, Chris Lamb, Enrico Zini and Steve McIntyre, the talk "That's a free software issue!!" given by Molly de Blanc and Karen Sandler, lightning talks and live demos and the announcement of next year's DebConf (DebConf19 in Curitiba, Brazil).

The schedule has been updated every day, including 27 ad-hoc new activities, planned by attendees during the whole conference.

For those not able to attend, most talks and sessions were recorded and live streamed, and videos are being made available at the Debian meetings archive website. Many sessions also facilitated remote participation via IRC or a collaborative text document.

The DebConf18 website will remain active for archive purposes, and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf19 will be held in Curitiba, Brazil, from 21 July to 28 July, 2019. It will be the second DebConf held in Brazil (the first one was DebConf4 in Porto Alegre). For the days before DebConf the local organisers will again set up DebCamp (13 July – 19 July), a session for some intense work on improving the distribution, and organise the Open Day on 20 July 2019, open to the general public.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf18, particularly our Platinum Sponsor Hewlett Packard Enterprise.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (HPE) is an industry-leading technology company providing a comprehensive portfolio of products such as integrated systems, servers, storage, networking and software. The company offers consulting, operational support, financial services, and complete solutions for many different industries: mobile and IoT, data & analytics and the manufacturing or public sectors among others.

HPE is also a development partner of Debian, providing hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

Contact Information

For further information, please visit the DebConf18 web page at https://debconf18.debconf.org/ or send mail to press@debian.org.

Rondam RamblingsTrump digs in deeper

Today's tweet from the Trumpster fire is essentially a confession to violating federal law: This was a meeting to get information on an opponent, totally legal and done all the time in politics — and it went nowhere It's true that meetings to get information on opponents are common and routine, but not when the counterparty is a foreign agent.  Then it's a crime. It is illegal for a federal

Rondam RamblingsFitch's paradox

A while back I had a private exchange with @Luke about Fitch's paradox of knowability, which I think of more as a puzzle than a paradox.  The "paradox" is that if you accept the following four innocuous-seeming assumptions: 1.  If a proposition P is known, then P is true 2.  If the conjunction P&Q is known, then P is known and Q is known 3.  If P is true then it is possible to know P 4.  If ~

Don MartiHow many vendors are relying on legitimate interest for ad targeting?

ICYMI: Why the GDPR ‘legitimate interest’ provision will not save you by Johnny Ryan.

The “legitimate interest” provision in the GDPR will not save behavioral advertising and data brokers from the challenge of obtaining consent for personally identifiable data.

The obvious question is: how many of the vendors listed on the Global Vendor and CMP List are actually relying on LI for purposes of Ad selection, delivery, reporting? Worth writing a simple script to check. Looks like 151 of 409, or about 37%.

Purpose 3 is:

Ad selection, delivery, reporting: The collection of information, and combination with previously collected information, to select and deliver advertisements for you, and to measure the delivery and effectiveness of such advertisements. This includes using previously collected information about your interests to select ads, processing data about what advertisements were shown, how often they were shown, when and where they were shown, and whether you took any action related to the advertisement, including for example clicking an ad or making a purchase. This does not include personalisation, which is the collection and processing of information about your use of this service to subsequently personalise advertising and/or content for you in other contexts, such as websites or apps, over time.

And here's the list of vendors with a "3" in their legIntPurposeIds:

151 of 409 listed vendors claim LI for purpose: Ad selection, delivery, reporting
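For the curious, here’s roughly what such a script can look like in Python. The URL is my assumption based on where the consent framework’s vendor list has been published, and the top-level "vendors" key is likewise assumed; legIntPurposeIds is the field named above:

import json
import urllib.request

URL = "https://vendorlist.consensu.org/vendorlist.json"
AD_SELECTION = 3    # purpose id for "Ad selection, delivery, reporting"

with urllib.request.urlopen(URL) as response:
    vendors = json.load(response)["vendors"]

claiming_li = [v for v in vendors
               if AD_SELECTION in v.get("legIntPurposeIds", [])]
print("%d of %d listed vendors claim LI for purpose 3"
      % (len(claiming_li), len(vendors)))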

This is as of version 90 of the list, last updated 2 August.

Will be interesting to see if the number claiming a legitimate interest here goes up or down as people learn more about the applicable regulations.

Everything bad about Facebook is bad for the same reason

Mark Ritson: Targeting or mass marketing? The answer is both

A Malvertising Campaign of Secrets and Lies

Tech’s Fractal Irresponsibility Problem

The death of Don Draper

Facebook signs agreement saying it won’t let housing advertisers exclude users by race

A year after Charlottesville, why can't big tech delete white supremacists?

WTF is a GDPR consent string?

Brainwashing your wife to want sex? Here is adtech at its worst

We need a new model for tech journalism

Sam VargheseLions fail again, Crusaders romp home

That the Lions lost their third successive super rugby final — to the Crusaders for a second successive time — came as no surprise, for nobody really gave them much of a chance to take the trophy home. The bookies, always the best informed, had the Crusaders at a dollar and the Lions at eight dollars. The final score was 37-18.

But there were some indications that once again — as in 2016 and 2017 — coaching decisions had played a part in the defeat. One amazing stat that emerged during the final was that Lions fly-half Elton Jantjies had played every game of the season in its entirety. (He also played the entire 82 minutes of the final).

One has to wonder why coach Swys de Bruin put such a strain on the man. The super rugby season is always arduous and in recent years it has become even more of a strain as there is a break in June for international games to take place. This was devised as a way to give teams the chance to recover from injuries, but in reality a broken season like this is more of a strain than one that runs unbroken from start to finish. Much in the same way that it is easier to run an 800 metres race rather than two 400 metres races with a short break in-between.

Is there really no-one who can fill in at No 10 for the Lions for even 10 minutes of every game, so that Jantjies can recover his breath? What happens if he suffers a serious injury in round six or seven and is out for the season? It gives one food for thought and makes one wonder whether the occasional brain-fades which Jantjies displays are due to the excessive tiredness.

It must be borne in mind that Jantjies is also either the number one or number two fly-half in the national team — and there is a testing series of games coming up from this month until the end-of-season tours of the northern hemisphere are completed in November. How will the man cope with that?

If a team from South Africa plays in the super rugby final and has to travel, either to New Zealand or Australia, then the travel always takes its toll, much as the jet-lag always affects teams from Australia or New Zealand when they go to South Africa for a one-off game.

The Lions only landed in Christchurch on Wednesday (August 1) and had to play the final on August 4. Thus it is not surprising that one of Australian rugby’s brightest stars of the past, Mark Ella, mooted the idea of playing the final in a third country like Japan, where the spectacle would be better, with a bigger crowd, and also avoid the extensive travel.

Whether Sanzar, the organisation that organises the competition, would look kindly on such an idea remains to be seen. In one sense, it would be good, for the crowd that turns up would be much bigger; the stadium in Christchurch had a little under 20,000 people on Saturday. The South African and Australian grounds are much bigger but then very few finals are held in either of these countries. New Zealand has dominated the competition from the start.

To the match itself, the Crusaders had prepared well to counter the rolling maul which the Lions had used to good effect against the Waratahs in the semi-final the previous week. The tactic never got off the ground, with the Lions’ forwards being pushed back as soon as they got the ball and tried to line themselves up to begin the maul.

The Lions dominated in terms of territory and possession but the Crusaders put what little ball they enjoyed to very good use, with fly-half Richie Mo’unga having a blinder, with both his kicking and running being of a very high order. That Jantjies had the occasional brain-fade, kicking away possession needlessly, with one of his kicks to Mo’unga leading directly to a Crusaders’ try, did not help the Lions’ cause in any way.

The Crusaders made more than twice the tackles of their opponents and there were some truly heavy hits. But only one yellow card was dished out, to Crusaders centre Ryan Crotty for a cynical tactic of tackling from the wrong side. The Lions, by the way, are the team with the best disciplinary record in the tournament: a single yellow card.

Forwards Cyle Brink and Malcolm Marx scored for the losers, while winger Seta Tamanivalu, full-back David Havili, replacement scrum-half Mitchell Drummond and lock Scott Barrett went over for the Crusaders.

For those who watched the game on Sky Sports, it must be noted that a good part of the enjoyment comes from having a commentator like Grant Nisbett at the mike. Nisbett, who has now called more than 300 Tests, apart from God knows how many super rugby games, has not lost even a fraction of his deft way with words and it is a joy to listen to the man.

This was the Crusaders’ ninth title in the tournament’s 23rd year. The Blues have three titles, the Chiefs two and the Hurricanes and Highlanders one apiece. Of the Australian teams, the Brumbies have two and the Waratahs and Reds one each. And of the South African teams, the Bulls are the lone team to have tasted success with three wins.

Finally, referee Angus Gardner managed to keep his lectures to the minimum and interfered much less with the game than he normally does. One has to thank heavens for this, as he can really spoil a game when he is in schoolmaster mode.

,

Planet DebianThorsten Alteholz: My Debian Activities in July 2018

FTP master

This month was dominated by warm weather and I spent more time in a swimming pool than in the NEW queue. So I only accepted 149 packages and rejected 5 uploads. The overall number of packages that got accepted this month was 380.

Debian LTS

This was my forty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30.00h. During that time I did LTS uploads of:

    [DLA 1428-1] 389-ds-base security update for 5 CVEs
    [DLA 1430-1] taglib security update for one CVE
    [DLA 1433-1] openjpeg2 security update for two CVEs
    [DLA 1437-1] slurm-llnl security update for two CVEs
    [DLA 1438-1] opencv security update for 17 CVEs
    [DLA 1439-1] resiprocate security update for two CVEs
    [DLA 1444-1] vim-syntastic security update for one CVE
    [DLA 1451-1] wireshark security update for 7 CVEs

Further I started to work on libgit and fuse. Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the second ELTS month.

During my allocated time I uploaded:

  • ELA-23-1 for wireshark
  • ELA-24-1 for fuse

I also tried to work on qemu but had to confess that those CVEs are far beyond my capabilities. Luckily qemu is no longer on the list of supported packages for ELTS. As there seemed to be some scheduling difficulties I stepped in and did 1.5 weeks of frontdesk duties.

Other stuff

This month I uploaded new packages of

  • pywws, software to obtain data from some weather stations
  • osmo-msc, software from Osmocom

Further I continued to sponsor some glewlwyd packages for Nicolas Mora.

I also uploaded a new upstream version of …

I improved packaging of …

and fixed some bugs in …

The DOPOM (Debian Orphaned Package Of the Month) of this month has been sockstat. As there was a bug about IPv6 support and upstream doesn’t seem to be active anymore, I revived it on GitHub.

Planet DebianSune Vuorela: The smart button

Item {
    property string text;
    Text { ... }
    MouseArea {
        onClicked: {
            if (parent.text === "Quit") {
                Qt.quit()
            } else if (parent.text === "Start") {
                globalObject.start()
            } else if (parent.text === "Stop") {
                globalObject.stop()
            }
        }
    }
}

I don’t always understand why people do things in some ways.

Planet Debianintrigeri: Report from the AppArmor BoF at DebConf18

After a discussion started on debian-devel a year ago, AppArmor has been enabled by default in testing/sid since November 2017 as an experiment. We'll soon need to decide whether Buster ships with AppArmor by default or not. Clément Hermann and yours truly have hosted a BoF at DebConf18 in order to gather both subjective and factual data that can later be used to:

  1. draw conclusions from this experiment;
  2. identify problems we need to fix.

About 40 people attended this BoF; about half of them participated actively, which is better than I expected even though I think we can do better.

Opting-in or -out

We started with a show of hands:

  • Out of 7 attendees who run Debian Stretch on their main system, 3 have voluntarily enabled AppArmor.
  • Out of 15 attendees who run Debian testing/sid on their main system, 4 have voluntarily disabled AppArmor.
    → It would be interesting to understand why; if you're in this situation, let's talk!

Sticky notes party

We had a very dynamic collaborative sticky notes party aiming at gathering feeling and ideas, in a way that let us identify which ones were most commonly shared among the attendees.

Process

We asked the participants to write down their answers to the following questions on sticky notes (one idea per post-it):

  • How have you felt about your personal AppArmor experience so far?
  • How do you feel about the idea of keeping AppArmor enabled by default in the Debian Buster release?

Then we de-duplicated and categorized the resulting big pile of post-its together on a whiteboard. Finally, everyone got the chance to "+1" the four ideas/feelings they shared the most.

Output

If you're curious, here's what the whiteboard contained at the end.

Here are the conclusions I draw from this data:

  • A clear majority of the actively participating attendees have a generally positive feeling about AppArmor since it was enabled.
  • A clear majority of the actively participating attendees like the idea of keeping it enabled in Debian Buster. This is not very surprising coming from a small crowd of people who were interested enough to attend this BoF, but still.
  • Many attendees would like AppArmor to confine more software.
  • We need integration tests for AppArmor policy… just like we need integration tests for many other things in Debian.
  • We need at the very least better documentation (to explain how to use the existing policy debugging/development tools) and probably better integration in Debian (e.g. reportbug).
  • Regarding desktop apps sandboxing, the audience seemed to be split:
    • Those who were led to believe that AppArmor is, in itself, a great technology to sandbox desktop apps. I think that's a misunderstanding; I know I'm at least partly responsible for it and will do my best to fix it.
    • Those who echoed the concerns I had written on post-its myself about this strategy and communication problem.

I will update/file bug reports to reflect these conclusions.

Open discussion

Finally, we had an open discussion, half brainstorming ideas and half "ask me anything about AppArmor". For the curious, I've compiled the notes that were taken by Clément Hermann.

Meta

I want to thank:

  • Clément Hermann for co-hosting this session with me;
  • all attendees for playing the sticky notes party game — which was probably not what they expected when entering the room — and for their valuable input.

The feedback I got about the sticky notes party format was very positive: a few attendees told me it made them feel more part of the decision making process. Credits are due to Gunner for the inspiration!

If you attended this BoF and want to share your thoughts about how it went, I'm all ears → intrigeri@debian.org :)

,

Planet DebianDima Kogan: UNIX curiosities

Recently I've been doing more UNIXy things in various tools I'm writing, and I hit two interesting issues. Neither of these are "bugs", but behaviors that I wasn't expecting.

Thread-safe printf

I have a C application that reads some images from disk, does some processing, and writes output about these images to STDOUT. Pseudocode:

for(imagefilename in images)
{
    results = process(imagefilename);
    printf(results);
}

The processing is independent for each image, so naturally I want to distribute this processing between various CPUs to speed things up. I usually use fork(), so I wrote this:

for(child in children)
{
    pipe = create_pipe();
    worker(pipe);
}

// main parent process
for(imagefilename in images)
{
    write(pipe[i_image % N_children], imagefilename)
}

worker()
{
    while(1)
    {
        imagefilename = read(pipe);
        results = process(imagefilename);
        printf(results);
    }
}

This is the normal thing: I make pipes for IPC, and send the child workers image filenames through these pipes. Each worker could write its results back to the main process via another set of pipes, but that's a pain, so here each worker writes to the shared STDOUT directly. This works OK, but as one would expect, the writes to STDOUT clash, so the results for the various images end up interspersed. That's bad. I didn't feel like setting up my own locks, but fortunately GNU libc provides facilities for that: flockfile(). I put those in, and … it didn't work! Why? Because whatever flockfile() does internally ends up restricted to a single subprocess because of fork()'s copy-on-write behavior. I.e. the extra safety provided by fork() (compared to threads) actually ends up breaking the locks.

I haven't tried using other locking mechanisms (like pthread mutexes for instance), but I can imagine they'll have similar problems. And I want to keep things simple, so sending the output back to the parent for output is out of the question: this creates more work for both me the programmer, and for the computer running the program.

The solution: use threads instead of forks. This has a nice side effect of making the pipes redundant. Final pseudocode:

for(children)
{
    pthread_create(worker, child_index);
}
for(children)
{
    pthread_join(child);
}

worker(child_index)
{
    for(i_image = child_index; i_image < N_images; i_image += N_children)
    {
        results = process(images[i_image]);
        flockfile(stdout);
        printf(results);
        funlockfile(stdout);
    }
}

Much simpler, and actually works as desired. I guess sometimes threads are better.
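Just to show the pattern, here is a rough, runnable Python translation of that final pseudocode; this is my sketch, not the original C, and process() is a stand-in for the real per-image work:

import threading

def process(image):
    return "results for " + image

def worker(images, index, n_workers, lock):
    # each worker strides through the image list, like the C version
    for i in range(index, len(images), n_workers):
        results = process(images[i])
        with lock:              # plays the role of flockfile(stdout)
            print(results)

images = ["image%d.png" % i for i in range(10)]
n_workers = 4
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(images, k, n_workers, lock))
           for k in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()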

Passing a partly-read file to a child process

For various vnlog tools I needed to implement this sequence:

  1. process opens a file with O_CLOEXEC turned off
  2. process reads a part of this file (up-to the end of the legend in the case of vnlog)
  3. process calls exec to invoke another program to process the rest of the already-opened file

The second program may require a file name on the commandline instead of an already-opened file descriptor because this second program may be calling open() by itself. If I pass it the filename, this new program will re-open the file, and then start reading the file from the beginning, not from the location where the original program left off. It is important for my application that this does not happen, so passing the filename to the second program does not work.

So I really need to pass the already-open file descriptor somehow. I'm using Linux (other OSs may behave differently here), so I can in theory do this by passing /dev/fd/N instead of the filename. But it turns out this does not work either. On Linux (again, maybe this is Linux-specific somehow) for normal files /dev/fd/N is a symlink to the original file. So this ends up doing exactly the same thing that passing the filename does.

But there's a workaround! If we're reading a pipe instead of a file, then there's nothing to symlink to, and /dev/fd/N ends up passing the original pipe down to the second process, and things then work correctly. And I can fake this by changing the open("filename") above to something like popen("cat filename"). Yuck! Is this really the best we can do? What does this look like on one of the BSDs, say?
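Here's a small runnable Python sketch of the pipe trick; cat and wc -l stand in for the two programs, and data.txt is a hypothetical input file:

import os
import subprocess

read_fd, write_fd = os.pipe()
writer = subprocess.Popen(["cat", "data.txt"], stdout=write_fd)
os.close(write_fd)

# Unbuffered, so we consume exactly the bytes we parse and no more
with os.fdopen(read_fd, "rb", buffering=0) as f:
    header = f.readline()       # the part the first program reads itself
    # /dev/fd/N on a pipe is not a symlink back to a seekable file, so the
    # child picks up exactly where we left off
    subprocess.run(["wc", "-l", "/dev/fd/%d" % f.fileno()],
                   pass_fds=(f.fileno(),))
writer.wait()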

CryptogramFriday Squid Blogging: Calamari Squid Catching Prey

The calamari squid grabs prey three feet away with its fast tentacles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramThree of My Books Are Available in DRM-Free E-Book Format

Humble Bundle sells groups of e-books at ridiculously low prices, DRM free. This month, the bundles are all Wiley titles, including three of my books: Applied Cryptography, Secrets and Lies, and Cryptography Engineering. $15 gets you everything, and they're all DRM-free.

Even better, a portion of the proceeds goes to the EFF. As a board member, I've seen the other side of this. It's significant money.

Krebs on SecurityCredit Card Issuer TCM Bank Leaked Applicant Data for 16 Months

TCM Bank, a company that helps more than 750 small and community U.S. banks issue credit cards to their account holders, said a Web site misconfiguration exposed the names, addresses, dates of birth and Social Security numbers of thousands of people who applied for cards between early March 2017 and mid-July 2018.

TCM is a subsidiary of Washington, D.C.-based ICBA Bancard Inc., which helps community banks provide a credit card option to their customers using bank-branded cards.

In a letter being mailed to affected customers today, TCM said the information exposed was data that card applicants uploaded to a Web site managed by a third party vendor. TCM said it learned of the issue on July 16, 2018, and had the problem fixed by the following day.

Bruce Radke, an attorney working with TCM on its breach outreach efforts to customers, said fewer than 10,000 consumers who applied for cards were affected. Radke declined to name the third-party vendor, saying TCM was contractually prohibited from doing so.

“It was less than 25 percent of the applications we processed during the relevant time period that were potentially affected, and less than one percent of our cardholder base was affected here,” Radke said. “We’ve since confirmed the issue has been corrected, and we’re requiring the vendor to look at their technologies and procedures to detect and prevent similar issues going forward.”

ICBA Bancard is the payments subsidiary of the Independent Community Bankers of America, an organization representing more than 5,700 financial institutions that has been fairly vocal about holding retailers accountable for credit card breaches over the years. Last year, the ICBA sued Equifax over the big-three credit bureau’s massive data breach that exposed the Social Security numbers and other sensitive data on nearly 150 million Americans.

Many companies that experience a data breach or data leak are quick to place blame for the incident on a third-party that mishandled sensitive information. Sometimes this blame is entirely warranted, but more often such claims ring hollow in the ears of those affected — particularly when they come from banks and security providers. For example, identity theft protection provider LifeLock recently addressed a Web site misconfiguration that exposed the email addresses of millions of customers. LifeLock’s owner Symantec later said it fixed the flaw, which it blamed on a mistake by an unnamed third-party marketing partner.

Managing third-party risk can be challenging, especially for organizations with hundreds or thousands of partners (consider the Target breach, which began with an opportunistic malware compromise at a heating and air conditioning vendor). Nevertheless, organizations of all shapes and sizes need to be vigilant about making sure their partners are doing their part on security, lest third-party risk devolves into a first-party breach of customer trust.

Planet DebianLars Wirzenius: On requiring English in a free software project

This week's issue of LWN has a quote by Linus Torvalds on translating kernel messages to something else than English. He's against it:

Really. No translation. No design for translation. It's a nasty nasty rat-hole, and it's a pain for everybody.

There's another reason I fundamentally don't want translations for any kernel interfaces. If I get an error report, I want to be able to just "git grep" it. Translations make that basically impossible.

So the fact is, I want simple English interfaces. And people who have issues with that should just not use them. End of story. Use the existing error numbers if you want internationalization, and live with the fact that you only get the very limited error number.

I can understand Linus's point of view. The LWN readers are having a discussion about it, and one of the comments there provoked this blog post:

It somewhat bothers me that English, being the lingua franca of free software development, excludes a pretty huge part of the world from participation. I thought that for a significant part of the world, writing an English commit message has to be more difficult than writing code.

I can understand that point of view as well.

Here's my point of view:

  • It is entirely true that if a project requires English for communication within the project, it discriminates against those who don't know English well.

  • Not having a common language within a project, between those who contribute to the project, now and later, would pretty much destroy any hope of productive collaboration.

    If I have a free software project, and you ask me to merge something where commit messages are in Hindi, error messages in French, and code comments in Swahili, I'm not going to understand any of them. I won't merge what I don't understand.

    If I write my commit messages in Swedish, my code comments in Finnish, and my error messages by entering randomly chosen words from /usr/share/dict/words into search engine, and taking the page title of the fourteenth hit, then you're not going to understand anything either. You're unlikely to make any changes to my project.

    When Bo finds the project in 2038, and needs it to prevent the apocalypse from 32-bit timestamps overflowing, and can't understand the README, humanity is doomed.

    Thus, on balance, I'm OK with requiring the use of a single language for intra-project communication.

  • Users should not be presented with text in a language foreign to them. However, this raises a support issue, where a user may copy-paste an error message in their native language, and ask for help, but the developers don't understand the language, and don't even know what the error is. If they knew the error was "permission denied", they could tell the user to run the chmod command to fix the permissions. This is a dilemma.

    I've solved the dilemma by having a unique error code for each error message. If the user tells me "R12345X: Xscpft ulkacsx ggg: /etc/obnam.conf!" I can look up R12345X and see that the error is that /etc/obnam.conf is not in the expected file format.

    This could be improved by making the "parameters" for the error message easy to parse. Perhaps something like this:

    R12345X: Xscpft ulkacsx ggg! filename=/etc/obnam.conf

    Maintaining such error codes by hand would be quite tedious, of course. I invented a module for doing that. Each error message is represented by a class, and the class creates its own error code by taking its Python module and class name, and computing an MD5 of that. The first five hexadecimal digits are the code, and get surrounded by R and X to make it easier to grep. (A minimal sketch of this scheme is shown after this list.)

    (I don't know if something similar might be used for the Linux kernel.)

  • Humans and inter-human communication is difficult. In many cases, there is no solution that's good for everyone. But let's not give up.
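As promised above, here's a minimal Python sketch of the error-code scheme. The class names, message text and resulting code are made-up examples, not the actual module:

import hashlib

class StructuredError(Exception):
    msg = "something went wrong"

    def __init__(self, **kwargs):
        # derive a stable five-hex-digit code from the module and class name,
        # wrapped in R...X for easy grepping
        name = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        code = hashlib.md5(name.encode()).hexdigest()[:5].upper()
        details = " ".join("%s=%s" % (k, v) for k, v in kwargs.items())
        super().__init__("R%sX: %s! %s" % (code, self.msg, details))

class BadConfigFormat(StructuredError):
    msg = "configuration file is not in the expected format"

try:
    raise BadConfigFormat(filename="/etc/obnam.conf")
except StructuredError as e:
    print(e)    # something like: R1B2C3X: configuration file is not in the expected format! filename=/etc/obnam.conf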

CryptogramHow the US Military Can Better Keep Hackers

Interesting commentary:

The military is an impossible place for hackers thanks to antiquated career management, forced time away from technical positions, lack of mission, non-technical mid- and senior-level leadership, and staggering pay gaps, among other issues.

It is possible the military needs a cyber corps in the future, but by accelerating promotions, offering graduate school to newly commissioned officers, easing limited lateral entry for exceptional private-sector talent, and shortening the private/public pay gap, the military can better accommodate its most technical members now.

The model the author uses is military doctors.

LongNowNick Damiano Wins 10-Year Long Bet that The Large Hadron Collider Wouldn’t Destroy Earth

10 years ago, Joe Keane placed a Long Bet that the Large Hadron Collider would destroy Earth by 02018. He was challenged by Nick Damiano. The stakes were $1,000. If Damiano won, the winnings would go to Save the Children. If Keane won, the world would end, and the winnings would (theoretically) go to the National Rifle Association.

The Large Hadron Collider is the world’s largest and most powerful particle collider, and was launched in 02008, the year Keane placed his bet. Such awesome power, Keane reasoned, could bring about some unintended consequences. In placing the bet, Keane argued that “theoretical physics is incomplete regarding what processes can occur at such high energies. No one knows what is going to happen.”

Today, we’re pleased to announce that the Earth still exists, and that Damiano has won the bet.

LongNowGeorge P. Shultz on the Future of the International System

Changing demographics will do more to shape the future than the politics of Washington, says George P. Shultz. The politics of Washington is something Shultz knows well: now 97, Shultz served in four cabinet positions over two decades. Under Nixon, he served as the Secretary of Labor, the Director of Office of Management and Budget, and Secretary of the Treasury. He was also the Secretary of State under Reagan.

From George P. Shultz’s Long Now Seminar “Perspective.”

Worse Than FailureError'd: What the Truck?!

"I think I'll order the big-busted truck," writes Alicia.

 

Kevin D. writes, "Well, I guess when it comes to Microsoft and crashes, it's either go big or go home!"

 

"I was checking the status of a refund and learned that, in my specific case, time had gone negative," Dave L. wrote.

 

"Ok, Thunderbird, I know I'm an email packrat, but I really didn't think I had over 100 TB of junk stashed away!" Rich P. writes.

 

Mike S. wrote, "We usually complain about error dialogs without enough information. Then there's GIT."

 

"After installing an old PCI-e Serial card with accompanying driver, circa 2006, and then updating, the device shows as an 'unusable Parallel Port' in the list," writes Ralph E., "Oh, and the text in 'Gerätestatus' is German and means 'The device is working properly.'"

 


Planet DebianEnrico Zini: Multiple people

These are the notes from my DebConf18 talk

Slides are available in pdf and odp.

Abtract:

Starting from Debian, I have been for a long time part of various groups where diversity is accepted and valued, and it has been an invaluable supply of inspiration, allowing my identity to grow with unexpected freedom.

During the last year, I have been thinking passionately about things such as diversity, gender identity, sexual orientation, neurodiversity, and preserving identity in a group.

I would like to share some of those thoughts, and some of that passion.

Multiple people

"Debian is a relationship between multiple people", it (I) says at the entrance.

I grew up in a small town, being different was not allowed. Difference attracted "education", well-meaning concern, bullying.

Everyone knew everyone, there was no escape to a more fitting bubble, there was very little choice of bubbles.

I had to learn from a very young age the skill of making myself accepted by my group of peers.

It was an essential survival strategy.

Not being a part of the one group meant becoming a dangerous way of defining the identity of the group: "we are not like him". And one would face the consequences.

"Debian is a relationship between multiple people", it (I) says at the entrance.

Debian was one of the first opportunities for me to directly experience that.

Where I could begin to exist

Where I could experience the value and pleasure of diversity.

Including mine.

I am extremely grateful for that.

I love that.

This talk is also a big thank you to you all for that.

"Debian is a relationship between multiple people", it (I) says at the entrance.

Multiple people does not mean being all the same kind of person, all doing all the same kind of things.

Each of us is a different individual, each of us brings their unique creativity to Debian.

Classifying people

How would you describe a person?

There are binary definitions:

  • Good / Bad
  • Smart / Stupid
  • Pretty / Ugly
  • White / Not white
  • Rich / Poor
  • From here / immigrant
  • Not nerd / Nerd
  • Straight / Gay
  • Cis / Trans
  • Polite / Impolite
  • Right handed / left handed (some kids are still being "corrected" for being left handed in Italy)
  • Like me / unlike me

Labels: (like package sections)

  • Straight
  • Gay
  • Bi
  • Cis
  • Trans
  • Caucasian
  • ...
  • Like me / unlike me

Spectra: (like debtags)

  • The gender spectrum
  • The sexual preference spectrum
  • The romantic preference spectrum
  • The neurodiversity spectrum
  • The skin color spectrum
  • The sexual attraction spectrum

We classify packages better than we classify people.

Identity / spectrums

I'm going to show a few examples of spectra; I chose them not because they are more or less important than others, but because they have recently been particularly relevant to me, and it's easier for me to talk about them.

If you wonder where you are in each spectrum, know that every place is ok.

Think about who you are, not about who you should be.

Gender identity

My non binary awareness began with d-w and gender neutral documentation.

Sexual orientation

https://en.wikipedia.org/wiki/Human_sexuality_spectrum

table of sexual preference prefixes combinations

Neurodiversity

I'll introduce neurodiversity by introducing allism

An allistic person learns subconsciously that ey is dependent on others for eir emotional experience. Consequently, ey tends to develop the habit of manipulating the form and content of social interactions in order to elicit from others expressions of emotion that ey will find pleasing when incorporated into eir mind.

https://fysh.org/~zefram/allism/allism_intro.txt

The more I reason about this (and I reasoned about this a lot, before, during and after therapy), the more I consider it a very rational adaptation, derived from a very clear message I received since I was a small child: nobody cared who I was, and to be accepted socially I needed to act a social part, which changed from group to group. Therefore, socially, I was wrong, and I had to work to deserve the right to exist.

What one usually sees of me in large groups or when out of comfort zone, is a social mask of me.

This paper is also interesting: analyzing tweets of people and their social circle, they got to the point of being able to predict what a person will write by running statistics only on what their friends are writing.

Is it measuring identity or social conformance?

Discussion about the autism spectrum tends to get very medical very fast, and the DSM often gets criticised for a tendency to turn diversity into mental health issues.

I stick to my experience from a different end of the spectrum, and there are various resources online to explore if you are interested in more.

Other spectra

I hope you get the idea about spectrum and identity.

There are many more, those were at the top of my head because of my recent experiences.

Skin color, age, wealth, disability, language proficiency, ...

How to deal with diversity

How to deal with my diversity

Let's all assume for a moment that each and every single one of us is ok.

I am ok.

You are ok.

You've been ok since the moment you were born.

Being born is all you need to deserve to exist.

You are ok, and you will always be ok.

Like every single person alive.

I'm ok.

You're ok.

We're all ok.

Hold on to that thought for the next 5 minutes. Hold onto it for the rest of your life.

Ok. A lot of problems are now solved. A lot of energy is not wasted anymore. What next?

Get to know myself

Awareness:

  • what do I like / what don't I like?
  • what am I interested in?
  • what would I like to do?
  • what do I know? What would I like to know?
  • what do I feel?
  • what do I need?

Get in touch with my feelings, get to know my needs.

Here's a simple algorithm to get to know your feelings and needs:

  1. If you are happy, take this phrase: I feel … because my need of … is being met
  2. If you are not happy, take this phrase: I feel … because my need of … is not being met
  3. Fill the first space with one of the words from here
  4. Fill the second space with one of the words from here
  5. Done!

To know more about Non-Violent Communication, I suggest this video

This other video I also liked.

Forget absolute truths, center on my own experience. Have a look here for more details.

Learn to communicate and express myself

Communicating/being oneself

  • enjoy what I like
  • pursue my interests
  • do what I want to do
  • study and practice what I'm interested in
  • let myself be known and seen by those who respect who I am

Find out where to be myself

Look for safe spaces where I can activate parts of myself

  • Friends
  • Partners (but not just partners)
  • Interest groups
  • Courses / classes
  • Debian
  • DebConf!

Learn to protect myself

I will make mistakes acting as myself:

  • Mistakes do not invalidate me; mistakes are opportunities for learning.
  • I need to make a distinction between "I did something wrong" and "I am wrong"

Learn to know my boundaries

Learn to recognise when they are being crossed

Negotiate

Use my anger to protect my integrity. I do not need to hurt others to protect myself

How to deal with the diversity of others

Diversity is a good thing

Once I realised I can be ok in my diversity, it was easier to appreciate the diversity of others

Opening to others doesn't need to sacrifice oneself.

  • I can embrace my own identity without denying the identity of others.
  • Affirming me does not imply destroying you
  • If I feel I'm right, it doesn't mean that you are wrong

Curiosity is a good default.

Do not assume. Assume better and I'll be disappointed. Assume worse and I'll miss good interactions

  • Listen
  • Connect, don't be creepy
  • Interact with people, not things
  • People are unique like me
  • Respect others and myself
  • listen to my red flags
  • choose my involvement
  • choose again

When facing the new and unsettling, use curiosity if I have the energy, or be aware that I don't, and take a step back

The goal of the game is to affirm all identities, especially oneself.

Love freely.

Expect nothing.

Liberate myself from imagined expectations.

YKINMKBYKIOK (Your Kink Is Not My Kink, But Your Kink Is OK).

What is not acceptable

https://en.wikipedia.org/wiki/Paradox_of_tolerance

The paradox of tolerance, as a comic strip

Less well known is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. — In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.

Use diversity for growth

Identifying where I am gives me more awareness about myself.

Identifying where I am shows me steps I might be interested in making.

Identity can change, evolve, move. I like the idea of talking about my identity in the past tense

Diversity as empowerment, spectrum as empowerment

  • I'm in the trans* spectrum at least as far as not needing to follow gender expectations, possibly more
  • I'm in the autism spectrum at least as far as not needing to follow social expectations, possibly more
  • I'm in the asexual spectrum at least as far as not seeing people as sexual objects, possibly more

Once I'm in, I'm free to move; I can reason about myself, see other possibilities

Take control of your narrative: what is your narrative? Do you like it? Does it tell you now what you're going to like next year, or in 5 years? Is it a problem if it does?

Conceptual space is not limited. Allocating mental space for new diversity doesn't reduce one's own mental space, but expands it

Is someone trying to control your narrative? Gaslighting, negging, patronising.

Debian and diversity

Impostor syndrome

Entering a new group: impostor syndrome. Am I good enough for this group?

Expectations, perceived expectations, perceived changes in perceived identity, perceived requirements on identity

I worked for some months with a therapist to deal with that and, as it turned out, to learn to give up the need to work to belong.

In the end, it was all there in the Diversity Statement:

No matter how I identify myself or how others perceive me: I am welcome, as long as I interact constructively with my community.

Ability of the group to grow, evolve, change, adapt, create

And here, according to Trout, was the reason human beings could not reject ideas because they were bad: “Ideas on Earth were badges of friendship or enmity. Their content did not matter. Friends agreed with friends, in order to express friendliness. Enemies disagreed with enemies, in order to express enmity.

“The ideas Earthlings held didn’t matter for hundreds of thousands of years, since they couldn’t do much about them anyway. Ideas might as well be badges as anything.”

(Kurt Vonnegut, "Breakfast of Champions", 1973)

Keep one's identity in Debian

If your identity is your identity, and the group changes, it actually extends, because you keep being who you are.

If your identity is a function of the group identity, you become a control freak for where the group is going.

When people define their identity in terms of belonging to a group, that group cannot change anymore, because if it does, it faces resistance from members who see their own perceived identity under threat.

The threat is that rituals or practices that validated my existence, and that previously used to work, cease to function. systemd?

  • can I adapt when facing something new and unexpected?
  • do I have the energy to do it?
  • do I allow myself to ask for help?

Free software

Us, and our users, we are a diverse ecosystem

Free Software is a diverse ecosystem

Free software can be a spectrum (free hardware, free firmware, free software, free javascript in browsers...)

Vision

Debian exists, and can move in a diverse and constantly changing upstream ecosystem

Vision / not limiting the future of Debian (if your narrative tells you what you're going to like next year, you might have a problem) (but before next year I'd like to get to a point where I can cope with X)

Debian doesn't need to be what people need to define their own identity, but it is defined by the relationship between different, diverse, evolving people

Appreciate diversity, because there's always something you don't know / don't understand, and more in the future.

Nobody can know all of Debian now, and in the future, if we're successful, we're going to get even bigger and more complex.

We're technically complex and diverse, we're socially complex and diverse. We got to learn to deal with that.

Because we're awesome. We got to learn to deal with that.

Ode to the diversity statement

https://www.debian.org/intro/diversity

,

Krebs on SecurityThe Year Targeted Phishing Went Mainstream

A story published here on July 12 about a new sextortion-based phishing scheme that invokes a real password used by each recipient has become the most-read piece on KrebsOnSecurity since this site launched in 2009. And with good reason — sex sells (the second most-read piece here was my 2015 scoop about the Ashley Madison hack).

But beneath the lurid allure of both stories lies a more unsettling reality: It has never been easier for scam artists to launch convincing, targeted phishing and extortion scams that are automated on a global scale. And given the sheer volume of hacked and stolen personal data now available online, it seems almost certain we will soon witness many variations on these phishing campaigns that leverage customized data elements to enhance their effectiveness.

The sextortion scheme that emerged this month falsely claims to have been sent from a hacker who’s compromised your computer and used your webcam to record a video of you while you were watching porn. The missive threatens to release the video to all your contacts unless you pay a Bitcoin ransom.

What spooked people most about this scam was that its salutation included a password that each recipient legitimately used at some point online. Like most phishing attacks, the sextortion scheme that went viral this month requires just a handful of recipients to fall victim for the entire scheme to be profitable.

From reviewing the Bitcoin addresses readers shared in the comments on that July 12 sextortion story, it is clear this scam tricked dozens of people into paying anywhere from a few hundred to thousands of dollars in Bitcoin. All told, those addresses received close to $100,000 in payments over the past two weeks.

And that is just from examining the Bitcoin addresses posted here; the total financial haul from different versions of this attack is likely far higher. A more comprehensive review by the Twitter user @SecGuru_OTX and posted to Pastebin suggests that as of July 26 there were more than 300 Bitcoin addresses used to con at least 150 victims out of a total of 30 Bitcoins, or approximately $250,000.

There are several interesting takeaways from this phishing campaign. The first is that it effectively inverted a familiar threat model: Most phishing campaigns try to steal your password, whereas this one leads with it.

A key component of a targeted phishing attack is personalization. And purloined passwords are an evergreen lure because your average Internet user hasn’t the slightest inkling of just how many of their passwords have been breached, leaked, lost or stolen over the years.

This was evidenced by the number of commenters here who acknowledged that the password included in the extortion email was one they were still using, with some even admitting they were using the password at multiple sites! 

Surprisingly, none of the sextortion emails appeared to include a Web site link of any kind. But consider how effective this “I’ve got your password” scam would be at enticing a fair number of recipients into clicking on one.

In such a scenario, the attacker might configure the link to lead to an “exploit kit,” crimeware designed to be stitched into hacked or malicious sites that exploits a variety of Web-browser vulnerabilities for the purposes of installing malware of the attacker’s choosing.

Also, most of the passwords referenced in the sextortion campaign appear to have been slurped from data breaches that are now several years old. For example, many readers reported that the password they received was the one compromised in LinkedIn’s massive 2012 data breach.

Now imagine how much more convincing such a campaign would be if it leveraged a fresh password breach — perhaps one that the breached company wasn’t even aware of yet.

There are many other data elements that could be embedded in extortion emails to make them more believable, particularly with regard to freshly-hacked databases. For example, it is common for user password databases that are stolen from hacked companies to include the Internet Protocol (IP) addresses used by each user upon registering their account.

This could be useful for phishers because there are many automated “geo-IP” services that try to determine the geographical location of Website visitors based on their Internet addresses.

Some of these services allow users to upload large lists of IP addresses and generate links that plot each address on Google Maps. Suddenly, the phishing email not only includes a password you are currently using, but it also bundles a Google Street View map of your neighborhood!
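
As a rough illustration of how little effort that would take, here is a minimal sketch in Python. It assumes the public ipinfo.io service, which returns JSON geodata for an IP address; any similar geo-IP API would work the same way:

    import requests

    def coarse_location(ip):
        # Query a public geo-IP service; the JSON response includes a "loc"
        # field with an approximate "latitude,longitude" for the address.
        resp = requests.get(f"https://ipinfo.io/{ip}/json", timeout=10)
        resp.raise_for_status()
        data = resp.json()
        return data.get("city"), data.get("region"), data.get("loc")

    # A breached registration IP becomes a neighbourhood-level map link
    # of exactly the kind that could be embedded in a phishing email.
    city, region, loc = coarse_location("8.8.8.8")
    print(city, region, f"https://www.google.com/maps?q={loc}")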

There are countless other ways these schemes could become far more personalized and terrifying — all in an automated fashion. The point is that automated, semi-targeted phishing campaigns are likely here to stay.

Here are some tips to help avoid falling prey to these increasingly sophisticated phishing schemes:

Avoid clicking on links and attachments in email, even in messages that appear to be sent from someone you know.

Urgency should be a giant red flag. Most phishing scams invoke a temporal element that warns of dire consequences should you fail to respond or act quickly. Take a deep breath. If you’re unsure whether the message is legitimate, visit the site or service in question manually (ideally, using a browser bookmark so as to avoid potential typosquatting sites).

Don’t re-use passwords. If you’re the kind of person who likes to use the same password across multiple sites, then you definitely need to be using a password manager. That’s because password managers handle the tedious task of creating and remembering unique, complex passwords on your behalf; all you need to do is remember a single, strong master password or passphrase. In essence, you effectively get to use the same password across all Web sites.

Some of the more popular password managers include Dashlane, Keepass, and LastPass. [Side note: Using unique passwords at each site also can provide a strong clue about which Web site likely got breached in the event that said password shows up in one of these targeted phishing attacks going forward].

Do not respond to spam or phishing emails. Several readers reported sending virtual nastygrams back to their would-be sextortionists. Please resist any temptation to reply. In all likelihood, the only thing a reply will accomplish is letting the attackers know they have a live one on the hook, and ensuring that your email address will receive even more scams and spams in the future.

Don’t pay off extortionists. For the same reason that replying to spammers is a bad idea, rewarding extortionists only serves to further the victimization of yourself and others. Also, even if someone really does have the goods on you, there is no way that you as the victim can be sure that paying makes the threat go away.

CryptogramUsing In-Game Purchases to Launder Money

Evidence that stolen credit cards are being used to purchase items in games like Clash of Clans, which are then resold for cash.

Worse Than FailureCodeSOD: The Mike Test

The Joel Test is about to turn 18 this year. Folks have attempted to “update” it, but even after graduating high school, the test remains a good starting point for identifying a “good” team.

Mike was impressed to discover a PHP script which manages to fail a number of points on the Joel Test in only 8 lines.

user@devcomputer:~/codearchive$ head -8 check-tank-balOLD.php
<?
// ONLY ENABLE WHEN READY, THIS IS AN AUTOMATED BATCH SCRIPT, IT WILL, I REPEAT, WILL PROCESS REAL LIVE CUSTOMER
//
// DISABLE IS NEEDED BY CHANGING TO 0
$ENABLED=1;
// END OF SCRIPT STATUS VARIABLE
//$testing=1; // *******************************************************for testing.

Before we even look at the code, we know that they’re not using source control, at least not properly. check-tank-balOLD.php tells us that much. They can’t make a release in one step, as depending on the release mode, they’d need to change the $ENABLED flag (and from the comments, it’s impossible to know for sure what exactly setting those flags does). This also means no usable daily builds are being made. No, no one’s fixed bugs before writing new code, and apparently old code just lingers around anyway. Are they using the best tools money can buy? Clearly not, and I’m not just saying that to pick on PHP. They don’t seem to be using any sort of meaningful tooling.

Do programmers have quiet working conditions? No, I don’t imagine so, because I imagine in this shop, there’s a constant stream of profanity being shouted.

In conclusion, allow me to propose the “Mike Test”: the number of statements or lines of code you need to see from an organization before you know it’s performing miserably on the Joel Test.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityReddit Breach Highlights Limits of SMS-Based Authentication

Reddit.com today disclosed that a data breach exposed some internal data, as well as email addresses and passwords for some Reddit users. As Web site breaches go, this one doesn’t seem too severe. What’s interesting about the incident is that it showcases once again why relying on mobile text messages (SMS) for two-factor authentication (2FA) can lull companies and end users into a false sense of security.

In a post to Reddit, the social news aggregation platform said it learned on June 19 that between June 14 and 18 an attacker compromised several employee accounts at its cloud and source code hosting providers.

Reddit said the exposed data included internal source code as well as email addresses and obfuscated passwords for all Reddit users who registered accounts on the site prior to May 2007. The incident also exposed the email addresses of some users who had signed up to receive daily email digests of specific discussion threads.

Of particular note is that although the Reddit employee accounts tied to the breach were protected by SMS-based two-factor authentication, the intruder(s) managed to intercept that second factor.

“Already having our primary access points for code and infrastructure behind strong authentication requiring two factor authentication (2FA), we learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept,” Reddit disclosed. “We point this out to encourage everyone here to move to token-based 2FA.”

Reddit didn’t specify how the SMS code was stolen, although it did say the intruders did not hack Reddit employees’ phones directly. Nevertheless, there are a variety of well established ways that attackers can intercept one-time codes sent via text message.

In one common scenario, known as a SIM-swap, the attacker masquerading as the target tricks the target’s mobile provider into tying the customer’s service to a new SIM card that the bad guys control. A SIM card is the tiny, removable chip in a mobile device that allows it to connect to the provider’s network. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

Another typical scheme involves mobile number port-out scams, wherein the attacker impersonates a customer and requests that the customer’s mobile number be transferred to another mobile network provider. In both port-out and SIM swap schemes, the victim’s phone service gets shut off and any one-time codes delivered by SMS (or automated phone call) get sent to a device that the attackers control.

APP-BASED AUTHENTICATION

A more secure alternative to SMS involves the use of a mobile app — such as Google Authenticator or Authy — to generate the one-time code that needs to be entered in addition to a password. This method is also sometimes referred to as a “time-based one-time password,” or TOTP. It’s more secure than SMS simply because the attacker in that case would need to steal your mobile device or somehow infect it with malware in order to gain access to that one-time code. More importantly, app-based two-factor removes your mobile provider from the login process entirely.
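
To make the mechanics concrete, here is a minimal sketch using the pyotp Python library (the secret below is generated on the spot purely for illustration; in practice the site generates it once and you enroll it in your app, typically via a QR code):

    import pyotp

    # At enrollment time, the site and the authenticator app share a
    # base32 secret.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # The app derives a 6-digit code from the secret and the current
    # 30-second time window; the server performs the same derivation.
    code = totp.now()
    print("one-time code:", code)

    # Server side: accept the login only if the submitted code matches
    # the current window.
    assert totp.verify(code)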

Fundamentally, two-factor authentication involves combining something you know (the password) with either something you have (a device) or something you are (a biometric component, for example). The core idea behind 2FA is that even if thieves manage to phish or steal your password, they still cannot log in to your account unless they also hack or possess that second factor.

Technically, 2FA via mobile apps and other TOTP-based methods are more accurately described as “two-step authentication” because the second factor is supplied via the same method as the first factor. For example, even though the second factor may be generated by a mobile-based app, that one-time code needs to be entered into the same login page on a Web site along with user’s password — meaning both the password and the one-time code can still be subverted by phishing, man-in-the-middle and credential replay attacks.

SECURITY KEYS

Probably the most secure form of 2FA available involves the use of hardware-based security keys. These inexpensive USB-based devices allow users to complete the login process simply by inserting the device and pressing a button. After a key is enrolled for 2FA at a particular site that supports keys, the user no longer needs to enter their password (unless they try to log in from a new device). The key works without the need for any special software drivers, and the user never has access to the code — so they can’t give it or otherwise leak it to an attacker.

The one limiting factor with security keys is that relatively few Web sites currently allow users to use them. Some of the most popular sites that do accept security keys include Dropbox, Facebook and Github, as well as Google’s various services.

Last week, KrebsOnSecurity reported that Google now requires all of its 85,000+ employees to use security keys for 2FA, and that it has had no confirmed reports of employee account takeovers since the company began requiring them at the beginning of 2017.

The most popular maker of security keys — Yubico — sells the basic model for $20, with more expensive versions that are made to work with mobile devices. The keys are available directly from Yubico, or via Amazon.com. Yubico also includes a running list of sites that currently support keys for authentication.

If you’re interested in migrating to security keys for authentication, it’s a good idea to purchase at least two of these devices. Virtually all sites that I have seen which allow authentication via security keys allow users to enroll multiple keys for authentication, in case one of the keys is lost or misplaced.

I would encourage all readers to pay a visit to twofactorauth.org, and to take full advantage of the most secure 2FA option available for any site you frequent. Unfortunately many sites do not support any kind of 2-factor authentication — let alone methods that go beyond SMS or a one-time code that gets read to you via an automated phone call. In addition, some sites that do support more robust, app- or key-based two-factor authentication still allow customers to receive SMS-based codes as a fallback method.

If the only 2FA options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password. But it’s high time that popular Web sites of all stripes start giving their users more robust authentication options like TOTP and security keys. Many companies can be nudged in that direction if enough users start demanding it, so consider using any presence and influence you may have on social media platforms to make your voice heard on this important issue.

,

CryptogramGCHQ on Quantum Key Distribution

The UK's GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms -- such as digital signatures -- than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It's a clever idea, but basically useless in practice. I don't even think it's anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It's short.

Planet Linux AustraliaFrancois Marier: Mercurial commit series in Phabricator using Arcanist

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've set up arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2484

Included changes:
  M       modules/libpref/init/all.js
  M       netwerk/base/nsChannelClassifier.cpp
  M       netwerk/base/nsChannelClassifier.h
  M       toolkit/components/url-classifier/Classifier.cpp
  M       toolkit/components/url-classifier/SafeBrowsing.jsm
  M       toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       xpcom/base/ErrorList.py

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2486

Included changes:
  M       toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
  M       toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason the third patch has a different commit hash is that its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       browser/base/content/test/general/trackingPage.html
  M       netwerk/test/unit/test_trackingProtection_annotateChannels.js
  M       toolkit/components/antitracking/test/browser/browser_imageCache.js
  M       toolkit/components/antitracking/test/browser/browser_subResources.js
  M       toolkit/components/antitracking/test/browser/head.js
  M       toolkit/components/antitracking/test/browser/popup.html
  M       toolkit/components/antitracking/test/browser/tracker.js
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

CryptogramBackdoors in Cisco Routers

We don't know if this is error or deliberate action, but five backdoors have been discovered already this year.

Worse Than FailureCodeSOD: Fortran the Undying

There are certain languages which are still in use, are still changing and maturing, and yet are also frozen in time. Fortran is a perfect example: over the past 40–60 years, huge piles of code, mostly for scientific and engineering applications, were written. It may be hard to believe, but modern Fortran supports object-oriented programming and has a focus on concurrency.

Most of the people using Fortran, it seems, learned it in the 70s. And no matter what happens to the language, they still write code like it’s the 70s. Fortran’s own seeming immortality has imbued its users with necromantic energy, turning them into undying, and unchanging Liches.

Which brings us to Greg. Greg works with an engineer. This engineer, the Dark Lich Thraubagh, has a personal “spellbook” containing snippets of Fortran they’ve picked up over the past 40 years. For example, there’s this block, which creates an array of every number from 1–999, padded out to three characters.

I don’t know Fortran, so I give credit to this engineer/lich for writing code where even I can understand why it’s wrong and bad.

      character*3 c(999)
      character*1 b(9)
      b(0)='0'
      b(1)='1'
      b(2)='2'
      b(3)='3'
      b(4)='4'
      b(5)='5'
      b(6)='6'
      b(7)='7'
      b(8)='8'
      b(9)='9'
      
      do 1 i=1,999
      
      c(i)(3:3)=b( int( 10*((i/10.0)-int(i/10.0) ) ) )
      c(i)(2:2)=b( int( 10*((i/100.0)-int(i/100.0) ) ) )
      c(i)(1:1)=b( int( 10*((i/1000.0)-int(i/1000.0) ) ) )
      
1     continue

Start by creating a 999-element array of 3-character strings. Then create an array of 1-character strings holding the digits 0–9 (note that b is declared with only 9 elements, so the b(0) assignment is already out of bounds).

Then we loop, and populate the strings in the larger array by extracting each decimal digit with floating-point arithmetic and using it to select from the digit array.

Fun fact: even in Fortran 77, there were a number of formatting options. If, for example, you wanted to put a number into a zero-padded string, you could do write(string_var,'(I3.3)') 99. string_var now contains "099". Maybe not the clearest thing, but much simpler, certainly.
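
For comparison, and purely as an aside, the whole 999-entry zero-padded table is a one-liner in most modern languages; a Python sketch:

    # Every number from 1 to 999, zero-padded to three characters.
    c = [f"{i:03d}" for i in range(1, 1000)]
    print(c[0], c[-1])   # prints: 001 999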

Funner fact: the c array was basically not used anywhere, in the particular program which caused Greg to send this along. The Dark Lich Thraubagh simply copies their “spellbook” into every program they write, gradually adding new methods to the spellbook, but never updating or modifying the old. They’ve been doing this for 40 years, and plan to do it for at least a century more.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet Linux AustraliaSimon Lyall: Audiobooks – July 2018

The Return of Sherlock Holmes by Sir Arthur Conan Doyle

I switched to Stephen Fry for this collection. Very happy with his reading of the stories. He does both standard and “character” voices well and is not distracting. 8/10

Roughing It by Mark Twain

A bunch of anecdotes and stories from Twain’s travels in Nevada & other areas in the American West. Quality varies. Much good but some stories fall flat. Verbose writing (as was the style at the time…) 6/10

Asteroid Hunters by Carrie Nugent

Spin off of a Ted talk. Covers hunting for Asteroids (by the author and others) rather than the Asteroids themselves. Nice level of info in a short (2h 14m) book. 7/10

Things You Should Already Know About Dating, You F*cking Idiot by Ben Schwartz & Laura Moses

100 dating tips (roughly in order of use) in 44 minutes. Amusing and useful enough. 7/10

Protector – A Classic of Known Space by Larry Niven

Filling in a spot in Niven’s universe. Better than many of his Known Space stories. Great background on the Pak in Hard Core package. Narrator gave everybody strong Australian accents for some reason. 7/10

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly

Good book on 12 long term “deep trends” (filtering, remixing, tracking, etc.) and how they have worked over the last and next few decades (especially in the context of the Internet). Pretty interesting and mostly plausible. 7/10

Caesar’s Last Breath: Decoding the Secrets of the Air Around Us by Sam Kean

Works its way through the gases in, and the evolution of, Earth’s atmosphere, their discovery, and several interesting asides. Really enjoyed this, would have enjoyed 50% more of it. 9/10


,

CryptogramHacking a Robot Vacuum

The Diqee 360 robotic vacuum cleaner can be turned into a surveillance device. The attack requires physical access to the device, so in the scheme of things it's not a big deal. But why in the world is the vacuum equipped with a microphone?

Worse Than FailureUndermining the Boss

During the browser wars of the late 90's, I worked for a company that believed that security had to consist of something you have and something you know. As an example, you must have a valid site certificate, and know your login and password. If all three are valid, you get in. Limiting retry attempts would preclude automated hack attempts. The security (mainframe) team officially deemed this good enough to thwart any threat that might come from outside our firewall.

The Murder of Julius Caesar

As people moved away from working on mainframes to working on PCs, it became more difficult to get current site certificates to every user every three months (a security team mandate). The security team decreed that neither email nor snail-mail was secure enough, so a representative of our company had to fly to every client company, go to every user PC and insert a disk to install the latest site certificate. Every three months. Ad infinitum.

You might imagine that this quickly became a rather significant expense for the business (and you'd be right), so they asked our department to come up with something less costly.

After a month of designing, our crack engineers came up with something that would cost several million dollars and take more than a year to build. I tried, but failed to stifle a chuckle. I told them that I could do it for $1500 (software license) and about two days of work. Naturally, this caused a wave of laughter, but the boss+1 in charge asked me to explain.

I said that we could put an old PC running a web server outside the firewall and manually dump all the site certificate installer programs on it. Then we could give the help desk a simple web page to create a DB entry that would allow our users to go to that PC, load the single available page to enter the unique code provided by the help desk, and get back a link to download a self-installing program to install the site certificate.

To preempt the inevitable concerns, I pointed out that while I had some knowledge of how to secure PCs and databases, I was not by any means an expert, but our Security Analysts (SAs) and DBAs were. We could have the SA's strip out all but the most necessary services, and clamp down the firewall rules to only let it access a dedicated DB on an internal machine on a specific port. The DBA's could lock down the dedicated DB with a single table to only allow read access from the web page; to pass in the magic phrase and optionally spit back a link to download the file.

Of course, everyone complained that the PC in-the-wild would be subject to hacking.

Since I believe in hoping for the best but planning for the worst, I suggested that we look at the worst possible case. I take out a full page ad in Hacker's Weekly saying "Free site certificates on exposed PC at IP a.b.c.d. They can be used at http://www.OurCompany.com. Enjoy!" After all, it can't get worse than that, right? So Mr. Hacker goes to the page and downloads the site certificate installation programs for every user and then goes to our website. What's the first thing he faces? Something he has and something he knows. He has the certificates, but doesn't know any login/passwords. Since the security people have already blessed this as "Good Enough", we should be safe.

After much discussion, everyone agreed that this made sense, but that they (reasonably) wanted to verify it. It was agreed that the SA's and DBA's had the needed expertise to strip and lock down the PC, firewall and DB. I took an old PC out of one of the closets, did a fresh install, put on the latest web server and relevant software, and then installed the few things we needed. Then I handed it to the SA's and told them to strip it and lock it down. I created a tiny DB with a single table and two stored procedures; one for the help desk to add a new time-limited entry for a user and the other to check to see if an unexpired entry existed and return a link to the installer on the exposed PC. Then I handed it to the DBA's and told them to restrict it so the table could only be accessed via the two stored procs, and to only allow the Help desk to call the proc that created the time limited entry for the user, and the external IP to call the proc to query the table. Since all of our users already had credentials to call the help desk, this was only a minimal additional cost.
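
For readers who want the shape of that design in code, here is a minimal sketch in Python, with SQLite standing in for the dedicated DB and its two stored procedures; all table, column and function names are made up for illustration:

    import secrets
    import sqlite3
    import time

    db = sqlite3.connect("certs.db")
    db.execute("""CREATE TABLE IF NOT EXISTS tokens
                  (code TEXT PRIMARY KEY, installer TEXT, expires REAL)""")

    def helpdesk_create_entry(installer_link, ttl_seconds=3600):
        # Help desk side: mint a unique, short-lived code for one user.
        code = secrets.token_urlsafe(8)
        db.execute("INSERT INTO tokens VALUES (?, ?, ?)",
                   (code, installer_link, time.time() + ttl_seconds))
        db.commit()
        return code

    def redeem(code):
        # Exposed-PC side: hand back the installer link only for a
        # valid, unexpired code; anything else gets nothing.
        row = db.execute("SELECT installer FROM tokens "
                         "WHERE code = ? AND expires > ?",
                         (code, time.time())).fetchone()
        return row[0] if row else None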

We threw a couple of test certificate installers on it and put it outside the firewall. After I tested the "good" paths, I had the SA's and DBA's try to hack around their restrictions. When they couldn't, it was deemed safe and loaded up with all the certificates. I wrote up a very short how-to manual and had it installed in production.

This reduced the certificate installations to one trip per quarter to our data center.

The user was pleased at having saved millions of dollars on an ongoing basis.

I found out later that I inadvertently pissed off the boss+1 because he was planning on hiring more people for this project and I negated the need for him to expand his empire.

Whoops.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Sam VargheseWill the Lions be third-time lucky? And will the ref learn to hold his peace?

For a third successive year, South African super rugby side the Lions have made it to the final where they will, for a second year running, lock horns with the Canterbury Crusaders, the most successful team in the 23-year history of the competition.

Last year, the Lions took on the Crusaders at home but were beaten 17-25, with their coach, Johan Ackermann, to blame. Ackermann let the side down in 2016 too, when the Lions were beaten by the Hurricanes.

This year, the task will be harder as the final is being played at the Crusaders’ home ground.

The Crusaders have won the trophy eight times, the Lions not even once. In fact, South African teams have only won the tournament thrice, with the Bulls being the lone team to succeed. The last time the Bulls won was in 2010. Australia has not done that well either, with four wins in the 22 years; the Brumbies have won twice, and the Reds and Waratahs one apiece.

Teams from New Zealand have won 15 of the 22 finals since the tournament began in 1996, with every team in that country having won at least once. Only on three occasions has a New Zealand team not figured in the final. The odds are thus heavily weighted in favour of New Zealand.

The Lions, at least, have a new coach this year and hopefully Swys de Bruin will not make the same fundamental errors that Ackermann did. But de Bruin did not sound too optimistic when, soon after making the final, he characterised the Crusaders as “unreal” and said he was hoping for a miracle in Saturday’s (August 4) final.

Said de Bruin: “I saw a few miracles happen today (the Lions’ win over the Waratahs). The bounce of the ball went our way and I am very thankful that we are able to go to the best team in the world.

“I believe in miracles and this team has proven it. So anything can happen. The Crusaders are the favourites – they are a very good team – but it is still 80 minutes between four white lines so it will be interesting.”

Some pundits were already saying after the Crusaders overwhelmed the Hurricanes in the semi-finals that the Canterbury outfit might as well be crowned champions then and there. New Zealand rugby writer Gregor Paul wrote: “The Crusaders might as well be crowned champions now. There doesn’t seem to be a lot of need to ask them to play the final – as, barring some kind of impossible to foresee disaster, they are going to win it.

“There is too much quality in their personnel and too much certainty and trust in how they are playing, that no other side in the competition gets remotely close.”

That may be a trifle optimistic. The Lions are a good team, but they will have to, more than anything else, overcome a big mental hurdle.

Playing in Canterbury will not help their cause one bit. If the weather is dry, then they will probably be more competitive. But if it is wet and cold, then the Crusaders would be overwhelming favourites. They seem to be adept at winning those tight, grim battles that are often decided on penalties.

However there is one factor that can screw up the final and that is the choice of referee, with Australia’s Angus Gardner set to officiate. There are two kinds of referees in rugby: one type stamps his authority on a game very early on, and then more or less becomes invisible, with the players toeing the line, having understood from those early manoeuvres that the ref is not to be toyed with.

The other type is the Gardner type, one who thinks the game is all about him, is overly verbose in explaining each and everything to the players, and unnecessarily making chit-chat without shutting his trap and letting the game flow.

No matter how well the Lions or Crusaders play, Gardner could well be the decisive factor in the 23rd super rugby final. Don’t say you have not been warned.

Planet Linux AustraliaDavid Rowe: Rpitx and 2FSK

This post describes tests to evaluate the use of rpitx as a 2FSK transmitter.

About 10 years ago I worked on the Village Telco – a system for community telephone networks based on WiFi. At the time we used WiFi SoCs which were open source at the OS level, but the deeper layers were completely opaque which led (at least for me) to significant frustration.

Since then I’ve done a lot of work on the physical layer, in particular building my skills in modem implementation. Low cost SDR has also become a thing, and CPU power has dropped in price. The physical layer is busy migrating from hardware to software. Software can, and should, be free.

So now we can build open source radios. No more chip sets and closed source.

Sadly, some other aspects haven’t changed. In many parts of the world it’s still very difficult (and expensive) to move an IP packet over the “last 100 miles”. So, armed with some new skills and technology, I feel it’s time for another look at developing world and humanitarian communications.

I’m exploring the use of rpitx as the heart of HF and UHF data terminals. This clever piece of software turns a Raspberry Pi into a radio transmitter. Evariste (F5OEO), the author of rpitx, has recently developed the v2beta branch that has improved performance and includes some support for FreeDV waveforms.

Running Tests

I have some utilities for the Codec 2 FSK modem that generate frames of test bits. I modified the fsk_mod_ext_vco utility to drive a utility Evariste kindly wrote for FreeDV experiments with rpitx. So here are the command lines that generate 600 seconds (10 minutes) of 100 bit/s 2FSK test frames, and transmit them out of rpitx, using a 7.177 MHz carrier frequency:

$ ./fsk_get_test_bits - 60000 | ./fsk_mod_ext_vco - ~/rpitx/2fsk_100.f 2 --rpitx 800 100
~/rpitx $ sudo ./freedv 2fsk_100.f 7177000

On the receive side I used my FT-817 connected to FreeDV to sample the signal as a wave file, then fed the signal into C and Octave versions of the demodulator. The RPi is top left at rear, the HackRF in the foreground was used initially as a reference transmitter:

Results

It works really well! One of the most important tests for any modem is adding calibrated noise and measuring the Bit Error Rate (BER). I tried Eb/No = 9dB (-5.7dB SNR), and obtained 1% BER, right on theory for a 2FSK modem:

 $ ./cohpsk_ch ~/Desktop/2fsk_100_rx_rpi.wav - -26 | ./fsk_demod 2 8000 100 - - | ./fsk_put_test_bits -
FSK BER 0.009076, bits tested 11900, bit errors 108
SNR3k(dB): -5.62

This line takes the sample wave file from the FT-817, adds some noise using the cohpsk_ch utility, then pipes the signal to the FSK demodulator, before counting the bit errors in the test frames. I adjusted the 3rd “No” parameter in cohpsk_ch by hand until I obtained about 1% BER, then checked the SNR against the theoretical SNR for an Eb/No of 9dB.
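
As a sanity check on those numbers (my own arithmetic, assuming the usual noncoherent 2FSK detector and the 3000 Hz noise bandwidth behind the SNR3k figure):

    \mathrm{BER} = \tfrac{1}{2} e^{-E_b/2N_0} = \tfrac{1}{2} e^{-7.94/2} \approx 0.0094

    \mathrm{SNR}_{3k} = E_b/N_0\,[\mathrm{dB}] + 10\log_{10}(R_b/B) = 9 + 10\log_{10}(100/3000) \approx -5.8\ \mathrm{dB}

Both agree nicely with the measured 0.009076 and -5.62 dB above.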

Here are some plots from the Octave version of the demodulator, with no channel noise applied. The first plot shows the time and frequency domain signal at the input of the demodulator. I set the shift at 800 Hz, so you can see one tone at 800 Hz, the other at 1600 Hz:

Here is the output of the FSK demodulator filters (red and blue for the two filter outputs). We can see a nice separation, but the red “high” level is a bit lower than blue. Red is probably the 1600 Hz tone; the FT-817 has a gentle low pass filter in its output, reducing higher frequency tones by a few dB.

There is some modulation on the filter outputs, which I think is explained by the timing offset below:

The sharp jump at 160 samples is expected, that’s normal behaviour for modem timing estimators, where a sawtooth shape is expected. However note the undulation of the timing estimate as it ramps up, indicating the modem sample clock has a little jitter. I guess this is an artefact of rpitx. However the BER results are fine and the average sample clock offset (not shown) is about 50ppm which is better than many sound cards I have seen in use with FreeDV. Many of our previous modem transmitters (e.g. the first pass at Wenet) started with much larger sample clock offsets.

A common question about rpitx is “how clean is the spectrum”. Here is a sample from my Rigol DSA815, with a span of 1MHz around the 7.177 MHz tx frequency. The Tx power is actually 11dBm, but the marker was bouncing around due to FSK modulation. On a wider span all I can see are the harmonics one would expect of a square wave signal. Just like any other transmitter, these would be removed by a simple low pass filter.

So at 7.177 MHz it’s clean to the limits of my spec analyser, and exceeds spectral purity requirements (-43dBc + 10log(Pwatts)) for Amateur (and I presume other service) communications.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV August 2018 Main Meeting: PF on OpenBSD and fail2ban

Aug 7 2018 19:00
Aug 7 2018 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE CHANGED START TIME

7:00 PM to 9:00 PM Tuesday, August 7, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV August 2018 Workshop: Dual-Booting Windows 10 & Ubuntu

Aug 25 2018 12:30
Aug 25 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

UPDATE: Rescheduled due to a conflict on the original date - please note new date!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Rondam RamblingsI didn't do it... but if I'd done it...

Team Trump's rhetoric is starting to sound like the Cell Block Tango.

They had it coming, they had it coming!
They had it coming all along!
I didn't do it, but if I done it
How could you tell me that I was wrong?

Seriously, go through the ten iterations (at least) of Trump's story and tell me that it doesn't remind you of those lyrics, particularly Rudi Giuliani saying, "The President wasn't in the

,

Cory DoctorowI’ll be live on BookTV’s In Depth on August 5!

I’m headed to DC to sit down in studio with BookTV’s “In Depth” on August 5; it’ll air live on August 5 at 12PM Eastern/9AM Pacific, with repeats on August 6 at 12AM Eastern (9PM Pacific on August 5) and on August 11 at 9AM Eastern/6AM Pacific. It’s a phone-in!

LongNowLightning, Stars and Space: Art That Leaves the Gallery Behind

Star Axis by Charles Ross.

In Part I of our exploration of Land Art in the American West, we covered the birth of the Land Art movement in the 01960s and some of the seminal works created by Robert Smithson, Michael Heizer, Nancy Holt and James Turrell, which expanded the definition of art and opened up new possibilities for the location of artworks. Drawn to the desert for its long vistas, compelling terrain, beautiful light and dark night skies, these artists pushed through the boundaries of art in their day to create monumental works that explored the expansiveness of earth and time. Though the early death of Smithson dampened the momentum of the movement, the ideas around Land Art had taken hold.

In Part II of our series, we move out of the 01960s to explore the work of three artists who created their major works during the 01970s and 01980s. We see a shift with these artists to a focus on complete control over the exhibition of their work and meticulously curating the experience the viewer has coupled with a goal of permanence of the artwork in situ. Marfa, Texas is only about 80 miles from our Clock Site, making Donald Judd’s work there especially relevant to us.

Donald Judd and Marfa, Texas

Donald Judd, untitled work in concrete.

Donald Judd began his artistic career as a painter in New York in the late forties. He graduated from Columbia University with a degree in philosophy in 01953 and was soon making almost exclusively three-dimensional artwork. Like many other artists in the 01950s and 60s, he rejected traditional painting and sculpture — as well as the museum gallery system of exhibiting art — as too limited.

Donald Judd.

This reaction against the conventional art world manifested itself in different ways for different artists. Early large-scale Land Art such as Robert Smithson’s Spiral Jettyand Michael Heizer’s Double Negative was created in direct opposition to the idea that art was an object which could be exhibited in a room in a museum, or even purchased and mounted on a wall.

Land Artists literally fled New York City for the desert, where few art collectors ventured, and, in Heizer’s case, made negative artwork — excavations in the earth which could not be commercialized. Similarly, the subject of Judd’s artwork came to include “the relationship of the object to space and the larger environment.”¹ But as Judd’s work matured he strove to create art that was in the American Southwest, not simply outside New York City museums.

Judd had the opportunity to develop and articulate his own artistic perspectives by writing art criticism for major art journals in the early sixties. His 01964 essay Specific Objects defined an emerging approach to art. “The work is diverse,” he wrote, “and much in it that is not in painting and sculpture is also diverse. But there are some things that occur nearly in common.”

One main characteristic that Judd identified is three-dimensionality. He argued that this art “resembles sculpture more than it does painting, but it is nearer to painting.” Judd wanted to avoid both the composed nature of traditional sculpture (“Most sculpture is made part by part, by addition, composed. The main parts remain fairly discrete.”) and the illusionary nature of traditional painting (“Actual space is intrinsically more powerful and specific than paint on a flat surface.”).

“Judd’s art produces local order, meaning that there is no framework surrounding creative experience.” — Art historian David Raskin²

Judd’s concern for the relationship between an art piece and the space surrounding it led him to develop very particular standards for how art should be displayed. He was extremely critical of the way most galleries curated their exhibits and made it one of his highest priorities to control the circumstances in which viewers experienced his art. Former Tate director Nicholas Serota writes that

his attention to the installation and presentation of his own work, and that of artists whom he admired, established new parameters for the display of art, challenging and eventually changing the conventions of museums.³

Judd wanted exhibits that were not only carefully installed, but also permanent. He observed that artists’ work could be distorted not only by improper display but by the passage of time. In later years, when he established The Chinati Foundation to permanently maintain the artwork at Marfa, Texas, he made his intentions clear:

Somewhere a portion of contemporary art has to exist as an example of what the art and its context were meant to be. Somewhere, just as the platinum-iridium meter guarantees the tape measure, a strict measure must exist for the art of this time and place.⁴

Judd’s first opportunity to design his own exhibition space came in 01968 when he purchased a former garment factory in SoHo at 101 Spring Street.

101 Spring Street. Judd Foundation.

Judd cleared the cast-iron-frame building out to create open and brightly sunlit floors — each one designated for either living, working or exhibiting art. Soon after, though, his aspirations outgrew the dense urban setting of New York City and he began searching the southwestern U.S. for a site where he could permanently house comprehensive collections of his own and other artists’ work. This search led him to Marfa, a small town in western Texas where he was to spend much of his life converting old industrial buildings into meticulously designed living, working and exhibition spaces.

Donald Judd first encountered the landscape of western Texas in 01946 while en route to Los Angeles and sent a telegram from Van Horn — about 75 miles from Marfa — to his mother:

“dear mom van horn texas. 1260 population. nice town beautiful country mountains love don.”⁵

Judd began renting a house in Marfa in 01973 and purchased two former army buildings which he began to renovate, though he did not make Marfa his permanent residence until 01977. With the help of the Dia Foundation, whose mission “to commission, support, and present site-specific long-term installations and single-artist exhibitions to the public” conforms remarkably well to Judd’s philosophy, Judd began purchasing more land and buildings. He also started working on the art pieces for Marfa, and in 01980 fifteen concrete sculptures were among the first pieces to be completed.

Another seminal work by Judd, 100 untitled works in mill aluminum, reveals his deep commitment to both the works of art and the context in which they are exhibited. The 100 works were created over a number of years and installed in two large former artillery sheds. Extensive reworkings of these two buildings opened the interior space up to a flood of natural light through the replacement of the garage doors with walls of glass. Vaulted roofs were added to heighten and refine the proportions of the buildings; these proportions were echoed in the installation of the 100 works. The works themselves, 100 aluminum boxes with identical outside dimensions, but with different interior treatments, offer a subtle and continuously shifting experience of the artwork as the angle of the light outside changes and plays off the metal exteriors and varied interiors. The viewer’s patience is rewarded as the piece reveals itself over time.

In 01986, Judd formed The Chinati Foundation to ensure that the site would continue to develop and to be maintained, and the following year he held an open house in Marfa with the town’s residents, understanding that a permanent installation there would be an important part of the community. The complex now includes 15 buildings — mostly old military facilities — as well as ranch land, and permanently displays bodies of work installed not only by Donald Judd, but by Carl Andre, Dan Flavin, Richard Long, Claes Oldenburg and Coosje van Bruggen, John Chamberlain, and several others.

Judd’s careful execution of his vision for Marfa allowed him and the artists he invited there to create artwork on their own terms. An artist’s control over their work often diminishes or disappears once it leaves their studio, entering the world of art collectors or museums where it can continue to change hands indefinitely. Marfa offered the opportunity for Judd to be both artist and curator, and it assured that his installations would remain unaltered as long as the institutions that maintain them survive. For Judd, these conditions were paramount.

Judd’s own buildings and those of the Chinati Foundation manifest his ideals: his demands on art and on the manner of its installation; its connection to life as it is lived, to architecture, and to the landscape.⁶

Donald Judd passed away in 01994, having established The Chinati Foundation as a caretaker for Marfa as well as The Judd Foundation to maintain and preserve his own work there and in New York.

My first and last interest is in my relation to the natural world, all of it, all the way out. This interest includes my existence […] the existence of everything and the space and time that is created by the existing things. Art emulates this creation or definition by also creating, on a small scale, space and time. — Donald Judd


Charles Ross and Star Axis

Like James Turrell, Charles Ross came to the art world with a scientific background. He attended UC Berkeley, graduating with a BA in mathematics before completing his MA in sculpture in 01962. His early work focused mainly on installations using welded steel, plexiglass, prisms and lenses, sometimes in collaboration with theater and dance groups.

Coffin, Dwan Gallery 01968

Amongst the materials Ross worked with early in his career, prisms played an especially important role. In 01965 he developed a new technique for building large, clear prisms and began exploring the possibilities of prisms as sculptures. He’d been spending his time working and exhibiting in both San Francisco and New York and came to know Michael Heizer and the other artists in the Dwan orbit. Ross’s transition from multi-media installationist to prism expert prompted Heizer to write an obituary for Ross, marking the death of his early work period.

Truncated Cubes, Dwan Gallery 01968

Ross exhibited his new prisms at the Dwan Gallery in New York three times between 01968 and 01971. Over those years, Ross’s immersion in the scene that gave birth to Land Art seems to have turned his thinking about prisms inside-out:

In 1969 Ross shifted the emphasis of his artwork from that of the minimal prism object, to the prism as an instrument through which light revealed itself so that the orchestration of spectrum light became the artwork. This began his lifelong interest in projecting large bands of solar spectrum into living spaces. — Charles Ross Studio

Dwan Light Sanctuary, 01996.

Rather than objects on pedestals for people to gather around and look into, prisms for Ross became a tool to create an artwork that instead surrounds its viewer.

In his progression from creating sculptural objects to immersive gallery installations, Ross sought communion with light. The next step in that direction took him, along with the Land Artists he’d met working at the Dwan Gallery, beyond what New York could offer and into the American Southwest. In 01971 he conceived Star Axis, his signature Land Art piece.

After four years of searching for a suitable location, Ross began construction on what he calls an “architectonic sculpture,” a sculptural representation of stellar alignments and the ways they change over long time periods. The piece is meant to evoke the deep history of humanity’s relationship to the stars. Built on a mesa in the New Mexico desert, Star Axis has been under construction for decades and will be eleven stories tall when complete. As a naked-eye observatory, Star Axis features alignments with the Sun on solstices and equinoxes, as well as a long stairway and tunnel that are aligned with the Earth’s axis, allowing the axial precession cycle currently centered on Polaris to be viewed while climbing the structure.

Star Axis features, among other things, an opening at the top of the Star Tunnel that is aligned with the Earth’s axis so that it points directly at the celestial pole — the point in space around which we observe the stars rotating over the course of a night. For most of human history, Polaris has been very close to this point and so has been known as the North Star or the Pole Star — it appears to remain motionless while all the other stars spin around it. The Earth’s axis wobbles, though, completing one cycle every 26,000 years, and this axial precession causes Polaris to appear to move away from the celestial pole. As Polaris shifts farther from the celestial pole, it begins to rotate through the sky in a wider circle.

Standing at the base of Star Axis’s Star Tunnel, a viewer sees only a tiny portion of the sky surrounding the celestial pole. This narrow view represents one extreme of the precessional cycle: the moment when Polaris is closest to being our true North Star. As a visitor ascends the tunnel, more sky is revealed, tracing Polaris’s drift away from the celestial pole and toward a wider circular motion through the sky. The top of the tunnel marks the opposite extreme: the circle Polaris traces when it is farthest from being our North Star.

Ross hopes to incorporate these astronomical details along with alignments with solstices, equinoxes, the equator and more into an experience combining site and structure that communicates our multi-millennial relationship with the stars.

It’s about the feeling you get when you’re grasping your relationship to the stars. Ultimately, it’s about the earth’s environment — both in time and space — extending out to the stars. — Charles Ross

Turrell, Ross and others create experiences that can only be had in particular places, often at particular times. These experiences are meant to create for the viewer a relationship with the surrounding environment and to expand, temporally and spatially, what constitutes that environment. A small contingent of Long Now Staff and Board members was able to visit the site in 02000 and found the scale and multi-decade effort impressive. Stewart Brand pointed out after the visit:

Astronomical alignments can be pretty exciting, and they are intensely educational. We don’t usually sense the grand orientations, but when we do we take on something deep.

The Precession of the Equinoxes and its 26,000 year time frame can be apprehended personally and thrillingly. — Stewart Brand


Walter De Maria and The Lightning Field

Walter De Maria, The Lightning Field, 1977. © Estate of Walter De Maria. Photo: John Cliett

Walter De Maria’s most well-known piece, called The Lightning Field, is a great example of Land Art’s interest in crafting experiences. It is physically made up of 400 stainless steel poles set out in a grid in western New Mexico. Nearby, there is a cabin for visitors to spend the night in hopes of catching a glimpse of thunderstorms. The experience designed by De Maria is a specific one and includes strict instructions for visitors (no vehicles, cameras, or outside food) that are enforced by the caretakers of the piece, the Dia Art Foundation.

Walter De Maria

Created in 01977, The Lightning Field is an experience of the land and the sky and the ways they interact, but it also explores our attempts to quantify our environment. Measurement is a theme throughout much of De Maria’s work and it is embodied in the dimensions of The Lightning Field. The grid of poles measures one mile along one side and one kilometer along the other, mixing normally incompatible systems and illustrating their arbitrariness. Along similar lines, De Maria’s Silver Meters and Gold Meters feature plugs of precious metals inserted into plates of steel. The total precious metal contained in each plate is one troy ounce, but it is distributed over an increasing number of plugs (the squares of the integers 2 through 9, so 4, 9, 16, and on up to 81) for each successive plate.

…in its exploration of the numerical in relation to the serial, De Maria’s trajectory has been singular. It ranges from the vast scale of The Lightning Field — a Land Art work in New Mexico realized under the auspices of Dia Art Foundation in 1977 — whose grid of four hundred stainless steel poles spans a field a kilometer by a mile in dimension, to this more modest pair of works, which similarly incorporates both metric and English (or Imperial) systems of linear measurement. Common to both, too, is the use of highly polished metal components and pristine workmanship, together serving to impart to the piece a sense of absoluteness — of indubitability — as if it existed outside the vicissitudes of chance and the inchoate. — Dia Art Foundation

Top: Silver Meters, 01976. Bottom: Gold Meters, 01976–77.

During a visit to The Lightning Field in 02000, a small group of Long Now Staff and Board Members found that, given how rarely lightning actually strikes there, the experience focused more on the surrounding landscape:

There was no sign of lightning but we had an incredible sunset and the cabin is actually the best part. At the least this is probably the coolest place to stay in New Mexico. The Lightning Field itself is great at sunset/rise and almost unseen otherwise. Each 22ft tall by 2 inch diameter solid stainless pole is parabolically tapered at the top to a sharp point. The stainless has remained shiny for 23 years. An interesting effect occurs as the sun angles down toward the horizon and picks up reflection off the taper; the poles almost look as though they have lights at the tops.

Since lightning can never be guaranteed, the experience of visiting the piece is often more about meditation and exploration than it is about watching a particular phenomenon. The possibility and implication of lightning hints at the power of the environment, while the grid and its measurements allude to our hopes to quantify and bring order to it. Through simple, minimal creations, Land Art or otherwise, De Maria creates experiences that evoke the world’s complexity and defiance in the face of our attempts to understand it.


Land Art started in many ways as a rejection. The early participants all consciously sought to escape the limits and the commodification of the modern, minimalist gallery scene and so defined much of their work in the negative (Heizer’s Double Negative making it explicit). As the scene grew and pulled in more artists, the ideas being realized through Earthworks began to take on a positive character of their own. Donald Judd wanted full control over the environments in which his sculptures were viewed, and he wanted them to remain permanently where he put them. Charles Ross’s explorations of the sun and stars led him away from the bright city lights to a place where the sky is a defining characteristic of the landscape. Walter De Maria envisioned an experience that could only be sufficiently crafted and controlled in an isolated natural setting.

Fully immersive, designed experiences, in touch with the natural environment and evocative of our relationship to it, often installed with the intention of permanence (or at least of letting nature, rather than a curator, decide how and when to end the show): these came to be the hallmarks of the genre. They describe some of the 10,000 Year Clock’s aspirations as well; the paths and tools forged in the Land Art movement’s workshops and sketchbooks are a resource for Long Now’s attempts to provide an experience that encourages long-term thinking through The Clock. In a third installment, we’ll take a look at more contemporary efforts within Land Art — some of the original artists are still working away, while many newcomers have helped to keep the movement vital.



CryptogramIdentifying People by Metadata

Interesting research: "You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information," by Beatrice Perez, Mirco Musolesi, and Gianluca Stringhini.

Abstract: Metadata are associated to most of the information we produce in our daily interactions and communication in the digital world. Yet, surprisingly, metadata are often still categorized as non-sensitive. Indeed, in the past, researchers and practitioners have mainly focused on the problem of the identification of a user from the content of a message.

In this paper, we use Twitter as a case study to quantify the uniqueness of the association between metadata and user identity and to understand the effectiveness of potential obfuscation strategies. More specifically, we analyze atomic fields in the metadata and systematically combine them in an effort to classify new tweets as belonging to an account using different machine learning algorithms of increasing complexity. We demonstrate that through the application of a supervised learning algorithm, we are able to identify any user in a group of 10,000 with approximately 96.7% accuracy. Moreover, if we broaden the scope of our search and consider the 10 most likely candidates we increase the accuracy of the model to 99.22%. We also found that data obfuscation is hard and ineffective for this type of data: even after perturbing 60% of the training data, it is still possible to classify users with an accuracy higher than 95%. These results have strong implications in terms of the design of metadata obfuscation strategies, for example for data set release, not only for Twitter, but, more generally, for most social media platforms.

Worse Than FailureOld Lennart

Gustav's tech support job became a whole lot easier when remote support technology took off. Instead of having to slowly describe the remedy for a problem to a computer-illiterate twit, he could connect to their PC and fix it himself. The magical application known as TeamViewer was his new best friend.


Through Gustav's employer's support contract with CAD+, a small engineering design firm, he came to know Roger. The higher-ups at CAD+ decided to outsource most of their IT work and laid off everyone except Roger, who was to stay on as liaison to Gustav and Co. Roger was the type whose confidence in his work did not come close to matching the quality of it. He still felt like he could accomplish projects on his own without any outside help. Thanks to that, Gustav had to get him out of a jam several times early in their contract.

Roger's latest folly was upgrading all of the office workstations from the disaster that was Windows Vista to the much more reliable Windows 7. "I had such a smooth rollout to Windows 7, I tell ya," Roger bragged over the phone. "I brilliantly used this cloning process to avoid installing the same things repeatedly!" Gustav rolled his eyes while wondering if this was a support call or if Roger just needed someone to talk to. "Well anywho, I had two system images - one with the basic software everyone gets, and one that included SolidWorks for our CAD designers. Seems they don't have SolidWorks even though I installed it. Can you do your support wizardry and take a look?"

Gustav agreed and was transferred to Dave, their lead CAD designer. He figured Dave just didn't know where to find SolidWorks on the new OS and it would be a quick call. He got Dave's TeamViewer ID and connected to his PC in no time. He decided to ignore the fact that Dave had a browser open to an article about the twenty greatest Michael Bolton songs of all time.

"Thanks Dave, I'm in. Are you able to see me moving your mouse?" Gustav asked. He wasn't. "Ok then, there must be some lag here. I'm going to look around to find where you have SolidWorks and make a shortcut for you." Gustav spent the next few minutes looking at the places any sane person would install SolidWorks. Since Roger wasn't sane, he didn't find it in a logical directory. He then went to Programs and Features and noticed that it wasn't installed anywhere.

"Hey Dave, bad news," Gustav informed. "It seems like your PC got the wrong image from Roger and your CAD software isn't on there."

"Bummer. I didn't know if you were doing anything, on my side the computer screen was just sitting there," Dave replied, just as an incredible ruckus broke out behind him. Gustav could hear doors slamming and someone cursing at Roger in the background. Dave chuckled quietly into the phone.

"... is everything ok there?" Gustav asked, concerned.

"Oh, that's just Old Lennart throwing a fit. He's our cranky old Vice President. He's super pissed because Roger keeps hacking his computer and messing around on it."

Gustav suddenly had a suspicion. "Dave, don't take this the wrong way, but were you reading an article about Michael Bolton's greatest hits?" he asked cautiously.

"Hell no, why would I be doing that?" Dave shot back, almost sounding insulted. Gustav apologized and quickly ended their call.

Once Roger was done getting chewed out by Old Lennart, Gustav gave him a call. In their discussion, Roger revealed how he'd installed TeamViewer before making his system clones to save time. Gustav explained how that caused every system image to have the same TeamViewer ID, which was bad. Since Old Lennart was always the first one in the office each morning, any remote connection through TeamViewer would connect to his PC. Thus, whenever Gustav or one of his cohorts connected for remote support, it seemed like someone was hacking in to Lennart's computer.

Roger remedied the problem over the next couple days by reinstalling TeamViewer and bringing in a series of "I'm sorry" baked goods for Old Lennart; but it wasn't enough to save his hide. By the end of the week, he was informed that their IT department would be further reduced from one employee to zero. With his computer and Roger problems addressed, Old Lennart could return to researching his favorite performing artist.


Rondam RamblingsFor at least one Trump voter, heaven is about new appliances

The Washington Post has a really great in-depth profile of evangelical Christian Trump voters in one small Alabama town, and Daily Kos did a terrific analysis.    There's not much I can add, except to say that I think both pieces are worth reading in their entirety, especially the original Post piece. The part of that piece that really struck me was this: She rubbed her sore knee, which was

,

Rondam RamblingsFire TFR update: from bad to worse

A week ago I reported on an unprecedented number of fire-related temporary flight restrictions in southern Oregon.  This is what the situation looked like this morning: As you can see, what used to be half a dozen separate TFRs (temporary flight restrictions) have merged into one monster TFR (though they are still officially listed separately).  And it's still only July.  It will likely be

LongNowThe Future Right Around the Corner

Medium’s “Future Human” essay collection explores the scientific, technological, social and medical advances that are changing where and how we live. The collection features work by and about various members of the Long Now community, including past speakers (Andy Weir and Annalee Newitz), collaborators (the geneticist George Church), and staff (Long Now Editor Ahmed Kabil).

 

Digitocracy by Andy Weir

The author of The Martian and Artemis offers a vision of a future where computers rule. Weir spoke at The Interval at Long Now in 02015 about what a real world mission to Mars would require.

 

Sex Robots Could Save Your Relationship (And Other Good News on the Future of Love) by Annalee Newitz

“In a nonbinary, nonmonogamous future,” writes Annalee Newitz, “kids’ lives could be full of many loving caregivers.” Newitz spoke at The Interval at Long Now in 02018 about how science needs fiction.

 

Predictions from the Most Influential Geneticist of Our Time by Matthew Hutson

Matthew Hutson interviews Dr. George Church, who has collaborated with Revive & Restore in bringing back the woolly mammoth.

 

What Happens When A Computer Runs Your Life by Ahmed Kabil

Long Now Editor Ahmed Kabil interviews Max Hawkins, an ex-Google programmer who lets an algorithm pick where he lives, what he does—even what tattoo to get.

You can read the rest of the collection here.

 

Don MartiFederated paywalls and consent bits

Today’s web advertising is mostly a hacking contest. Whoever can build the best system to take personal information from the user wins, whether or not the user knows about it. Publishers are challenging adfraud and adtech hackers to a hacking contest, and, no surprise, coming in third.

The near future of web advertising is likely to be much different.

  • Mainstream browsers, starting with Apple Safari, are doing better at implementing user preferences on tracking. Most users don't want to be "followed" from one site to another. Users generally want their activity on a trusted site to stay with that trusted site. Only about a third of users prefer ads to be matched to them, so browsers are putting more emphasis on the majority's preferences.

  • Privacy law—from Europe, to California, to India—is being updated to better reflect user expectations and to keep up with new tracking practices.

As users get the tools to control who they share their information with (and they don’t want to leak it to everyone), the web advertising business is transforming from a hacking contest into a reputation contest. The rate-limiting reactant for web advertising isn’t (abundant and low-priced) user data; it’s the (harder to collect) consent bits required to use that data legally. Whoever can build the most trustworthy place for users to choose to share their information wins. This is good news if you’re in the business of reporting trustworthy news or fairly compensating people for making cultural works, not so good news if you’re in the business of tricking people out of their data.

Federated paywall systems are not just yet another attempt at micropayments, but also have value as a tool for collecting trust. The user's willingness to pay for something is a big trust signal. A small payment to get past a paywall can produce a little money, but a lot of valuable user data and the consent bits that are required to use that data.

The catch is to figure out how to design federated paywalls so that the trusted site, not the paywall platform, captures the value of the data, and so that the platform can't leak or sell the user's data consent outside the context in which they gave it. In the long run, a consent system that tries to hack around user data norms to rebuild conventional adtech is going to fail, but not before a lot of programmers lose a lot of carpal tunnels on privacy vs. anti-privacy coding, and a lot of users face a lot of frustrating consent dialogs. Browser improvements and court cases will filter deceptively collected consent bits out of the system.

Consent bits are a new item of value that needs new rules. The web ad business is not going to be able to sell and sync consent bits the same way that it handles tracking cookies now. Consent bits are not a "data is the new oil" commodity, and can really only move along trust networks, with all the complexity that comes with them. New tools such as federated paywalls are an opportunity to implement consent handling in a sustainable way.

,

Rondam RamblingsThe Republican's three-step plan

Republicans have a three-step plan to try to protect Donald Trump from facing the consequences of collaborating with the Russians. Step 1: Deny that it happened (c.f. "Witch hunt") Step 2: When there is evidence that it happened deny the evidence (c.f. "Fake news") Step 3: When the evidence becomes undeniable, deny that it was ever a big deal. To those currently implementing step 3, I pose

Rondam RamblingsThe wheels on the bus go round and round

I don't know if the wheels are actually coming off the Trump bus yet, but at least a few of the lug nuts might be starting to loosen up a bit.  It has been pretty clear from the outset that Donnie pater not only knew about the Trump Tower meeting before it happened, but was eagerly anticipating its results.  The Trumps have been digging themselves into a pretty deep hole by insisting otherwise,

,

TEDWhy TED takes two weeks off every summer

TED.com is about to go quiet for two weeks. No new TED Talks will be posted on the website until Monday, August 13, 2018, while most of the TED staff takes our annual two-week summer holiday.

Yes, we all, or almost all, go on holiday at the same time. (No, we don’t all go to the same place.)

We’ve been doing it this way now for almost a decade. Our summer break is a little lifehack that solves the problem of a digital media and events company in perpetual-startup mode, where something new is always going on and everyone has raging FOMO. We avoid the fear of missing out on emails and new projects and blah blah blah … by making sure that nothing is going on.

I love how the inventor of this holiday, TED’s founding head of media June Cohen, once explained it: “When you have a team of passionate, dedicated overachievers, you don’t need to push them to work harder, you need to help them rest. By taking the same two weeks off, it makes sure everyone takes vacation,” she said. “Planning a vacation is hard — most of us still feel a little guilty to take two weeks off, and we’d be likely to cancel when something inevitably comes up. This creates an enforced rest period, which is so important for productivity and happiness.”

Bonus: “It’s efficient,” she said. “In most companies, people stagger their vacations through the summer. But this means you can never quite get things done all summer long. You never have all the right people in the room.” Instead, for two weeks — almost no one is.

So, as the bartender said: You don’t have to go home, but you can’t stay here. We won’t post new TED Talks on the website for the next two weeks. (Though we’ll keep serving up great recommendations for talks you already love or might have missed across all our platforms.) The office is more than three-quarters empty. And we stay off email. The whole point is that vacation time should be truly restful, and we should be able to recharge without having to worry about what we’re missing back at the office.

See you on Monday, August 13!

Note: This piece was first posted on July 17, 2014. It was updated on July 27, 2015, again on July 20, 2016, and again on June 23, 2017, and yet again on July 27, 2018.

CryptogramFriday Squid Blogging: Squid Deception

This is a fantastic video of a squid attracting prey with a tentacle that looks like a smaller squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNew Report on Police Digital Forensics Techniques

According to a new CSIS report, "going dark" is not the most pressing problem facing law enforcement in the age of digital data:

Over the past year, we conducted a series of interviews with federal, state, and local law enforcement officials, attorneys, service providers, and civil society groups. We also commissioned a survey of law enforcement officers from across the country to better understand the full range of difficulties they are facing in accessing and using digital evidence in their cases. Survey results indicate that accessing data from service providers -- much of which is not encrypted -- is the biggest problem that law enforcement currently faces in leveraging digital evidence.

This is a problem that has not received adequate attention or resources to date. An array of federal and state training centers, crime labs, and other efforts have arisen to help fill the gaps, but they are able to fill only a fraction of the need. And there is no central entity responsible for monitoring these efforts, taking stock of the demand, and providing the assistance needed. The key federal entity with an explicit mission to assist state and local law enforcement with their digital evidence needs -- the National Domestic Communications Assistance Center (NDCAC) -- has a budget of $11.4 million, spread among several different programs designed to distribute knowledge about service providers' policies and products, develop and share technical tools, and train law enforcement on new services and technologies, among other initiatives.

From a news article:

In addition to bemoaning the lack of guidance and help from tech companies -- a quarter of survey respondents said their top issue was convincing companies to hand over suspects' data -- law enforcement officials also reported receiving barely any digital evidence training. Local police said they'd received only 10 hours of training in the past 12 months; state police received 13 and federal officials received 16. A plurality of respondents said they only received annual training. Only 16 percent said their organizations scheduled training sessions at least twice per year.

This is a point that Susan Landau has repeatedly made, and also one I make in my new book. The FBI needs technical expertise, not backdoors.

Here's the report.

Krebs on SecurityState Govts. Warned of Malware-Laden CD Sent Via Snail Mail from China

Here’s a timely reminder that email isn’t the only vector for phishing attacks: Several U.S. state and local government agencies have reported receiving strange letters via snail mail that include malware-laden compact discs (CDs) apparently sent from China, KrebsOnSecurity has learned.

This particular ruse, while crude and simplistic, preys on the curiosity of recipients who may be enticed into popping the CD into a computer. According to a non-public alert shared with state and local government agencies by the Multi-State Information Sharing and Analysis Center (MS-ISAC), the scam arrives in a Chinese postmarked envelope and includes a “confusingly worded typed letter with occasional Chinese characters.”

Several U.S. state and local government agencies have reported receiving this letter, which includes a malware-laden CD. Images copyright Sarah Barsness.

The MS-ISAC said preliminary analysis of the CDs indicates they contain Mandarin-language Microsoft Word (.doc) files, some of which include malicious Visual Basic scripts. So far, State Archives, State Historical Societies, and a State Department of Cultural Affairs have all received letters addressed specifically to them, the MS-ISAC says. It’s not clear if anyone at these agencies was tricked into actually inserting the CD into a government computer.

I’m sure many readers could think of clever ways that this apparent mail-based phishing campaign could be made more effective or believable, such as including tiny USB drives instead of CDs, or at least a more personalized letter that doesn’t look like it was crafted by someone without a mastery of the English language.

Nevertheless, attacks like this are a reminder that cybercrime can take many forms. The first of Krebs’s 3 Basic Rules for Online Safety — “If you didn’t go looking for it don’t install it” — applies just as well here: If you didn’t go looking for it, don’t insert it or open it.

Worse Than FailureError'd: When BSODs Pile Up

"I suppose the ropes were there to keep the infection from spreading to other screens," Stan I. wrote.

 

"I was visiting the ESA Shop and, well, though I'm not actually in the UK, it looks like I'll be shopping there!" wrote Ben S.

 

Marcin K. writes, "I spotted this one at a Warsaw subway station and, I'm pleased to say, that this billboard was the only thing that crashed."

 

"Life is full of important, deep questions. Ubuntu 18.04 seems to feel the same way during install when I tried to create an encrypted volume," writes Bernard M.

 

Brad W. wrote, "Sometimes it's easy to see how test artifacts slip into production, other times it's not."

 

"This is what happens when you are looking for answers on CodeProject, but you find only more questions," writes David.

 


Planet Linux AustraliaFrancois Marier: Recovering from a botched hg histedit on a mercurial bookmark

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to reach for hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.
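
For the abort case, the full sequence might look like this (a minimal sketch; the second command cleanly discards whatever the aborted histedit left behind in the working directory):

hg histedit --abort   # abandon the in-progress histedit
hg update -C .        # discard any leftover working-copy changes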

However, if you go ahead and finish the history rewriting and only notice problems later, it's not as straightforward. With git, I'd look in the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine, collecting all of the commits using the browser's back button.

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).
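
In hindsight, a simpler way to record those hashes before attempting a risky histedit might be a revset query (a sketch; it assumes the bookmark still points at the head you are about to rewrite):

hg log -r "only(hashstore-crash-1434206)" -T "{node|short}\n" > ~/commits-to-restore.txt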

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

# export one patch file per commit
for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do hg export $c > ~/$c.patch ; done
# update to the base revision and create a fresh bookmark there
hg up ff8505d177b9
hg bookmarks hashstore-crash-1434206-new
# apply the patches in order on top of the new bookmark
for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do hg import ~/$c.patch ; done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

hg up hashstore-crash-1434206
hg bookmarks hashstore-crash-1434206-copy
hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way, if worse comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.
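
That backup push might look something like this (the destination URL is a placeholder):

hg push -B hashstore-crash-1434206 ssh://backup-host//path/to/backup-repo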

Planet Linux AustraliaIan Wienand: Local qemu/kvm virtual machines, 2018

For work I run a personal and a work VM on my laptop. When I was at VMware I dogfooded internal builds of Workstation, which worked well, but keeping its guest additions building against the latest kernels was always a challenge. About five and a half years ago, the only practical alternative was VirtualBox. IIRC SPICE maybe didn't even exist or was very early, and while VNC is OK for fiddling with something, it's completely impractical for primary daily use.

VirtualBox is fine, but there is the promised land of all the great features of qemu/kvm and many recent improvements in 3D integration always calling. I'm trying all this on my Fedora 28 host, with a Fedora 28 guest (which has been in-place upgraded since Fedora 19), so everything is pretty recent. Periodically I try this conversion again, but, spoiler alert, have not yet managed to get things quite right.

Then, as I happened to close an IRC window, my client somehow seemed to crash X11. How odd ... but with everything gone anyway, I figured I might as well try switching again.

Image conversion has become much easier. My primary VM has a number of snapshots, so I used the VirtualBox GUI to clone the VM and followed the prompts to create the clone with squashed snapshots. Then simply convert the VDI to a RAW image with

$ qemu-img convert -p -f vdi -O raw image.vdi image.raw

Note that if you forget the progress meter, you can send the pid a SIGUSR1 to get it to spit out its progress.
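
For example, from another terminal (assuming a single qemu-img process is running):

kill -USR1 $(pidof qemu-img)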

virt-manager has come a long way too. Creating a new VM was trivial. I wanted to make sure I was using all the latest SPICE, GL, etc. stuff. Here I hit some problems with what seemed to be permission denials on DRM devices before even getting the machine started. Something suggested using libvirt in session mode, with the qemu:///session URL -- which seemed more like what I wanted anyway (a VM for only my user). I tried that, put the converted raw image in my home directory, and the VM would boot. Yay!
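
One quick way to confirm you are talking to the per-user session daemon rather than the system one (the VM list shown will be whatever you have created):

virsh --connect qemu:///session list --all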

It was a bit much to expect it to work straight away; while GRUB did start, it couldn't find the root disks. In hindsight, you should probably generate a non-host specific initramfs before converting the disk, so that it has a larger selection of drivers to find the boot devices (especially the modern virtio drivers). On Fedora that would be something like

sudo dracut --no-hostonly --regenerate-all -f

As it turned out, I "simply" attached a live-cd and booted into that, then chrooted into my old VM and regenerated the initramfs for the latest kernel manually. After this the system could find the LVM volumes in the image and would boot.
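
That detour looked roughly like the following (a sketch only; the volume-group and partition names here are illustrative and will vary by install):

sudo vgchange -ay                            # activate the guest's LVM volumes
sudo mount /dev/mapper/fedora-root /mnt      # mount the root LV
sudo mount /dev/vda1 /mnt/boot               # hypothetical boot partition
for d in dev proc sys ; do sudo mount --bind /$d /mnt/$d ; done
sudo chroot /mnt dracut --regenerate-all -f  # rebuild the initramfs in place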

After a fiddly start, I was hopeful. The guest kernel dmesg DRM sections showed everything was looking good for 3D support, and glxinfo showed all the virtio-gpu stuff looking correct. However, I could not get what I hoped would be trivial automatic window resizing to happen, no matter what. After a bunch of searching and ensuring my agents were running correctly, it turns out this has to be implemented by the window manager now, and it is not supported by my preferred XFCE (see https://bugzilla.redhat.com/show_bug.cgi?id=1290586). Note you can do this manually with xrandr --output Virtual-1 --auto to get it to resize, but that's rather annoying.

I thought that it is 2018 and I could live with GNOME, so I installed that. Then I tried to ping something and got another SELinux denial (on the host) from qemu-system-x86 creating icmp_socket. I am guessing this has to do with the interaction between libvirt session mode and the usermode networking device (filed https://bugzilla.redhat.com/show_bug.cgi?id=1609142). I figured I'd limp along without ICMP and look into the details later...

Finally when I moved the window to my portrait-mode external monitor, the SPICE window expanded but the internal VM resolution would not expand to the full height. It looked like it was taking the height from the portrait-orientation width.

Unfortunately, a forced swap of desktop environments, plus two or three non-trivial bugs still to investigate, exceeded my practical time to fiddle around with all this. I'll stick with VirtualBox for a little longer; 2020 might be the year!

,

TEDTEDx talk under review

An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer titled: “Why our perception of pedophilia has to change.”

After reviewing the talk, we believe it cites research in ways that are open to serious misinterpretation. This led some viewers to interpret the talk as an argument in favor of an illegal and harmful practice. TED would like to make clear that it does not promote pedophilia.

In the TEDx talk, a speaker described pedophilia as a condition some people are born with, and suggested that if we recognize it as such, we can do more to prevent those people from acting on their instincts and harming children. This field of science is developing, and the definition of the condition is just one of many points that are in debate across the global scientific community (and even in standard reference works).

While there is much debate around this field, scholars in the field are united in their wish to keep children from coming to harm — as the speaker makes clear in her own talk, saying “Let me be very clear here. Abusing children is wrong without any doubt.”

TEDx events are organized independently from the main annual TED conference, with some 3,500 events held every year in more than 100 countries. Our nonprofit TED organization does not control TEDx events’ content.

After contacting the local TEDx organizer to understand why the talk had been taken down, we learned that the speaker herself requested it be removed from the internet because she had concerns about her own safety.

Our policy is and always has been to remove speakers’ talks when they request we do so. That is why we support this TEDx organizer’s decision to respect this speaker’s wishes and keep the talk offline.

Original, posted June 19, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer had titled: “Why our perception of pedophilia has to change.”
We were not aware of this organizer’s actions, but understand now that their decision to remove the talk was at the speaker’s request for her safety.
In our review of the talk in question, we at TED believe it cites research open for serious misinterpretation.
TED does not support or advocate for pedophilia.
We are now reviewing the talk to determine how to move forward.
Until we can review this talk for potential harm to viewers, we are taking down any illegal copies of the talk posted on the Internet.  

Sociological ImagesCreepy Videos Show Routines Running Wild

People talk. Their interactions become habits, habits become routines, and routines become rules. Sociologists call this emergent behavior, and sometimes it happens so slowly we don’t even notice it until we look back and think “where did that come from?”  Emergent behavior can be quirky and fun (think of Taco Friday at the office or “on Wednesdays we wear pink“), but sometimes it can also be far more serious or more troubling.

The challenge is that new technology makes these interactions happen much faster, on a much larger scale, and with less editing—often with odd results. Check out this TED talk—The Nightmare Videos of Children’s YouTube—for a good illustration of the dark side of emergent behavior when algorithms accelerate and exploit social interactions online.

 

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramGoogle Employees Use a Physical Token as Their Second Authentication Factor

Krebs on Security is reporting that all 85,000 Google employees use two-factor authentication with a physical token.

A Google spokesperson said Security Keys now form the basis of all account access at Google.

"We have had no reported or confirmed account takeovers since implementing security keys at Google," the spokesperson said. "Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time."

Now Google is selling that security to its users:

On Wednesday, the company announced its new Titan security key, a device that protects your accounts by restricting two-factor authentication to the physical world. It's available as a USB stick and in a Bluetooth variation, and like similar products by Yubico and Feitian, it utilizes the protocol approved by the FIDO alliance. That means it'll be compatible with pretty much any service that enables users to turn on Universal 2nd Factor Authentication (U2F).