Planet Russell

Planet DebianMolly de Blanc: Thinkers

Free and open source software, ethical technology, and digital autonomy have a number of great thinkers, inspiring leaders, and hard working organizations. I see two discussions occurring now that I feel the need to address: What will we do next? Who will our new great leader be?

The thing is, we don’t need to do something new next, and we don’t need to find a new leader.

Organizations and individuals have been doing amazing work in our sphere for more than thirty years. We only need to look at the works of groups like Public Labs, OpenStreetMap, and Wikimedia to see where the future of our work lies: applying the principles of user freedom to create demonstrable change, build equity, and fight for justice. I am positively inspired by the GNOME community and their dedication to building software for people in every country, of every ability, and of every need. Outreachy and projects and companies that participate in Outreachy internships are working hard to build the future of community that we want to see.

Deb Nicholson recently reminded me that we cannot build a principled future where people are excluded from the process of building it. She also pointed out that once we have a techno-utopia, it will include everyone, because it needs to. This utopia is built on ideas, but it is also built by plumbers — by people doing work on the ground with those ideas.

Deb Nicholson is another inspiration to me. I’ve been lucky enough to know her since 2010, when she graciously began to mentor me. I now consider her both a mentor and a dear friend. Her ideas are innovative, her principles hard, and her vision wide.

Deb is one of the many people who have helped, and continue to help, shape my ideas and teach me things. Allison Randall, Asheesh Laroia, Christopher Lemmer-Webber, Daniel Khan Gilmore, Elana Hashman, Gabriella Coleman, Jeffrey Warren, Karen Sandler, Karl Fogel, Stefano Zacchiroli — these are just a few of the individuals who have been necessary figures in my life.

We don’t need to find new leaders and thinkers because they’re already here. They’ve been here, thinking, writing, speaking, and doing for years.

What we need to do is listen to their voices.

As I see people begin to discuss the next president of the Free Software Foundation, they do so in a context of asking who will be leading the free software movement. The free software movement is more than the FSF and it’s more than any given individual. We don’t need to go in search of the next leader, because there are leaders who work every day not just for our digital rights, but for a better world. We don’t need to define a movement by one man, nor should we do so. We instead need to look around us and listen to what is already happening.

Worse Than FailureA Learning Experience

Jakob M. had the great pleasure of working as a System Administrator in a German school district. At times it was rewarding work. Most of the time it involved replacing keyboard keys mischievous children stole and scraping gum off of monitor screens. It wasn't always the students that gave him trouble though.

Frau Fritzenberger was a cranky old math teacher at a Hauptschule near Frankfurt. Jakob regularly had to answer support calls she made for completely frivolous things. Having been teaching since before computers were a thing, she put up a fight for every new technology or program Jakob's department wanted to implement.

Over the previous summer, a web-based grading system called NotenWertung was rolled out across the district's network. It would allow teachers to grade homework and post the scores online. They could work from anywhere, with any computer. There was even a limited mobile application. Students and parents would then get a notification and could see the scores instantly. Frau Fritzenberger was predictably not impressed.

She threw a fit on the first day of school and Jakob was dispatched to defuse it. "Why do we need computers for grading?!" she screeched at Jakob. "Paper works just fine like it has for decades! How else can I use blood red pen to shame them for everything they get wrong!"

"I understand your concern, Frau Fritzenberger," Jakob replied while making a 'calm down' gesture with his arms. "But we can't have you submitting grades on paper when the entire rest of the district is using NotenWertung." He had her sit down at the computer and gave her a For Dummies-type walkthrough. "There, it's easier than you think. You can even do this at night from the comfort of your own home," he assured her before getting up to leave.

Just as he was exiting the classroom, he heard her shout, "If you were my student, I would smack you with my ruler!" Jakob brushed it off and left to answer a call about paper clips jammed in a PC fan.

The next morning, Jakob got a rare direct call on his desk phone. It was Frau and she was in a rage. All he could make out between strings of aged German cuss words was "computer is broken!" He hung up and prepared to head to Frau's Hauptschule.

Jakob expected to find that Frau didn't have a network connection, misplaced the shortcut to her browser, didn't realize the monitor was off, or something stupid like that. What he found was Frau's computer was literally broken. The LCD screen of her monitor was an elaborate spider web, her keyboard was cracked in half, and the PC tower looked like it had been run over on the Autobahn. Bits of the motherboard dangled outside the case, and the HDD swung from its cable. "Frau Fritzenberger... what in the name of God happened here?!"

"I told you the computer was broken!" Frau shouted while meanly pointing her crooked index finger at Jakob. "You told me I have to do grades on the computer. So I packed it up to take home on my scooter. It was too heavy for me to ride with it on back so I wiped out and it smashed all over the road! This is all your fault!"

Jakob stared on in disbelief at the mangled hunks of metal and plastic. Apparently you can teach an old teacher new tricks but you can't teach her that the same web application can be accessed from any computer.

TEDTransform: The talks of TED@DuPont

Hosts Briar Goldberg and David Biello open TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Transformation starts with the spark of something new. In a day of talks and performances about transformation, 16 speakers and performers explored exciting developments in science, technology and beyond — from the chemistry of everyday life to innovations in food, “smart” clothing, enzyme research and much more.

The event: TED@DuPont: Transform, hosted by TED’s David Biello and Briar Goldberg

When and where: Thursday, September 12, 2019, at The Fillmore in Philadelphia, PA

Music: Performances by Elliah Heifetz and Jane Bruce and Jeff Taylor, Matt Johnson and Jesske Hume

The talks in brief:

“The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them,” says chemist Cathy Mulzer. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Cathy Mulzer, chemist and tech shrinker

Big idea: You owe a big thank you to chemistry for all that technology in your pocket.

Why? Almost every component that goes into creating a superpowered device like a smartphone or tablet exists because of a chemist — not the Silicon Valley entrepreneurs that come to most people’s minds. Chemistry is the real hero in our technological lives, Mulzer says — building up and shrinking down everything from vivid display screens and sleek bodies to nano-sized circuitries and long-lasting batteries.

Quote of the talk: “The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them.”


Adam Garske, enzyme engineer

Big Idea: We can harness the power of new, scientifically modified enzymes to solve urgent problems across the world.

How? Enzymes are proteins that catalyze chemical reactions — they’re what turns milk into cheese, for example. Through a process called “directed evolution,” scientists can carefully edit and design the building blocks of enzymes for specific functions — to help treat diseases like diabetes, for instance, reduce CO2 in our laundry and even break down plastics in the ocean. Enzyme evolution is already changing how we tackle health and environmental issues — and there’s so much more ahead.

Quote of the talk: “With enzymes, we can edit what nature wrote — or write our own stories.”


Henna-Maria Uusitupa, bioscientist

Big idea: Our bodies host an entire ecosystem of microorganisms that we’ve been cultivating since we were babies. And as it turns out, the bacteria we acquire as infants help keep us healthier as adults. Henna-Maria Uusitupa wants to ensure that every baby grows a healthy microbiome.

How? Babies must acquire the right balance of microbes in their bodies, but they must also receive them at the correct stages of their lives. C-sections and disruptions in breastfeeding can throw a baby’s microbiome out of balance. With a carefully curated blend of probiotics and other chemicals, scientists are devising ways to restore harmony — and beneficial microbes — to young bodies.

Quote of the talk: “I want to contribute to the unfolding of a future in which each baby has an equal starting point to be programmed for life-long health.”


Leon Marchal, innovation director 

Big Idea: Animals account for 50 to 80 percent of antibiotic consumption worldwide — a major contributing factor to the growing threat of antimicrobial resistance. To combat this, farmers can adopt microbiome-nourishing practices like balanced, antibiotic-free nutrition on their farms.

Why? The UN predicts that antimicrobial resistance will become our biggest killer by 2050. To prevent that from happening, Marchal is working to transform a massive global industry: animal feed. Antibiotics are used in animal feed to keep animals healthy and to grow them faster and bigger. They can be found in the most unlikely places — like the treats we give our pets. This constant, low-dose exposure could lead some animals to develop antibiotic-resistant bugs, which could cause wide-ranging health problems for animals and humans alike. The solution? Antibiotic-free production — and it all starts with better hygiene. This means taking care of animals' good bacteria with balanced nutrition and alterations to the food they eat, to keep their microbiomes more resilient.

Quote of the talk: “We have the knowledge on how to produce meat, eggs and milk without or with very low amounts of antibiotics. This is a small price to pay to avoid a future in which bacterial infections again become our biggest killer.”


Physical organic chemist Tina Arrowood shares a simple, eco-friendly proposal to protect our freshwater resources from future pollution. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Tina Arrowood, physical organic chemist

Big idea: Human activity is a threat to freshwater rivers. We can transform that risk into an environmental and economic reward.

How? A simple, eco-friendly proposal to protect our precious freshwater resources from future pollution. We’ve had technology that purifies industrial wastewaters for the last 50 years. Arrowood suggests that we go a step further: as we clean our rivers, we can sell the salt byproduct as a primary resource — to de-ice roads and for other chemical processing — rather than using the tons of salt we currently mine from the earth.

Fun fact: If you were to compare the relative volume of ocean water to fresh river water on our planet, the former would be an Olympic-sized swimming pool — and the latter would be a one-gallon jug.


“Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?” asks designer Janani Bhaskar. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Janani Bhaskar, smart clothing designer

Big Idea: By designing “smart” clothing with durable technologies, we can better keep track of health and well-being.

How? Using screen-printing technology, we can design and attach biometric “smart stickers” to any piece of clothing. These stickers are super durable, Bhaskar says: they can withstand anything our clothing can, including workouts and laundry. They’re customizable, too — athletes can use them to track blood pressure and heart rate, healthcare providers can use them to remotely monitor vital signs, and expecting parents can use them to receive information about their baby’s growth. By making sure this technology is affordable and accessible, our clothing — the “original wearables” — can help all of us better understand our bodies and our health.

Quote of the talk: “Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?”


Camilla Andersen, neuroscientist and food scientist

Big idea: We can create tastier, healthier foods with insights from people’s brain activity.

How? Our conscious experience of food — how much we enjoy a cup of coffee or how sweet we find a cookie to be — is heavily influenced by hidden biases. Andersen provides an example: after her husband started buying a fancy coffee brand, she conducted a blind taste test with two cups of coffee. Her husband described the first cup as cheap and bitter, and raved about the second — only to find out that the two were actually the same kind of coffee. The taste difference was the result of his bias for the new, fancy coffee — the very kind of bias that can leave food scientists in the dark when testing out new products. But there’s a workaround: brain scans can access the raw, unfiltered, unconscious taste information that’s often lost in people’s conscious assessments. With this kind of information, Andersen says, we can create healthier foods without sacrificing taste — like creating a zero-calorie milkshake that tastes just like the original.

Fun fact: The five basic tastes are universally accepted: sweet, salty, sour, bitter and umami. But Andersen's EEG experiments provide evidence of a sixth basic taste: fat, which we may sense beyond its smell and texture.


“Science is an integral part of our everyday lives, and I think we’re only at the tip of the iceberg in terms of harnessing all of the knowledge we have to create a better world,” says enzyme scientist Vicky Huang. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Vicky Huang, enzyme scientist

Big idea: Enzymes are unfamiliar to many of us, but they’re far more important in our day-to-day lives than we realize — and they might help us unlock eco-friendly solutions to everything from food spoilage to household cleaning problems. 

How? We were all taught in high school that enzymes are a critical part of digestion and, because of that, they're also ideal for household cleaning. But enzymes can do much more than remove stains from our clothes, break down burnt-on food in our dishwashers and keep our baguettes soft. As scientists engineer better enzymes, we'll be able to cook and clean with less energy, less waste and less cost to our environment.

Quote of the talk: “Everywhere in your homes, items you use every day have had a host of engineers and scientists like me working on them and improving them. Just one part of this everyday science is using enzymes to make things more effective, convenient, and environmentally sustainable.”


Geert van der Kraan, microbe detective

Big Idea: We can use microbial life in oil fields to make oil production safer and cleaner.

How? Microbial life is often a problem in oil fields, corroding steel pipes and tanks and producing toxic chemicals like dihydrogen sulfide. We can transform this challenge into a solution by studying the clues these microbes leave behind. By tracking the presence and activity of these microbes, we can see deep within these underground fields, helping us create safer and smoother production processes.

Quote of the talk: “There are things we can learn from the microorganisms that call oil fields their homes, making oil field operations just a little cleaner. Who knows what other secrets they may hold for us?”


Lori Gottlieb, psychotherapist and author

Big idea: The stories we tell about our lives shape who we become. By editing our stories, we can transform our lives for the better.

How? When the stories we tell ourselves are incomplete, misleading or just plain wrong, we can get stuck. Think of a story you’re telling about your life that’s not serving you — maybe that everyone’s life is better than yours, that you’re an impostor, that you can’t trust people, that life would be better if only a certain someone would change. Try exploring this story from another point of view, or asking a friend if there’s an aspect of the story you might be leaving out. Rather than clinging to an old story that isn’t doing us any good, Gottlieb says, we can work to write the most beautiful story we can imagine, full of hard truths that lead to compassion and redemption — our own “personal Pulitzer Prize.” We get to choose what goes on the page in our minds that shapes our realities. So get out there and write your masterpiece.

Quote of the talk: “We talk a lot in our culture about ‘getting to know ourselves,’ but part of getting to know yourself is to unknow yourself: to let go of the one version of the story you’ve told yourself about who you are — so you can live your life, and not the story you’ve been telling yourself about your life.”


“I’m standing here before you because I have a vision for the future: one where technology keeps my daughter safe,” says tech evangelist Andrew Ho. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Andrew Ho, tech evangelist

Big idea: As technological devices become smaller, faster and cheaper, they make daily tasks more convenient. But they can also save lives.

How? For epilepsy patients like Andrew Ho’s daughter Hilarie, a typical day can bring dangerous — or even fatal — challenges. Medical devices currently under development could reduce the risk of seizures, but they’re bulky and fraught with risk. The more quickly developers can improve the speed and portability of these devices (and other medical technologies), the sooner we can help people with previously unmanageable diseases live normal lives.

Quote of the talk: “Advances in technology are making it possible for people with different kinds of challenges and problems to lead normal lives. No longer will they feel isolated and marginalized. No longer will they live in the shadows, afraid, ashamed, humiliated, and excluded. And when that happens, our world will be a much more diverse and inclusive place, a better place for all of us to live.”


“Learning from our mistakes is essential to improvement in many areas of our lives, so why not be intentional about it in our most risk-filled activity?” asks engineer Ed Paxton. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Ed Paxton, aircraft engineer and safety expert

Big idea: Many people fear flying but think nothing of driving their cars every day. Statistically, driving is far more dangerous than flying — in part because of common-sense principles pilots use to govern their behavior. Could these principles help us be safer on the road?

How? There’s a lot of talk about how autonomous vehicles will make traffic safer in the future. Ed Paxton shares three principles that can reduce accidents right now: “positive paranoia” (anticipating possible hazards or mishaps without anxiety), allowing feedback from passengers who might see things you don’t and learning from your mistakes (near-misses caused by driving while tired, for example).

Quote of the talk: “Driving your car is probably the most dangerous activity that most of you do … it’s almost certain you know someone who’s been seriously injured or lost their life out on the road … Over the last ten years, seven billion people have boarded domestic airline flights, and there’s been just one fatality.”


Jennifer Vail, tribologist

Big idea: Complex systems lose much of their energy to friction; the more energy they lose, the more power we consume to keep them running. Tribology — or the study of friction and things that rub together — could unlock massive energy savings by reducing wear and alleviating friction in cars, wind turbines, motors and engines.

How? By studying the different ways surfaces rub together, and engineering those surfaces to create more or less friction, tribologists can tweak a surprising range of physical products, from dog food that cleans your pet’s teeth to cars that use less gas; from food that feels more appetizing in our mouth to fossil fuel turbines that waste less power. Some of these changes could have significant impacts on how much energy we consume.

Quote of the talk: “I have to admit that it’s a lot of fun when people ask me what I do for my job, because I tell them: ‘I literally rub things together.'”

Planet DebianNeil McGovern: GNOME relationship with GNU and the FSF

On Saturday, I wrote an email to the FSF asking them to cancel my membership. Other people who I greatly respect are doing the same. This came after the president of the FSF made some pretty reprehensible remarks saying that the “most plausible scenario is that [one of Epstein’s underage victims] presented themselves as entirely willing” while being trafficked. This isn’t the only incident, but it is the straw that broke the camel’s back.

In my capacity as the Executive Director of the GNOME Foundation, I have also written to the FSF. One of the most important parts of my role is to think of the well being of our community and the GNOME mission. One of the GNOME Foundation’s strategic goals is to be an exemplary community in terms of diversity and inclusion. I feel we can’t continue to have a formal association with the FSF or the GNU project when its main voice in the world is saying things that hurt this aim.

I greatly admire the work of FSF staffers and volunteers, but have now reached the point of concluding that the greatest service to the mission of software freedom is for Richard to step down from FSF and GNU and let others continue in his stead. Should this not happen in a timely manner, then I believe that severing the historical ties between GNOME, GNU and the FSF is the only path forward.

Edit: I’ve also cross-posted this to the GNOME discourse instance.

Planet DebianSven Hoexter: ansible scp_if_ssh: smart debugging

I guess this is just one of those things you have to know, so maybe it helps someone else.

We saw some warnings in our playbook rollouts like

[WARNING]: sftp transfer mechanism failed on [192.168.23.42]. Use
ANSIBLE_DEBUG=1 to see detailed information

The warnings were actually reported for both sftp and scp usage. If you look at the debug output it's not very helpful for the average user, and the same is true if you go to verbose mode with -vvv. The latter at least helps you see the parameters passed to sftp and scp, but you still see no error message. But if you set

scp_if_ssh: True

or

scp_if_ssh: False

you will suddenly see the real error message

fatal: [docker-023]: FAILED! => {"msg": "failed to transfer file to /home/sven/testme.txt /home/sven/
.ansible/tmp/ansible-tmp-1568643306.1439135-27483534812631/source:\n\nunknown option -- A\r\nusage: scp [-346BCpqrv]
[-c cipher] [-F ssh_config] [-i identity_file]\n           [-l limit] [-o ssh_option] [-P port] [-S program] source
... target\n"}

Lesson learned: as long as ansible is running in "smart" mode it will hide all error messages from the user. Now we could figure out that the culprit is the -A for agent forwarding, which is for obvious reasons not available in sftp and scp. One can move it to ansible_ssh_extra_args in group_vars, which is only applied to the ssh binary. The best documentation regarding this, besides the --help output, seems to be the commit message of 3ad9b4cba62707777c3a144677e12ccd913c79a8.
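
For illustration, here is a minimal sketch of that move; the file layout and group name are hypothetical, and the ControlMaster settings just mirror common defaults:

# ansible.cfg -- ssh_args is used for ssh, sftp and scp alike,
# so agent forwarding must not live here
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

# group_vars/all.yml -- extra args set here are only passed to
# the ssh binary, never to sftp or scp
ansible_ssh_extra_args: "-A"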

CryptogramAnother Side Channel in Intel Chips

Not that serious, but interesting:

In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU's last-level cache, rather than following the standard (and significantly longer) path through the server's main memory. By avoiding system memory, Intel's DDIO, short for Data-Direct I/O, increased input/output bandwidth and reduced latency and power consumption.

Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into the protected SSH (or secure shell session) established between another server and an application server.

Worse Than FailureCodeSOD: Should I Do this? Depends.

One of the key differences between a true WTF and an ugly hack is a degree of self-awareness. It's not a WTF if you know it's a WTF. If you've been doing this job for a non-zero amount of time, you have had a moment where you have a solution, and you know it's wrong, you know you shouldn't do this, but by the gods, it works and you've got more important stuff to worry about right now, so you just do it.

An anonymous submitter committed a sin, and has reached out to us for absolution.

This is a case of "DevOps" hackery. They have one remote server with no Internet access. Deploying software to a server you can't access physically or through the Internet is a challenge. They have a solution involving hopping through some other servers and bridging the network that lets them get the .deb package files within reach of the destination server.

But that introduces a new problem: these packages have complex dependency chains and unless they're installed in the right order, it won't work. The correct solution would be to install a local package repository on the destination server, and let apt worry about resolving those dependencies.
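
For illustration, a minimal sketch of that approach, assuming the dpkg-dev tools are present on the destination server; the directory and package names here are hypothetical:

#!/bin/bash
# Turn the directory of copied .deb files into a flat local
# repository, then let apt work out the dependency order.
cd /opt/local-debs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo "deb [trusted=yes] file:/opt/local-debs ./" | sudo tee /etc/apt/sources.list.d/local-debs.list
sudo apt-get update
sudo apt-get install some-package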

And in the long run, that's what our anonymous submitter promises to do. But they found themselves in a situation where they had more important things to worry about, and just needed to do it.

#!/bin/bash
count=0
for f in ./*.deb
do
    echo "Attempt $count"
    for file in ./*.deb
    do
        echo "Installing $file"
        sudo dpkg -i $file
    done
    (( count++ ))
done

This is a solution to dependency management which operates in O(N^2): every package gets an install attempt once for each package in the folder. It's the brutest of force solutions, and no matter what our dependency chain looks like, by sheer process of elimination, this will eventually get every package installed. Eventually.
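
As an aside, even a single dpkg invocation gets most of the way there, because dpkg unpacks everything first and configures afterwards. A sketch of that lighter fallback; it can still trip over Pre-Depends within the set, which is why the local repository remains the proper fix:

#!/bin/bash
# Unpack all packages in one pass, then configure everything
# that is unpacked, in dependency order.
sudo dpkg --unpack ./*.deb
sudo dpkg --configure -a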

Planet DebianSam Hartman: Free as in Sausage Making: Inside the Debian Project

Recently, we’ve been having some discussion around the use of non-free software and services in doing our Debian work. In judging consensus surrounding a discussion of Git packaging, I said that we do not have a consensus to forbid the use of non-free services like Github. I stand behind that consensus call. Ian Jackson, who initially thought that I had misread the consensus, later agreed with my call.


I have been debating whether it would be wise for me as project leader to say more on the issue. Ultimately I have decided to share my thoughts. Yes, some of this is my personal opinion. Yet I think my thoughts resonate with things said on the mailing list; by sharing my thoughts I may help facilitate the discussion.


We are bound together by the Social Contract. Anyone is welcome to contribute to Debian so long as they follow the Social Contract, the DFSG, and the rest of our community standards. The Social Contract talks about what we will build (a free operating system called Debian). Besides SC #3 (we will not hide problems), the contract says very little about how we will build Debian.


What matters is what you do, not what you believe. You don’t even need to believe in free software to be part of Debian, so long as you’re busy writing or contributing to free software. Whether it’s because you believe in user freedom or because your large company has chosen Debian for entirely pragmatic reasons, your free software contributions are welcome.


I think that is one of our core strengths. We’re an incredibly diverse community. When we try to tie something else to what it means to be Debian beyond the quality of that free operating system we produce, judged by how it meets the needs of our users, we risk diminishing Debian. Our diversity serves the free software community well. We have always balanced pragmatic concerns against freedom. We didn’t ignore binary blobs and non-free firmware in the kernel, but we took the time to make sure we balanced our users’ needs for functional systems against their needs for freedom. By being so diverse, we have helped build a product that is useful both to people who care about freedom and other issues. Debian has been pragmatic enough that our product is wildly popular. We care enough about freedom and do the hard work of finding workable solutions that many issues of software freedom have become mainstream concerns with viable solutions.


Debian has always taken a pragmatic approach to its own infrastructure and to how Debian is developed. The Social Contract requires that the resulting operating system be 100% free software. But that has never been true of the Debian Project nor of our developers.



  • At the time the Social Contract was adopted, uploading a package to Debian involved signing it with the non-free PGP version 2.6.3. It was years later that GnuPG became commonly used.

  • Debian developers of the day didn’t use non-free tools to sign the Social Contract. They didn’t digitally sign it at all. Yet their discussions used the non-free Qmail because people running the Debian infrastructure decided that was the best solution for the project’s mailing lists.


“That was then,” you say.



  • Today, some parts of security.debian.org redirect to security-cdn.debian.org, a non-free web service.

  • Our recommended mirror (deb.debian.org) is backed by multiple non-free CDN web services.

  • Some day we may be using more non-free services. If trends in email handling continue, we may find that we need to use some non-free service to get the email we send accepted by major email providers. I know of no such plan in Debian today, but I know other organizations have faced similar choices.


Yet these choices to use non-free software and non-free services in the production of Debian have real costs. Many members of our community prefer to use free software. When we make these choices, we can make it harder for people to contribute to Debian. When we decline to use free software we may also be missing out on an opportunity to improve the free software community or to improve Debian itself. Ian eloquently describes the frustrations that those who wish to use only free software face when confronted with choices to use non-free services.


As alternatives to non-free software or services have become available, we as a project have consistently moved toward free options.


Normally, we let those doing the work within Debian choose whether non-free services or software are sufficiently better than the free alternatives that we will use them in our work. There is a strong desire to prefer free software and self-hosted infrastructure when that can meet our needs.


For individual maintainers, this generally means that you can choose the tools you want to do your Debian work. The resulting contributions to Debian must themselves be free. But if you want to go write all your Debian packaging in Visual Studio on Windows, we’re not going to stop you, although many of us will think your choices are unusual.


And my take is that if you want to store Debian packages on Github, you can do that too. But if you do that, you will be making it harder for many Debian contributors to contribute to your packages. As Ian discussed, even if you listen to the BTS, you will create two classes of contributors: those who are comfortable with your tools and those who are not. Perhaps you’ve considered this already. Perhaps you value making things easier for yourself or for interacting with an upstream community on Github over making it easier for contributors who want to use only free tools. Traditionally in Debian, we’ve decided that the people doing the work generally get to make that decision. Some day perhaps we’ll decide that all Debian packaging needs to be done in a VCS hosted on Debian infrastructure. And if we make that decision, we will almost certainly choose a free service to host. We’re not ready to make that change today.


So, what can you do if you want to use only free tools?



  • You could take Ian’s original approach and attempt to mandate project policy. Yet each time we mandate such policy, we will drive people and their contributions away. When the community as a whole evaluates such efforts we’ll need to ask ourselves whether the restriction is worth what we will lose. Sometimes it is. But unsurprisingly in my mind, Debian often finds a balance on these issues.


  • You could work to understand why people use Github or other non-free tools. As you take the time to understand and value the needs of those who use non-free services, you could ask them to understand and value your needs. If you identify gaps in what free software and services offer, work to fix those gaps.


  • Specifically in this instance, I think that setting up easy ways to bidirectionally mirror things between Github and services like Salsa could really help; a rough sketch of one direction follows below.
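
A minimal sketch of one direction of such a mirror, with hypothetical repository names and URLs:

#!/bin/bash
# Mirror a Salsa repository to GitHub: clone every ref, then
# push them all verbatim to the second remote; re-run
# periodically (e.g. from cron or CI) to keep the copy fresh.
git clone --mirror https://salsa.debian.org/someteam/somepkg.git
cd somepkg.git
git remote add github git@github.com:someuser/somepkg.git
git push --mirror github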



Conclusions



  1. We have come together to make a free operating system. Everything else is up for debate. When we shut down that debate—when we decide there is one right answer—we risk diluting our focus and diminishing ourselves.

  2. We and the entire free software community win through the Debian Project’s diversity.

  3. Freedom within the Debian Project has never been simple. Throughout our entire history we’ve used non-free bits in the sausage making, even though the result consists (and can be built from) entirely free bits.

  4. This complexity and diversity is part of what allows us to advocate for software freedom more successfully. Over time, we have replaced non-free software that we use with free alternatives, but those decisions are nuanced and ever-changing.

Planet DebianShirish Agarwal: Freedom, Chandrayaan 2 and Corporations in Space.

This will be a longish blogpost, so please excuse me, or feel free to skip it, if you do not want to read a long article.

While today is my birthday, I don’t feel at all like celebrating. When 8 million Kashmiris are locked down in Kashmir and 19 million people may be sent to detention camps (the number may yet increase), how can one feel happy? Sadly, many people disregard that illegal immigration happens everywhere. Whether it is the UK or the US, Indians too have immigrated illegally. The comments in either US or UK papers are just as toxic as what you would find on Twitter in India. Most people are uninformed about the various reasons that lead people to take a dangerous path to make a new country their home. Loyalties are also divided, because the children grow up in another culture and are then seen as ‘corrupted’, especially if women are sent back to India. The situation in India has never been as similar to Pakistan’s as it is today; see this from Najam Sethi, an internationally known left-leaning journalist:
https://www.youtube.com/watch?v=OCcrobZMy7A
and similarly you can see how investigative journalism is having a slow death in both India and the US:
https://www.youtube.com/watch?v=65P44plUCng

You can also see the similar directions the two societies are heading in from these conversations:
https://www.youtube.com/watch?v=ieWZi4gm_yE
https://www.youtube.com/watch?v=g_1oJui2Zq8

There are good moments too as can be seen here –

People going to Ganesh Visarjan and Muharram, Hyderabad.

We always say we are better than Pakistan, but we seem to be going down the same road, and that can’t be good. Forget politics; even human rights issues are not being handled sensitively by our Supreme Court. Just today there came an incident involving one of the victims of the Muzaffarpur Shelter Home allegedly being raped in a moving car. The case has been pending in the Supreme Court for quite some time, but no action has been taken so far. Journalists in Uttar Pradesh and Haryana are being booked for showing the truth. I have been trying to engage with people from across the political divide, i.e. the ones who support the BJP. The majority of them don’t have jobs, and this is the only way they can let their frustration out. The dissonance in the political messaging is such that they feel their jobs are being taken by outsiders. Ironically, Ministers get away with saying things like ‘North Indians lack qualifications’, which shows a lack of empathy on the Minister’s part. If they are citizens of the state, then it is the state’s responsibility to make sure they are skilled; if they are not skilled, then it is the Central and State Governments’ responsibility. Most of the states in the North are governed by the BJP. I could share more, but then it would all be about the BJP and nothing about the Chandrayaan 2 mission.

Chandrayaan 2 and Corporate Interests

Before we get to Chandrayaan 2, there are a few interesting series I want to talk about and share. The first one is AltBalaji’s Mission Over Mars, which is in some ways similar to Mars, the six-part docu-drama series made by National Geographic, and to lots of movies and books consumed over the years. Both of these, and other books and movies like them, show how corporate interests win out over science and exploration, whatever the motives of such initiatives were and are. The rich become richer and richer while the poor become poorer.

There has also been a lot of media speculation that ISRO should be privatized, similar to how NASA relies on private industry, with people saying that NASA’s importance has not lessened; they couldn’t have been more wrong. Take the Space Launch System. It was first thought of in 2010, after the NASA Authorization Act of 2010 came into being. When it was announced, it was said that it would be ready somewhere around 2016. Now it seems it won’t be ready until 2025. And who is responsible for this? The same company which has been responsible for a lot of bad news in the international aviation business: Boeing. The auditor’s report for NASA, while blaming NASA for poor oversight, also blames Boeing for not doing things right. And from what we have come to know, the American system of self-regulation leaves much to be desired. More so when an ex-employee of Boeing is exercising his Fifth Amendment rights, which raises the suspicion that there is more than simply an oversight issue at play. Boeing is also a weapons manufacturer, but that’s another story altogether. For people interested in the arms business, a Wired article published two years back gives enough info as to the good and the bad of American arms sales.

I know the whole thing paints a rather complex picture, but that is the nature of things. The only thing I would say is that we should be very careful in privatizing ISRO, as the same issues are bound to happen sooner or later, the more private and non-transparent things become. We, the common citizens, would never come to know if any sort of national or industrial espionage were happening, and of course all profits would go to the corporates while losses would be public, as can be seen nowadays. It doesn’t leave a good taste in the mouth.

Vikram Lander

Now, while the jury is still out as to what happened or might have happened, and we hope that Vikram does connect within the 14 days, there are lots of theories as to what could have gone wrong. While I’m no expert, I do find it hard to accept the statement that ISRO saw an image of the Chandrayaan 2 lander; at the least, not a single image has been released to the public. What ISRO has shared in its updates is that it located the lander, which doesn’t tell much. While the Chandrayaan 2 orbiter started in a 100 km lunar orbit, it probably makes some deviations to ensure that it doesn’t get caught by the Moon’s gravity and crash-land on the Moon itself. The lens which would probably have been used would be one for panoramic shots, not telescopic in nature. As to what happened, we just don’t know as of yet; there are probably a dozen or two possibilities. One of the simplest explanations, to my mind, is that a space rock crashed into the lander while it was landing. The far side of the Moon takes more impacts than the side we face, so it’s entirely possible that the lander got hit by one. From what little I have been able to learn, the lander doesn’t seem to have any AI to manoeuvre if such a scenario happens; any functioning AI would probably need more energy, and for space missions energy, weight, electrical interference and contamination are all issues that space agencies have to deal with. The other possibilities are, of course, sensor failure, a wrong calculation, or a rough landing spot that broke the antennae. Until ISRO shares more details with us, we have only conjecture to help us.

Chandrayaan 2 Imaging

While we await news about the lander, I would be curious to see the images that Chandrayaan 2 is getting. Sadly, none of the images have made it into the public domain as of yet, whether in FITS or RAW format, and in whatever spectrum (Chandrayaan 2 is going to image in a wide range of the spectrum). Like Spirit and Opportunity did for NASA, I hope ISRO shows renderings of the Moon as captured by the orbiter, even though it is lifeless, so that people, especially children, get enthused about getting into the space sciences.

Planet DebianMolly de Blanc: Free software activities (August 2019)

A photo of a beach in Greece, with blue and turquoise water and well-trodden sand. In the foreground is an inflatable unicorn with a rainbow mane.

August was really marked by traveling too much. I took the end of the month off from non-work activities in order to focus on GUADEC and GUADEC-follow up.

Personal

  • The Debian Community Team (CT) had a meeting where we discussed some of our activities, including potential new team members!
  • CT team members went on VAC, so we took a bit of a break in the second half of the month.
  • The OSI standing committee and board had meetings.
  • I handled some paperwork in my capacity as president.
  • I had regular meetings with the OSI general manager.
  • I gave a keynote at FrOSCon on “Open source citizenship for everyone!” TL;DR: We have rights and responsibilities as people participating in free software and the open source ecosystem — “we” here includes corporate actors.
  • I bought a really sweet pair of GNOME socks. Do recommend.

Professional

  • The LAS sponsorship team met and handled the creation of some important paperwork, and discussed fundraising strategy for the event.
  • I attended the GNOME Advisory Board meeting, where I got to meet and speak with the Foundation Board and the Advisory Board about activities over the past year, plans for the future, and the needs of the communities of AdBoard members. It was really educational and a lot of fun.
  • I attended my first GUADEC! It was amazing. I wrote a trip report over on the GNOME Engagement Blog.
  • At GUADEC, I spent some time helping out with basic operations, including keeping time in sessions.
  • We, the staff and board, did a Q&A at the Annual General Meeting.
  • I drank a lot of coffee. Like, a lot.

Planet DebianDirk Eddelbuettel: pinp 0.0.9: Real Fix and Polish

Another pinp package release! pinp allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette

This release comes exactly one week (i.e. the minimal time to not earn a NOTE) after the hot-fix release 0.0.8 which addressed breakage on CRAN tickled by changes in TeX Live. After updating the PNAS style LaTeX macros, and avoiding the issue with an (older) custom copy of titlesec, we now have the real fix, thanks to the eagle-eyed attention of Javier Bezos. The error, as so often, was simple and ours: we had left a stray \makeatother in pinp.cls where it may have been in hiding for a while. A very big Thank You! to Javier for spotting it, to Norbert for all his help and to James for double-checking on PNAS.

The good news in all of this is that the package is now in better shape than ever. The newer PNAS style works really well, and I went over a few of our extensions (such as papersize support for a4 as well as letter), direct on/off of a Draft watermark, a custom subtitle and more—and they all work consistently. So happy vignette or paper writing!

The NEWS entry for this release follows.

Changes in pinp version 0.0.9 (2019-09-15)

  • The processing error first addressed in release 0.0.8 is now fixed by removing one stray command; many thanks to Javier Bezos.

  • The hotfix of also installing titlesec.sty has been reverted.

  • Processing of the 'papersize' and 'watermark' options was updated.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDidier Raboud: miniDebConf19 Vaumarcus – Oct 25-27 2019 – Call for Presentations

MiniDebConf Vaumarcus 2019 - Oct 25.-27.

Talks wanted

We’re opening the Call for Presentations for the miniDebConf19 Vaumarcus now, until October 20, so please contribute to the MiniDebConf by proposing a talk, workshop, birds of a feather (BoF) session, etc., directly on the Debian wiki: /Vaumarcus/TalkSubmissions. We are aiming for talks which are somehow related to Debian or Free Software in general; see the wiki for subject suggestions. We expect submissions and talks to be held in English, as this is the working language in Debian and at this event. Registration is also still open, through the Debian wiki: Vaumarcus/Registration.

Debian Sprints are welcome

The place is ideal for a two-day sprint, so we encourage teams to assemble and gather in Vaumarcus!

More sponsors and more hands wanted

We’re looking for more sponsors willing to help make this event possible, and to help make it easier for anyone interested to attend.
Things are on a good track, but we need more help. Specifically, attendee support would benefit from more hands.

Get in touch

We gather on the #debian.ch channel on irc.debian.org and on the debian-switzerland@lists.debian.org list. For more private matters, talk to minidebconf19@debian.ch!

Thank you already!

Sponsors: ServerBase (We keep IT online)
Supporters: C+R Informatique Libre

See ya!

We’re looking forward to seeing a lot of you in Vaumarcus! (This was also sent to debian-devel-announce@l.d.o, amongst other lists)

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet DebianMatthew Garrett: It's time to talk about post-RMS Free Software

Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit

Planet DebianDirk Eddelbuettel: ttdo 0.0.3: New package

A new package of mine arrived on CRAN yesterday, having been uploaded a few days prior on the weekend. It extends the most excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam. Mark also tweeted about it.

ttdo screenshot

The package was written to address a fairly specific need. In teaching STAT 430 at Illinois, I am relying on the powerful PrairieLearn system (developed there) to provide tests, quizzes or homework. Alton and I have put together an autograder for R (which is work in progress, more on that maybe another day), and that uses this package to provide colorized differences between supplied and expected answers in case of an incorrect answer.

Now, the aspect of providing colorized diffs when tests do not evaluate to TRUE is both simple and general enough. As our approach works rather well, I decided to offer the package on CRAN as well. The small screenshot gives a simple idea, the README.md contains a larger screenshot.

The initial NEWS entries follow below.

Changes in ttdo version 0.0.3 (2019-09-08)

  • Added a simple demo to support initial CRAN upload.

Changes in ttdo version 0.0.2 (2019-08-31)

  • Updated defaults for format and mode to use the same options used by diffobj along with fallbacks.

Changes in ttdo version 0.0.1 (2019-08-26)

  • Initial version, with thanks to both Mark and Brodie.

Please use the GitHub repo and its issues for any questions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramFriday Squid Blogging: How Scientists Captured the Giant Squid Video

In June, I blogged about a video of a live juvenile giant squid. Here's how that video was captured.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBorder Stories: A night of talks on immigration, justice and freedom

Hosts Anne Milgram and Juan Enriquez kick off the evening at TEDSalon: Border Stories at the TED World Theater in New York City on September 10, 2019. (Photo: Ryan Lash / TED)

Immigration can be a deeply polarizing topic. But at heart, immigration policies and practices reflect no less than our attitude towards humanity. At TEDSalon: Border Stories, we explored the reality of life at the US-Mexico border, the history of the US immigration policy and possible solutions for reform — and investigated what’s truly at stake.

The event: TEDSalon: Border Stories, hosted by criminal justice reformer Anne Milgram and author and academic Juan Enriquez

When and where: Tuesday, September 10, 2019, at the TED World Theater in New York City

Speakers: Paul A. Kramer, Luis H. Zayas, Erika Pinheiro, David J. Bier and Will Hurd

Music: From Morley and Martha Redbone

A special performance: Poet and thinker Maria Popova, reading an excerpt from her book Figuring. A stunning meditation on “the illusion of separateness, of otherness” — and on “the infinitely many kinds of beautiful lives” that inhabit this universe — accompanied by cellist Dave Eggar and guitarist Chris Bruce.

“There are infinitely many kinds of beautiful lives,” says Maria Popova, reading a selection of her work at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

The talks in brief:

Paul A. Kramer, historian, writer, professor of history

  • Big idea: It’s time we made the immigration conversation reflect how the world really works.
  • How? We must rid ourselves of the outdated questions, born from nativist and nationalist sentiments, that have permeated the immigration debate for centuries: interrogations of usefulness and assimilation, and parasitic rhetoric aimed at dismantling any positive discussion around immigration. What gives these damaging queries traction and power, Kramer says, is how they tap into a seemingly harmless sense of national belonging — and ultimately activate, heighten and inflame it. Kramer maps out a way for us to redraw those mental, societal and political borders and give immigrants access to the rights and resources that their work, activism and home countries have already played a fundamental role in creating.
  • Quote of the talk: “[We need] to redraw the boundaries of who counts — whose life, whose rights and whose thriving matters. We need to redraw … the borders of us.”

Luis H. Zayas, social worker, psychologist, researcher

  • Big idea: Asylum seekers — especially children — face traumatizing conditions at the US-Mexico border. We need compassionate, humane practices that give them the care they need during arduous times.
  • Why? Under prolonged and intense stress, the young developing brain is harmed — plain and simple, says Luis H. Zayas. He details the distressing conditions immigrant families face on their way to the US, which have only escalated since children started being separated from their parents and held in detention centers. He urges the US to reframe its practices, replacing hostility and fear with safety and compassion. For instance: the US could open processing centers, where immigrants can find the support they need to start a new life. These facilities would be community-oriented, offering medical care, social support and the fundamental human right to respectful and dignified treatment.
  • Quote of the talk: “I hope we can agree on one thing: that none of us wants to look back at this moment in our history when we knew we were inflicting lifelong trauma on children, and that we sat back and did nothing. That would be the greatest tragedy of all.”

Immigration lawyer Erika Pinheiro discusses the hidden realities of the US immigration system. “Seeing these horrors day in and day out has changed me,” she says. (Photo: Ryan Lash / TED)

Erika Pinheiro, nonprofit litigation and policy director

  • Big idea: The current US administration’s mass separations of asylum-seeking families at the Mexican border shocked the conscience of the world — and the cruel realities of the immigration system have only gotten worse. We need a legal and social reckoning.
  • How? US immigration laws are broken, says Erika Pinheiro. Since 2017, US attorneys general have made sweeping changes to asylum law to ensure fewer people qualify for protection in the US. This includes all types of people fleeing persecution: Venezuelan activists, Russian dissidents, Chinese Muslims, climate change refugees — the list goes on. The US has simultaneously created a parallel legal system where migrants are detained indefinitely, often without access to legal help. Pinheiro issues a call to action: if you are against the cruel and inhumane treatment of migrants, then you need to get involved. You need to demand that your lawmakers expand the definition of refugees and amend laws to ensure immigrants have access to counsel and independent courts. Failing to act now threatens the inherent dignity of all humans.
  • Quote of the talk: “History shows us that the first population to be vilified and stripped of their rights is rarely the last.”

David J. Bier, immigration policy analyst

  • Big idea: We can solve the border crisis in a humane fashion. In fact, we’ve done so before.
  • How? Most migrants who travel illegally from Central America to the US do so because they have no way to enter the US legally. When these immigrants are caught, they find themselves in the grips of a cruel system of incarceration and dehumanization — but is inhumane treatment really necessary to protect our borders? Bier points us to the example of Mexican guest worker programs, which allow immigrants to cross borders and work the jobs they need to support their families. As legal opportunities to cross the border have increased, the number of illegal Mexican immigrants seized at the border has plummeted 98 percent. If we were to extend guest worker programs to Central Americans as well, Bier says, we could see a similar drop in the numbers of illegal immigrants.
  • Quote of the talk: “This belief that the only way to maintain order is with inhumane means is inaccurate — and, in fact, the opposite is true. Only a humane system will create order at the border.”

“Building a 30-foot-high concrete structure from sea to shining sea is the most expensive and least effective way to do border security,” says Congressman Will Hurd in a video interview with Anne Milgram at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

Will Hurd, US Representative for Texas’s 23rd congressional district

  • Big idea: Walls won’t solve our problems.
  • Why? Representing a massive district that encompasses 29 counties and two time zones and shares an 820-mile border with Mexico, Republican Congressman Will Hurd has a frontline perspective on illegal immigration in Texas. Legal immigration options and modernizing the Border Patrol (which still measures its response times to border incidents in hours and days) will be what ultimately stems the tide of illegal border crossings, Hurd says. Instead of investing in walls and separating families, the US should invest in its own defense forces — and, on the other side of the border, work to alleviate poverty and violence in Central American countries.
  • Quote of the talk: “When you’re debating your strategy, if somebody comes up with the idea of snatching a child out of their mother’s arms, you need to go back to the drawing board. This is not what the United States of America stands for. This is not a Republican or a Democrat or an Independent thing. This is a human decency thing.”

Juan Enriquez, author and academic

  • Big idea: If the US continues to divide groups of people into “us” and “them,” we open the door to inhumanity and atrocity — and not just at our borders.
  • How? Countries that survive and grow as the years go by are compassionate, kind, smart and brave; countries that don’t, govern by cruelty and fear, says Juan Enriquez. In a personal talk, he calls on us to realize that deportation, imprisonment and dehumanization aren’t isolated phenomena directed at people crossing the border illegally, but are instead happening to people who live and work by our side in our communities. Now is the time to stand up and do something to stop our country’s slide into fear and division — whether it’s engaging in small acts of humanity, loud protests in the streets or activism directed at enacting legislative or policy changes.
  • Quote of the talk: “This is how you wipe out an economy. This isn’t about kids and borders, it’s about us. This is about who we are, who we the people are, as a nation and as individuals. This is not an abstract debate.”

TEDNot All Is Broken: Notes from Session 6 of TEDSummit 2019

Raconteur Mackenzie Dalrymple regales the TEDSummit audience with a classic Scottish story. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In the final session of TEDSummit 2019, the themes from the week — our search for belonging and community, our digital future, our inextricable connection to the environment — ring out with clarity and insight. From the mysterious ways our emotions impact our biological hearts, to a tour-de-force talk on the languages we all speak, it’s a fitting close to a week of revelation, laughter, tears and wonder.

The event: TEDSummit 2019, Session 6: Not All Is Broken, hosted by Chris Anderson and Bruno Giussani

When and where: Thursday, July 25, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Johann Hari, Sandeep Jauhar, Anna Piperal, Eli Pariser, Poet Ali

Interlude: Mackenzie Dalrymple sharing the tale of an uncle and nephew competing to become Lord of the Isles

Music: Djazia Satour, blending 1950s Chaabi (a genre of North African folk music) with modern grooves

The talks in brief:

Johann Hari, journalist

Big idea: The cultural narrative and definitions of depression and anxiety need to change.

Why? We need to talk less about chemical imbalances and more about imbalances in the way we live. Johann Hari met with experts around the world, boiling down his research into a surprisingly simple thesis: all humans have physical needs (food, shelter, water) as well as psychological needs (feeling that you belong, that your life has meaning and purpose). Though antidepressant drugs work for some, biology isn’t the whole picture, and any treatment must be paired with a social approach. Our best bet is to listen to the signals of our bodies, instead of dismissing them as signs of weakness or madness. If we take time to investigate our red flags of depression and anxiety — and take the time to reevaluate how we build meaning and purpose, especially through social connections — we can start to heal in a society deemed the loneliest in human history.

Quote of the talk: “If you’re depressed, if you’re anxious — you’re not weak. You’re not crazy. You’re not a machine with broken parts. You’re a human being with unmet needs.”


“Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways,” says cardiologist Sandeep Jauhar. He speaks at TEDSummit: A Community Beyond Borders, July 21-25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sandeep Jauhar, cardiologist

Big Idea: Emotional stress can be a matter of life and death. Let’s factor that into how we care for our hearts.

How? “The heart may not originate our feelings, but it is highly responsive to them,” says Sandeep Jauhar. In his practice as a cardiologist, he has seen extensive evidence of this: grief and fear can cause profound cardiac injury. “Takotsubo cardiomyopathy,” or broken heart syndrome, has been found to occur when the heart weakens after the death of a loved one or the stress of a large-scale natural disaster. It comes with none of the other usual symptoms of heart disease, and it can resolve in just a few weeks. But it can also prove fatal. In response, Jauhar says that we need a new paradigm of care, one that considers the heart as more than “a machine that can be manipulated and controlled” — and recognizes that emotional stress is as important as cholesterol.

Quote of the talk: “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways.”


“In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated,” says e-governance expert Anna Piperal. She speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Anna Piperal, e-governance expert 

Big idea: Bureaucracy can be eradicated by going digital — but we’ll need to build in commitment and trust.

How? Estonia is one of the most digital societies on earth. After gaining independence 30 years ago, and subsequently building itself up from scratch, the country decided not only to digitize existing bureaucracy but also to create an entirely new system. Now citizens can conduct everything online, from running a business to voting and managing their healthcare records, and only need to show up in person for literally three things: to claim their identity card, marry or divorce, or sell a property. Anna Piperal explains how, using a form of blockchain technology, e-Estonia builds trust through the “once-only” principle, through which the state cannot ask for information more than once nor store it in more than one place. The country is working to redefine bureaucracy by making it more efficient, granting citizens full ownership of their data — and serving as a model for the rest of the world to do the same.

Quote of the talk: “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.”


Eli Pariser, CEO of Upworthy

Big idea: We can find ways to make our online spaces civil and safe, much like our best cities.

How? Social media is a chaotic and sometimes dangerous place. With its trolls, criminals and segregated spaces, it’s a lot like New York City in the 1970s. But like New York City, it’s also a vibrant space in which people can innovate and find new ideas. So Eli Pariser asks: What if we design social media like we design cities, taking cues from social scientists and urban planners like Jane Jacobs? Built around empowered communities, one-on-one interactions and public censure for those who act out, platforms could encourage trust and discourse, discourage antisocial behavior and diminish the sense of chaos that leads some to embrace authoritarianism.

Quote of the talk: “If online digital spaces are going to be our new home, let’s make them a comfortable, beautiful place to live — a place we all feel not just included, but actually some ownership of. A place we get to know each other. A place you’d actually want not just to visit, but to bring your kids.”


“Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds,” says Poet Ali. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Poet Ali, architect of human connection

Big idea: You speak far more languages than you realize, with each language representing a gateway to understanding different societies, cultures and experiences.

How? Whether it’s the recognized tongue of your country or profession, or the social norms of your community, every “language” you speak is more than a lexicon of words: it also encompasses feelings like laughter, solidarity, even a sense of being left out. These latter languages are universal, and the more we embrace their commonality — and acknowledge our fluency in them — the more we can empathize with our fellow humans, regardless of our differences.

Quote of the talk: “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds.”

CryptogramWhen Biology Becomes Software

All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.

Genes and genomes are based on code -- just like the digital language of computers. But instead of zeros and ones, four DNA letters -- A, C, T, G -- encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
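
To see how literally that analogy can be taken, here is a minimal sketch of my own (not from the essay): it treats a DNA sequence as a string over the four-letter alphabet and computes its reverse complement -- the sort of elementary operation DNA design tools build on. The sequence is a toy example, not a real gene.

using System;
using System.Linq;

class DnaAsCode
{
    // Watson-Crick pairing: A pairs with T, C pairs with G.
    static char Complement(char b) => b switch
    {
        'A' => 'T', 'T' => 'A', 'C' => 'G', 'G' => 'C',
        _ => throw new ArgumentException($"not a DNA letter: {b}")
    };

    static void Main()
    {
        string sequence = "ATGGTGCACCTGACT"; // toy example
        // The opposite strand runs in the reverse direction, so we
        // reverse the sequence and complement each base.
        string reverseComplement =
            new string(sequence.Reverse().Select(Complement).ToArray());
        Console.WriteLine($"{sequence} -> {reverseComplement}");
    }
}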

If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.

Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it's a relatively simple operation by today's standards, it'll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they're running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient's immune system.

We have known the mechanics of DNA for more than 60 years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first "recombinant" virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.

In 2010 Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases the researchers created what could arguably be called new forms of life.

This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to "run" it in an already existing organism, from a virus to a wheat plant to a person.

In the future, it may be possible to build an entire complex organism such as a dog or cat, or to recreate an extinct mammoth (an effort currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.

Within the biological science community, urgent conversations are occurring about "cyberbiosecurity," an admittedly contested term for the space where biological and information systems meet and where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.

These risks have occupied not only learned bodies -- the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively -- but have made it to mainstream media: genome editing was a major plot element in Netflix's Season 3 of "Designated Survivor."

Our worries are more prosaic. As synthetic biology "programming" reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.

Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happened again.

Even finished code still has problems. Again due to the complexity of modern software systems, "works properly" doesn't mean that it's perfectly correct. Modern software is full of bugs -- thousands of software flaws -- that occasionally affect performance or security. That's why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.

Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn't work in biology.

In nature, a similar type of trial and error is handled by "the survival of the fittest" and occurs slowly over many generations. But human-generated code from scratch doesn't have that kind of correction mechanism. Inadvertent or intentional release of these newly coded "programs" may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.

Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

Opportunities for mischief and malfeasance often occur when expertise is siloed, when fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn't make its way into the larger body of practitioners who have important contributions to make.

Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed: confined to either the biological or the digital sphere of influence, classified and kept solely within the military, or exchanged only among a very small set of investigators.

What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities already have tools to identify and mitigate biological risks, and tools to write and deploy secure computer systems.

Those opportunities will not occur without effort or financial support. Let's find those resources, public, private, philanthropic, or any combination. And then let's use those resources to set up some novel opportunities for digital geeks and bionerds -- as well as ethicists and policymakers -- to share experiences and concerns, and to come up with creative, constructive solutions to these problems that are more than just patches.

These are overarching problems; let's not let siloed thinking or funding get in the way of breaking down barriers between communities. And let's not let technology of any kind get in the way of the public good.

This essay previously appeared on CNN.com.

Planet DebianNorbert Preining: Gaming: Puzzle Agent

Two lovely but short puzzle games, Puzzle Agent and Puzzle Agent II, follow agent Nelson Tethers in his quest to solve an obscure case in Scoggins, Minnesota: the eraser factory that supplies the White House has stopped production – a dangerous situation for the US and the world. Tethers embarks on a wild journey.

Starting in his office, agent Tethers is used to desk work, solving puzzles mostly inspired by chewing gum. Until a strange encounter and a phone call kick him out into the wild.

The game is full of puzzles, most of them rather easy, some of them tricky. One can use spare chewing gum to get a hint in case one gets stuck. Chewing gum is rare in Scoggins, so agent Tethers needs to collect used gum from all kinds of surfaces.

Solved puzzles are sent off for evaluation, which also reveals the huge amount of money a single FBI agent costs. After that, agent Tethers's performance is rated based on the number of hints (chewing gums) used and the number of false submissions.

The rest consists of dialog trees for collecting information and of driving around the neighborhood of Scoggins. The game shines through its well-balanced set of puzzles and the quirky dialogs with the quirky people of Scoggins.

The game is beautifully drawn in cartoon style, far from the shiny ray-traced worlds of modern games, but this only adds to its charm.

A simple but very enjoyable pair of games. Unfortunately there is not much replay value. Still, they are worth getting when they are on sale.

CryptogramSmart Watches and Cheating on Tests

The Independent Commission on Examination Malpractice in the UK has recommended that all watches be banned from exam rooms, basically because it's becoming very difficult to tell regular watches from smart watches.

Worse Than FailureError'd: Many Languages, One WTF

"It's as if IntelliJ IDEA just gave up trying to parse my code," writes John F.

Henry D. writes, "If you have a phone in English but have it configured to recognize two different languages, simple requests sometimes morph into the weirdest things."

Carl C. wrote, "Maybe Best Buy's page is referring to a store near Nulltown, Indiana, but really, I think their site is on drugs."

"Yeah, Thanks Cisco, but I'm not sure I really want to learn more," writes Matt P.

 

"Ebay is alerting me to something. No idea what it is, but I can tell you what they named their variables," Lincoln K. wrote.

 

"Not quite sure what secrets the Inner Circle holds, I guess knowing Latin?" writes Matt S.

 



LongNowShort film of Comet 67P made from 400,000 Rosetta images is released

On August 6, 02014, the European Space Agency’s Rosetta probe successfully reached Comet 67P. In addition to studying the comet, Rosetta was able to place one of Long Now’s Rosetta Disks on its surface via its Philae lander.

In 02017, ESA released over 400,000 images from the Rosetta mission. Now, motion designer Christian Stangl has made a short film out of the images.

The Comet offers a remarkable, beautiful, and haunting look at this alien body from the Kuiper belt. Watch it below:

the Comet from Christian Stangl on Vimeo.

Planet DebianJonas Meurer: debian lts report 2019.08

Debian LTS report for August 2019

This month I was allocated 10 hours. Unfortunately, I didn't find much time to work on LTS issues, so I only spent 0.5 hours on the task listed below. That means that I carry over 9.5 hours to September.

  • Triaged CVE-2019-13640/qbittorrent: After digging through the code, it became obvious that qbittorrent 3.1.10 in Debian Jessie is not affected by this vulnerability as the affected code is not present yet.

Planet DebianBen Hutchings: Debian LTS work, August 2019

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I prepared and, after review, released Linux 3.16.72, including various security and other fixes. I then rebased the Debian package onto that. I uploaded that with a small number of other fixes and issued DLA-1884-1. I also prepared and released Linux 3.16.73 with another small set of fixes.

I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1885-1 for that.

CryptogramFabricated Voice Used in Financial Fraud

This seems to be an identity theft first:

Criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

Another news article.

CryptogramNotPetya

Wired has a long article on NotPetya.

EDITED TO ADD (9/12): Another good article on NotPetya.

CryptogramDefault Password for GPS Trackers

Many GPS trackers are shipped with the default password 123456. Many users don't change them.

We just need to eliminate default passwords. This is an easy win.

EDITED TO ADD (9/12): A California law bans default passwords starting in 2020.

CryptogramMore on Law Enforcement Backdoor Demands

The Carnegie Endowment for International Peace and Princeton University's Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the "going dark" debate. They have released their report: "Moving the Encryption Policy Conversation Forward."

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn't also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don't believe that backdoor access to encryption data at rest offers "an achievable balance of risk vs. benefit" either, but I agree that the two aspects should be treated independently.

EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger. For example: no key escrow, and no use of software updates to break into devices.

Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate "session key" for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message's secrecy independent of other messages (consistent with a concept known as "forward secrecy"). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.

These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker -- former general counsel of the FBI -- and Chris Inglis -- former deputy director of the NSA.
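
To make the forward-secrecy point concrete, here is a minimal sketch of my own (not from the report) of a hash-ratchet key schedule, similar in spirit to what modern messaging protocols use. All names are illustrative:

using System;
using System.Security.Cryptography;

class RatchetSketch
{
    static void Main()
    {
        // Assume this initial secret came from a key exchange (e.g., Diffie-Hellman).
        byte[] chainKey = RandomNumberGenerator.GetBytes(32);

        for (int i = 0; i < 3; i++)
        {
            using var hmac = new HMACSHA256(chainKey);
            byte[] messageKey = hmac.ComputeHash(new byte[] { 0x01 }); // protects one message only
            chainKey = hmac.ComputeHash(new byte[] { 0x02 });          // ratchet forward; the old key is discarded

            Console.WriteLine($"message {i} key: {Convert.ToHexString(messageKey).Substring(0, 16)}...");
        }

        // Because HMAC cannot be run backwards, compromising today's chainKey reveals
        // nothing about earlier message keys. Any scheme that records or links session
        // keys for later access gives up exactly this guarantee.
    }
}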

Planet DebianThomas Lange: FAI.me service now support backports for Debian 10 (buster)

The FAI.me service for creating customized installation and cloud images now supports a backports kernel for the stable release Debian 10 (aka buster). If you enable the backports option, you will currently get kernel 5.2. This will help you if you have newer hardware that is not supported by the default kernel 4.19. The backports option is also still available for images using the old Debian 9 (stretch) release.

The URL of the FAI.me service is

https://fai-project.org/FAIme/

FAI.me

Worse Than FailureCodeSOD: Time to Wait

When dealing with customers (and here, we mean “off the street” customers), they often want to know: “How long am I going to have to wait?” Whether we’re talking about a restaurant, a mechanic, a doctor’s office, or a computer/phone repair shop, setting (and sharing with our customers) reasonable expectations about how much time they’re about to spend waiting is important.

Russell F works on an application which facilitates this sort of customer-facing management. It does much more, too, obviously, but one of its lesser features is to estimate how long a customer is about to spend waiting.

This is how that’s calculated:

TimeSpan tsDifference = dtWorkTime - DateTime.Now;
string strEstWaitHM = ((tsDifference.Hours * 60) + tsDifference.Minutes).ToString();
if (Convert.ToInt32(strEstWaitHM) >= 60)
{
	decimal decWrkH = Math.Floor(Convert.ToDecimal(strEstWaitHM) / 60);
	int intH = Convert.ToInt32(decWrkH);
	txtEstWaitHours.Value = Convert.ToString(intH);
	int intM = Convert.ToInt32(strEstWaitHM) - (60 * intH);
	txtEstWaitMinutes.Value = Convert.ToString(intM);
}
else
{
	txtEstWaitHours.Value = "";
	txtEstWaitMinutes.Value = strEstWaitHM;
}

Hungarian Notation is always a great sign of bad code. It really is, and I think that’s because it’s easy to do, easy to enforce as a standard, and provides the most benefit when you have messy variable scoping and when keeping track of what type a given variable is might actually be a challenge.

Or, as we see in this case, it’s useful when you’re passing the same data through a method with different types. We calculate the difference between the WorkTime and Now. That’s the last thing in this code which makes sense.

The key goal here is that, if we’re going to be waiting for more than an hour, we want to display both the hours and minutes, but if it’s just minutes, we want to display just that.

We have that TimeSpan object which, as you can see, has convenient Hours and Minutes properties. Instead of using those, though, we convert the hours to minutes and add everything together. If the number is 60 or more, we know we’ll be waiting for at least an hour, so we want to populate both the hours box and the minutes box, which means we have to convert back to hours and minutes.

In that context, the fact that we have to convert from strings to numbers and back almost seems logical. Almost. I especially like that they Convert.ToDecimal (to avoid rounding errors) and Math.Floor the result (to round off). If only there were some numeric type that never rounded off, and always had an integer value. If only…
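
For contrast, here is a minimal sketch of the straightforward version, reusing the snippet’s variable and control names (dtWorkTime, txtEstWaitHours, txtEstWaitMinutes) and letting integer arithmetic do all the work:

TimeSpan tsDifference = dtWorkTime - DateTime.Now;
int totalMinutes = (int)tsDifference.TotalMinutes; // the cast truncates; no decimals required
if (totalMinutes >= 60)
{
	txtEstWaitHours.Value = (totalMinutes / 60).ToString(); // integer division already "floors"
	txtEstWaitMinutes.Value = (totalMinutes % 60).ToString();
}
else
{
	txtEstWaitHours.Value = "";
	txtEstWaitMinutes.Value = totalMinutes.ToString();
}

As a bonus, TotalMinutes also counts whole days, which the original silently drops by reading only the Hours and Minutes components.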


Planet DebianNorbert Preining: TeX Services at texlive.info

I have been working over the last few weeks to provide four more services for the TeX (Live) community: an archive of TeX Live’s network installation directory tlnet, a git repository of CTAN, a mirror of the TeX Live historic archives, and a new tlpretest mirror. Together with the services already provided on my server, this makes a considerable list, so I thought it a good idea to summarize all of them.

Overview of the services

New services added recently are marked with an asterisk (*) at the end.

For the git services, anonymous checkouts are supported. If a developer wants to have push rights, please contact me.

tlnet archive

TeX Live is distributed via the CTAN network in CTAN/systems/texlive/tlnet. The packages there are updated on a daily basis according to updates on CTAN that make it into the TeX Live repository. This has created some problems for distributions requiring specific versions, as well as problems with rollbacks in case of buggy packages.

Starting with 2019/08/30, rsync backups of the tlnet directory are taken daily; they are available at https://www.texlive.info/tlnet-archive/YYYY/MM/DD/tlnet.

CTAN git repository

The second big item is putting CTAN into a git repository. In a perfect world I would get a git commit for each single package update, but that would require close collaboration with the CTAN Team (maybe this will happen in the future). For now there is one rsync of CTAN a day, committed after the sync.

Considering the total size of CTAN (currently around 40G), we decided to ignore file types that provide no useful information when put into git, mostly large binary files. The concrete list is tar, zip, pkg, cab, jar, dmg, rpm, deb, tgz, iso, and exe, as well as files containing one of these extensions (meaning that a file foobar.iso.gz will be ignored, too). This keeps the size of the .git directory at a reasonable amount for now (a few GB).

We will see how the git repository grows over time, and whether we can support this on a long term time range.

While we exclude the above files from being recorded in the git repository, the actual CTAN directory is complete and contains all files, meaning that an rsync checkout contains everything.

Access to these services is provided as follows:

TeX Live historic archives

The TeX Live historic archives hierarchy contains various items of interest in TeX history, from individual files to entire systems. See the article by Ulrik Vieth at https://tug.org/TUGboat/tb29-1/tb91vieth.pdf for an overview.

We provide a mirror available via rsync://texlive.info/historic/.

tlpretest mirror

During preparation of a new TeX Live release (the pretest phase) we are distributing preliminary builds via a few tlpretest mirrors. The current server will provide access to tlpretest, too:

TeX Live svn/git mirror

Since I prefer to work with git, and developing new features with git on separate branches is so much more convenient than working with Subversion, I am running a git-svn mirror of the whole TeX Live subversion repository. This repo is updated every 15 minutes with the latest changes. There are also git branches matching the subversion branches, and some dev/ branches where I am working on new features. The git repository carries, like the Subversion repository, the full history back to our switch from Perforce to Subversion in 2005. This repository is quite big, so don’t do a casual checkout (the checked-out size is currently close to 40Gb):

TeX Live contrib

The TeX Live Contrib repository is a companion to the core TeX Live (tlnet) distribution in much the same way as Debian’s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. TeX Live Contrib simply tries to fill a gap in the current distribution system by providing ready-made packages for software that is not distributed in TeX Live proper due to license reasons, support for non-free software, etc.:

TeX Live GnuPG

Starting with release 2016, TeX Live provides facilities to verify authenticity of the TeX Live database using cryptographic signatures. For this to work out, a working GnuPG program needs to be available. In particular, either gpg (version 1) or gpg2 (version 2). To ease adoption of verification, this repository provides a TeX Live package tlgpg that ships GnuPG binaries for Windows and MacOS (universal and x86_64). On other systems we expect GnuPG to be installed.

Supporting these services

We will try to keep these services up and running as long as server space, connectivity, and bandwidth allow. If you find them useful, I happily accept donations via PayPal or Patreon to support the server as well as my time and energy!


Sociological ImagesNormal Distributions in the Wild

Social scientists rely on the normal distribution all the time. This classic “bell curve” shape is so important because it fits all kinds of patterns in human behavior, from measures of public opinion to scores on standardized tests.

But it can be difficult to teach the normal distribution in social statistics, because at the core it is a theory about patterns we see in the data. If you’re interested in studying people in their social worlds, it can be more helpful to see how the bell curve emerges from real world examples.

One of the best ways to illustrate this is the “Galton Board,” a desk toy that lets you watch the normal distribution emerge from a random drop of ball-bearings. Check out the video below or a slow motion gif here.
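
If you’d rather watch the distribution emerge in code than in ball-bearings, here is a small simulation sketch of my own (all names are illustrative): each ball bounces left or right at every peg with equal probability, so the bin counts follow a binomial distribution, the discrete shape that approximates the bell curve.

using System;

class GaltonBoard
{
    static void Main()
    {
        var rng = new Random();
        const int pegs = 12;    // rows of pegs each ball bounces through
        const int balls = 2000; // balls dropped
        int[] bins = new int[pegs + 1];

        for (int b = 0; b < balls; b++)
        {
            int rightBounces = 0;
            for (int p = 0; p < pegs; p++)
                if (rng.Next(2) == 1) rightBounces++; // 50/50 left or right at each peg
            bins[rightBounces]++;
        }

        // Sideways histogram: the bell shape appears in the middle bins.
        for (int i = 0; i < bins.Length; i++)
            Console.WriteLine($"{i,2} | {new string('#', bins[i] / 20)}");
    }
}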

The Galton Board is cool, but I’m also always on the lookout for normal distributions “in the wild.” There are places where you can see the distribution in real patterns of social behavior, rather than simulating them in a controlled environment. My absolute favorite example comes from Ed Burmila:

The wear patterns here show exactly what we would expect a normal distribution to tell us about weightlifting. More people use the machine at a middle weight setting for the average strength, and the extreme choices are less common. Not all social behavior follows this pattern, but when we find cases that do, our techniques to analyze that behavior are fairly simple.

Another cool example is grocery shelves. Because stores like to keep popular products together and right in front of your face (the maxim is “eye level is buy level”), they tend to stock in a normally-distributed pattern with popular stuff right in the middle. We don’t necessarily see this in action until there is a big sale or a rush in an emergency. When stores can’t restock in time, you can see a kind of bell curve emerge on the empty shelves. Products that are high up or off to the side are a little less likely to be picked over.

Paul Swansen, Flickr CC

Have you seen normal distributions out in the wild? Send them my way and I might feature them in a future post!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityNY Payroll Company Vanishes With $35 Million

MyPayrollHR, a now defunct cloud-based payroll processing firm based in upstate New York, abruptly ceased operations this past week after stiffing employees at thousands of companies. The ongoing debacle, which allegedly involves malfeasance on the part of the payroll company’s CEO, resulted in countless people having money drained from their bank accounts and has left nearly $35 million worth of payroll and tax payments in legal limbo.

Unlike many stories here about cloud service providers being extorted by hackers for ransomware payouts, this snafu appears to have been something of an inside job. Nevertheless, it is a story worth telling, in part because much of the media coverage of this incident so far has been somewhat disjointed, but also because it should serve as a warning to other payroll providers about how quickly and massively things can go wrong when a trusted partner unexpectedly turns rogue.

Clifton Park, NY-based MyPayrollHR — a subsidiary of ValueWise Corp. — disclosed last week in a rather unceremonious message to some 4,000 clients that it would be shutting its virtual doors and that companies which relied upon it to process payroll payments should kindly look elsewhere for such services going forward.

This communique came after employees at companies that depend on MyPayrollHR to receive direct deposits of their bi-weekly payroll payments discovered their bank accounts were instead debited for the amounts they would normally expect to accrue in a given pay period.

To make matters worse, many of those employees found their accounts had been dinged for two payroll periods — a month’s worth of wages — leaving their bank accounts dangerously in the red.

The remainder of this post is a deep-dive into what we know so far about what transpired, and how such an occurrence might be prevented in the future for other payroll processing firms.

A $26 MILLION TEXT FILE

To understand what’s at stake here requires a basic primer on how most of us get paid, which is a surprisingly convoluted process. In a typical scenario, our employer works with at least one third party company to make sure that on every other Friday what we’re owed gets deposited into our bank account.

The company that handled that process for MyPayrollHR is a California firm called Cachet Financial Services. Every other week for more than 12 years, MyPayrollHR has submitted a file to Cachet that told it which employee accounts at which banks should be credited and by how much.

According to interviews with Cachet, the way the process worked ran something like this: MyPayrollHR would send a digital file documenting deposits made by each of these client companies, laying out the amounts owed to each client’s employees. In turn, those funds from MyPayrollHR client firms would then be deposited into a settlement or holding account maintained by Cachet.

From there, Cachet would take those sums and disburse them into the bank accounts of people whose employers used MyPayrollHR to manage their bi-weekly payroll payments.

But according to Cachet, something odd happened with the instructions file MyPayrollHR submitted on the afternoon of Wednesday, Sept. 4 that had never before transpired: MyPayrollHR requested that all of its clients’ payroll dollars be sent not to Cachet’s holding account but instead to an account at Pioneer Savings Bank that was operated and controlled by MyPayrollHR.

The total amount of this mass payroll deposit was approximately $26 million. Wendy Slavkin, general counsel for Cachet, told KrebsOnSecurity that her client then inquired with Pioneer Savings about the wayward deposit and was told MyPayrollHR’s bank account had been frozen.

Nevertheless, the payroll file submitted by MyPayrollHR instructed financial institutions for its various clients to pull $26 million from Cachet’s holding account — even though the usual deposits from MyPayrollHR’s client banks had not been made.

REVERSING THE REVERSAL

In response, Cachet submitted a request to reverse that transaction. But according to Slavkin, that initial reversal request was improperly formatted, and so Cachet soon after submitted a correctly coded reversal request.

Financial institutions are supposed to ignore or reject payment instructions that don’t comport with precise formatting required by the National Automated Clearinghouse Association (NACHA), the not-for-profit organization that provides the backbone for the electronic movement of money in the United States. But Slavkin said a number of financial institutions ended up processing both reversal requests, meaning a fair number of employees at companies that use MyPayrollHR suddenly saw a month’s worth of payroll payments withdrawn from their bank accounts.

Dan L’Abbe, CEO of the San Francisco-based consultancy Granite Solutions Groupe, said the mix-up has been massively disruptive for his 250 employees.

“This caused a lot of chaos for employers, but employees were the ones really affected,” L’Abbe said. “This is all very unusual because we don’t even have the ability to take money out of our employee accounts.”

Slavkin said Cachet managed to reach the CEO of MyPayrollHR — Michael T. Mann — via phone on the evening of Sept. 4, and that Mann said he would call back in a few minutes. According to Slavkin, Mann never returned the call. Not long after that, MyPayrollHR told clients that it was going out of business and that they should find someone else to handle their payroll.

In short order, many people hit by one or both payroll reversals took to Twitter and Facebook to vent their anger and bewilderment at Cachet and at MyPayrollHR. But Slavkin said Cachet ultimately decided to cancel the previous payment reversals, leaving Cachet on the hook for $26 million.

“What we have since done is reached out to 100+ receiving banks to have them reject both reversals,” Slavkin said. “So most — if not all — employees affected by this will in the next day or two have all their money back.”

THE VANISHING MANN

Cachet has since been in touch with the FBI and with federal prosecutors in New York, and Slavkin said both are now investigating MyPayrollHR and its CEO. On Monday, New York Governor Andrew Cuomo called on the state’s Department of Financial Services to investigate the company’s “sudden and disturbing shutdown.”

A tweet sent Sept. 11 by the FBI’s Albany field office.

The $26 million hit against Cachet wasn’t the only fraud apparently perpetrated by MyPayrollHR and/or its parent firm: According to Slavkin, the now defunct New York company also stiffed National Payment Corporation (NatPay) — the Florida-based firm which handles tax withholdings for MyPayrollHR clients — to the tune of more than $9 million.

In a statement provided to KrebsOnSecurity, NatPay said it was alerted late last week that the bank accounts of MyPayrollHR and one of its affiliated companies were frozen, and that the notification came after payment files were processed.

“NatPay was provided information that MyPayrollHR and Cloud Payroll may have been the victims of fraud committed by their holding company ValueWise, whose CEO and owner is Michael Mann,” NatPay said. “NatPay immediately put in place steps to manage the orderly process of recovering funds [and] has more than sufficient insurance to cover actions of attempted or real fraud.”

Requests for comment from different executives at both MyPayrollHR and its parent firm ValueWise Corp. went unanswered, and the latter’s Web site is now offline. Several erstwhile MyPayrollHR employees reached via LinkedIn said none of them had seen or heard from Mr. Mann in days.

Meanwhile, Granite Solutions Groupe CEO L’Abbe said some of his employees have seen their bank accounts credited back the money that was taken, while others are still waiting for those reversals to come through.

“It varies widely,” L’Abbe said. “Every bank processes differently, and everyone’s relationship with the bank is different. Others have absolutely no money right now and are having a helluva time with their bank believing this is all the result of fraud. Things are starting to settle down now, but a lot of employees are still in limbo with their bank.”

For its part, Cachet Financial says it will be looking at solutions to better detect when and if instructions from clients for funding its settlement accounts suddenly change.

“Our system is excellent at protecting against outside hackers,” Slavkin said. “But when it comes to something like this it takes everyone by complete surprise.”

LongNowLong-term Building in Japan

The Ise Shrine in Japan, which has been rebuilt every 20 years for over 1,400 years. 

When I started working with Stewart Brand over two decades ago, he told me about the ideas behind Long Now, and how we might build the seed for a very long-lived institution. One of the first examples he mentioned to me was Ise Shrine in Japan, which has been rebuilt every 20 years in adjacent sites for over 1,400 years. This shrine is made of ephemeral materials like wood and thatch, but its symbiotic relationship with the Shinto belief and craftsmen has kept a version of the temple standing since 692 CE. Over these past decades many of us at Long Now have conjured with these temples as an example of long-term thinking, but it had not occurred to me that I might some day visit them.

That is, until a few years ago, when I came across a news piece about the temples. It announced that the shrine’s foresters were harvesting the trees for the next rebuild, and I decided to do some research to find out how and when visitors could go see the one temple being replaced by the next. This research turned out to be very difficult, in part because of the language barrier, but also because the last rebuild took place well before the world wide web was anything close to ubiquitous. I kept my ear out and asked people who might know about the shrines, but did not get very far.

Then, one morning in late September, Danny Hillis called to tell me that Daniel Erasmus, a Long Now member in Holland, had learned that the shrine transfer ceremony would be taking place the following Saturday. Danny said he was going to try and meet Daniel in Ise, and wanted to know if he should document it. I told him he wouldn’t need to, because I was going to get on a plane and meet them there.

Ise Shrine

The next few days were a blur of difficult travel arrangements to a rural Japanese town where little English was spoken and lodging was already way over-booked. I was greatly aided by a colleague’s Japanese wife, who was able to find us a room in a traditional ryokan home-stay very close to the temples. I also put the word out about the trip, and Ping Fu from the Long Now Board decided to join us, as well.

Streets of Osaka.

A few days later I met Ping at SFO for our flight to Osaka. Danny Hillis and Daniel Erasmus would be coming in from Tokyo a day later. We would stay the night in Osaka and then take the train to Ise. I found out that one of the other sites in Japan I had always wanted to visit was also close by: the Buddhist temples of Nara, considered to be some of the oldest continuously standing wooden structures in the world. We would be visiting Nara after our visit to Ise.

After landing, Ping and I spent a jet-lagged evening wandering around the Blade Runner streets of Osaka to find a restaurant. In Japan the best local food and drink are often tiny neighborhood affairs that only seat 5–10 people. Ping’s ability to read Kanji characters, which transfer over from Chinese, proved to be very helpful in at least figuring out if a sign was for a restaurant or a bathhouse.

“Fast food” in Osaka.

The next morning we headed east on a train to Ise eating “fast food” — morsels of fish and rice wrapped in beautiful origami of leaves. This was not one of the bullet trains; Ise is a small city whose economy has been largely driven by Shinto pilgrims for the last two millennia. A few decades before the birth of Christ, a Japanese princess is said to have spent over twenty years wandering Japan, looking for the perfect place to worship. Around year 4 of the current era she found Ise, where she heard the spirits whisper that this “is a secluded and pleasant land. In this land I wish to dwell.” And thus Ise was established as the Shinto spiritual center of Japan.

This is probably a good time to say a bit more about Shinto. While it is referred to often as a religion with priests and temples, there is actually a much deeper explanation, as with most things in Japan. Shinto is the indigenous belief system that goes back to at least the 6th century BCE and pre-dates all other religions in Japan — including Buddhism, which did not arrive until a millennium or so later. Shinto is an animist world view, which believes that spirits, or Kami, are a part of all things. It is said that nearly all Japanese are Shinto, even though many would self-describe as non-religious, or Buddhist. There are no doctrines or prophets in Shinto; people give reverence to various Kami for different reasons throughout their day, week, or life.

Shinto Priest at Ise gates.

There are over 80,000 Shinto temples, or Jinja, in Japan, and hundreds of thousands of Shinto “priests” who administer them. Of all of these temples, the structures at Ise, collectively referred to as Jingū, are considered the most important and the most highly revered. And of these, the Naikū shrine, which we were there to see, tops them all, and only members of the Japanese imperial family or the senior priests are allowed near or in the shrine. The simple yet stunningly beautiful Kofun-era architecture of the temples dates back over 2500 years, and the traditional construction methods have been refined to an unbelievably high art — even when compared to other Japanese craft.

Roof detail at shrine at Ise.

My understanding of how this twenty-year cycle became a tradition is that these shrines were originally used as seed banks. Since these were made of wood, they would need to be replaced and the seed stock transferred from one to the other. The design of the buildings and even the thatch roof are highly evolved for this. When there are rains, the thatch roof gets heavier, weighing down the wood joinery and making it water-tight. In the dry season, it gets lighter and the gaps between the wood are allowed to breathe again, avoiding mold.

The streets of Ise.

On Friday afternoon we arrived at Ise and, within a short walk, had checked in at our very basic ryokan hotel. The location was perfect, however, as we were directly across from the Naikū shrine area entrance. The town of Ise lies in a mainly flat lowland area across the bay from Nagoya (to the North). Its temples are the end destination of a pilgrimage route which people used to traverse largely by foot, and over the last 2,000 years various food and accommodation services have evolved to cater to those visitors.

Arriving at the temple area.

Ping and I wandered toward the entry and met up with Danny, Daniel, and Maholo Uchida, a friend of Daniel’s who is a curator at the National Museum of Emerging Science and Innovation in Tokyo. Maholo would prove to be an absolutely amazing guide through the next 24 hours, and most of what I now understand about Ise and its customs comes from her.

Danny Hillis and Maholo Uchida purifying at the Temizuya.

We traversed a small bridge and passed a low pool of water with a small roof over it. These Temizuya basins, found at the entry to all Shinto shrines, are a place to purify yourself before entry. As with all things in Japan — especially visits to shrines — there is an order and ceremony to washing your hands and mouth at the Temizuya. After this purification, we headed into the forest on a wide path of light grey gravel that crunched underfoot.

Just where the forest begins, we approached a large and beautifully crafted Shinto arch. These are apparently made from the timbers of an earlier shrine after it has been deconstructed. Visitors generally pass through three consecutive arches to enter a Shinto shrine area. Maholo quickly educated us on how to bow as we passed under the first arch (it is different for entering versus leaving) and on proper path walking etiquette. It is apparently too prideful to walk in the middle of the path: you should walk to one side, which is generally — but not always — the left side. As with everything here, there was etiquette to follow which was steeped in tradition and rules that would take a lifetime to understand fully.

Danny Hillis bowing under the first arch.

As we walked from arch to arch, Maholo explained that the forest here had historically been used exclusively to harvest timbers for all the shrines, but over the last millennia they had been harvested too heavily for various war efforts, or lost in fire. Since the beginning of this century the shrines’ caretakers have been bringing these forests back, and expect them to be self-sustaining again within the next two or three rebuilding periods — 40 to 60 years from now.

Third arch approaching the grand shrine.

Passing through a sequence of arches, we arrived at the Naikū shrine sanctuary area. This area includes a place that sells commemorative gifts. At this point you might be thinking “tourist trap gift shop,” but this adjacent structure is at least centuries old and of course perfectly fits the aesthetic. Instead of cheap plastic trinkets and coffee mugs, it offered hand-screened prints on wood from the last temple deconstruction, as well as calligraphic stamps for your shrine ‘passport’.

The 2,000 year-old gift shop.

Adjacent to the gift shop is the walled-off section of the Naikū shrine. Visitors are allowed to approach one spot, where there is a gap in the wall, and catch a glimpse of the main temples. On the left, the one completed in 01993 has begun to grey (pictured below), and on the right gleams the newly finished temple, a dual view only seen once every 20 years. After this event, they will begin disassembly of the old shrine, and will leave just a little doghouse-sized structure in its place for the next two decades.

The old shrine, grey with age.

The audience for this event consisted of only a few hundred people. Maholo explained that this rebuilding has been going on for eight years, and that many people come for different parts of the process, including the harvesting of the trees, the blessing of the tools, the milling of the timbers, the placement of the white river foundation stones, and so on.

As we stood there, crowds were gathering, and we noticed behind us a series of chests that were roped off in the courtyard area. Some of these were plain wood and some of them were lacquered. These chests contained the temple “treasures” that are moved from the old temple to the new. Some are re-created every 20 years by the greatest craftspeople in Japan, some have been moved from temple to temple for 14 centuries, and some are totally secret to all but the priests. The treasures are what the Kami spirits follow from one temple to the next as they are rebuilt. So the Shinto priests move the treasures when the new temple is ready, and the Kami spirits follow them into their new home sometime in the night.

Treasure change ceremony at Ise.

As we took photos, a large group of priests and press started lining up. We were ushered over to the gift building area and held back by white gloved security personnel. It was a bit comical as they did not seem to know exactly what to do with us. Since this ceremony happens only every 20 years, it is unlikely that any of the staff were present at the last occasion: while this is one of the oldest events in the world, it is simultaneously brand new. It was very apparent that none of the ritual acts were performed for the audience. All of this ceremony was designed for the benefit of the Kami spirits, not for people’s entertainment, and much of what we saw were glimpses through trees from a distance. While it was hard to see everything, we all agreed that this perspective made the tradition much more magical and interesting than if it had all been laid bare.

Without fanfare, the princess of Japan led a march of hundreds of Ise priests down the path that we had just walked, and they all lined up in rows next to the chests. After a ceremony with nearly 30 minutes of bowing, the chests were carried into the sanctuary and placed into the new shrine (though this was out of view).

Then they came back out, lined up again, and went through a series of wave-like bows before being led away by the princess.

All very calm, very simple, and without any hurrah. The Kami would soon follow the treasures into their new home.

It was a real surprise to learn that there are 125 shrines in Ise: all are rebuilt every 20 years, but on different schedules. This is also done at other Shinto shrine sites, but not always every 20 years; some have cycles as long as 60 years. Once we were allowed to wander around again, we hiked up the hill to some of the other temples, all built for different Kami. Some recently-built shrines stood next to the ones awaiting deconstruction, and some stood alone. These are all made with similar design and unerring construction, and unlike at the main temple, we were allowed to walk right up to them and take pictures.

A recently-built shrine stands next to an old one.

We left the forest on a different path as the sun set, bowing our exit bows twice after each of the three arches. We wandered through the town a bit and I suggested we find a local bar that offered the traditional Japanese “bottle keep” so we could drink half of a bottle and leave it on the shelf to return in 20 years for the other half.

Hopefully we’ll drink from this bottle again in 02033.

Maholo took us to a tiny alley where she peeked into a few shoji screens, eventually finding us the right place. It had only eight or so seats, and the proprietor was a lovely Japanese grandmother. We ordered a bottle of Suntory whiskey and began to pour.

The barkeep was amazed to find out how far we had traveled to see the ceremony, and put our dated Long Now bottle on the highest shelf in a place of honor.

Afterwards, Maholo had arranged for us to have dinner at a beautiful ryokan with one of the Shinto priests, who had come in from Tokyo to help with the events in Ise. We were served course after course of incredible seafood while he gracefully answered our questions, all translated by Maholo.

We learned that the priests who run Ise are their own special group within the Shinto organization, and don’t really follow the line of the main organization. For instance, when several of the Shinto temples were offered UNESCO world heritage site status, they politely declined. I can just imagine them wondering why they would need an organization like UNESCO, which is not even half a century old, to tell them that they had achieved “historic” status. I suspect that maybe in a millennium or two, if UNESCO is still around, they might reconsider.

The priests bringing the Kami their first meal.

The next morning we returned to Naikū to catch a glimpse through the trees of the priests bringing the Kami their first meal. The Kami are fed in the morning and evening of each day from a kitchen building behind the temple sanctuary. We watched priests and their assistants bringing in chests of food as we chatted with an American who works for the Shinto central office in Tokyo. He had put together a beautiful book about the shrines at Ise, The Soul of Japan, and he later sent me a link to it to share in this report.

Afterwards, we also visited the small but amazing museum at Ise that displays some of the “treasures” from past shrines, a temple simulacrum, and a display documenting the 1400-year reconstruction history along with the beautiful Japanese tools used for building the shrines.

Bridge to the Gekū shrines.

Then Maholo took us to the Gekū shrine areas, a few kilometers away, which allow much more access. These shrines, and the bridge that leads to them, are also built on the alternating-site, 20-year cycle. But here you walk on the right, and there are four arches — I could not find out why. Most interesting, however, is that in World War II the Japanese emperor ordered a rare temporary delay in shrine rebuilding. While the people of Ise could not defy him, they realized that he had only mentioned the shrines, so they went ahead and rebuilt the bridge as scheduled in the middle of a war-torn year.

Finally, we headed to the train station, from where Danny and Daniel would travel to Kyoto for their flights, and Maholo would return to Tokyo. Ping and I later boarded the train to Osaka to stay the night, and then headed to Nara prefecture the next day.

Entering Hōryū-ji

Hōryū-ji at Nara

Only 45 minutes by train from Osaka is the stop at Hōryū-ji, a bit before you get to Nara center. Almost concurrently with the building of the first shrine at Ise in the 7th century, a complex of Buddhist temples was built here beginning in 607 CE.

The tall pagoda at Hōryū-ji is one of the oldest continuously standing structures in the world. And while there is controversy over which parts of this temple complex are original, the tree for the central vertical pillar of the pagoda was definitively felled in 594.

The architecture has a strong Chinese influence, reflecting the route Buddhism traveled before arriving in Japan, and came with a tradition of continual maintenance rather than periodic rebuilding. 

Roof detail at Hōryū-ji

I suspect one of the main reasons these buildings have survived so long is their ceramic roofs. The roof tiles can last centuries and are vastly less susceptible to fire than wood or thatch. Like the Shinto shrines, though, no one resides in these buildings, so the chance of human error starting a blaze is vastly diminished. I was amused to see the “no smoking” sign as we entered one of the temples.

No smoking sign at Hōryū-ji

As you walk through these temples there are many beautiful little maintenance details. Where water had wicked into the bottom of a pillar or around the edge of a metal detail, the damaged wood has been carefully cut away and new wood spliced back in over the centuries.

It is striking that this part of Japan houses two sets of structures, both of nearly equal age, and both made of largely ephemeral materials that have lasted over 14 centuries through totally different mechanisms and religions. Both require a continuous, diligent and respectful civilization to sustain them, yet one is punctuated and episodic, while the other is gradual. Both are great models for how to make a building, or an institution, last through millennia.


Learn More

  • Read Alexander Rose’s recent essay in BBC Future, “How to Build Something that Lasts 10,000 Years.”
  • See more photos from Alexander Rose’s trip to Japan here.
  • Read Soul of Japan: An Introduction to Shinto and Ise Jingu (02013) in full here.

CryptogramOn Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart's law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.

Worse Than FailureCodeSOD: ImAlNumb?

I think it’s fair to say that C, as a language, has never had a particularly great story for working with text. Individual characters are okay, but strings are a nightmare. The need to support unicode has only made that story a little more fraught, especially as older code now suddenly needs to support extended characters. And by “older” I mean, “wchar was added in 1995, which is practically yesterday in C time”.

Lexie inherited some older code. It was not designed to support unicode, which is certainly a problem in 2019, and it’s the problem Lexie was tasked with fixing. But it had an… interesting approach to deciding if a character was alphanumeric.

Now, if we limit ourselves to ASCII, there are a variety of ways we could do this check. We could convert it to a number and do a simple check: characters 48–57 are numeric, 65–90 and 97–122 cover the alphabetic characters. But that’s a conditional expression: six comparison operations! So maybe we should be more clever. There is a built-in library function, isalnum, which might be more optimized, and is available on Lexie’s platform. But we’re dedicated to really doing some serious premature optimization, so there has to be a better way.

bool isalnumCache[256] =
{false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, true, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false};

This is a lookup table. Convert your character to an integer, and then use it to index the array. This is fast. It’s also error prone: this block incorrectly identifies a non-alphanumeric as alphanumeric (the row covering characters 96–127 has one true too many, so ‘{’, character 123, passes the check). It also 100% fails if you are dealing with wchar_t, which is how Lexie ended up looking at this block in the first place.
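
For contrast, here is a minimal sketch of the boring-but-correct route through the standard library (an illustration, not Lexie's eventual fix): both variants respect the current locale, and neither needs a hand-maintained table.

#include <ctype.h>
#include <wctype.h>
#include <stdbool.h>

/* isalnum() expects an unsigned char value (or EOF), so cast first
   to avoid undefined behavior when plain char is signed. */
static bool is_alnum_narrow(char c)
{
    return isalnum((unsigned char)c) != 0;
}

/* The wide-character variant handles what a 256-entry table never
   can: code points beyond the single-byte range. */
static bool is_alnum_wide(wchar_t wc)
{
    return iswalnum((wint_t)wc) != 0;
}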

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianBenjamin Mako Hill: How Discord moderators build innovative solutions to problems of scale with the past as a guide

Both this blog post and the paper it describes are collaborative work led by Charles Kiene with Jialun “Aaron” Jiang.

Introducing new technology into a workplace is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each had recently adopted the chat platform Discord to support real-time conversation among its members. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

What Challenges Emerged in Discord?

Discord’s text channels allow for more natural, in-the-moment conversations than Reddit. This same social quality, however, made moderation work much more difficult. One moderator explained:

“It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

“On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

How Did Moderation Teams Overcome These Challenges?

The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

Guided by their experience with using tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would scan messages for regular-expression matches and URLs that go against the community’s rules:

“It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”

Bots were also used to replace Discord’s Audit Log feature with what moderators often referred to as “Mod logs”—another term borrowed from Reddit. Moderators will send commands to a bot like “!warn username” to record when a member of their community has been warned for breaking a rule; the bot automatically stores this information in a private text channel in Discord. This information helps organize records about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions.
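
As a rough sketch of the pattern described above (not any team's actual bot; the "mod-logs" channel name, permission check, and token placeholder are assumptions for illustration), a "!warn" command wired to a private log channel might look like this using the discord.py library (2.x):

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required for prefix commands in discord.py 2.x

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(kick_members=True)  # restrict to moderators
async def warn(ctx, member: discord.Member, *, reason: str = "no reason given"):
    """!warn @user reason -- record a warning in the private mod-log channel."""
    log_channel = discord.utils.get(ctx.guild.text_channels, name="mod-logs")
    if log_channel is not None:
        # The private channel doubles as a searchable history of actions.
        await log_channel.send(f"{member} warned by {ctx.author}: {reason}")
    await ctx.send(f"{member.mention} has been warned.")

bot.run("YOUR_BOT_TOKEN")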

Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like “Modmail” help moderation teams manage this problem by mediating communication to community members with a bot:

“So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams essentially created and used bots to transform aspects of Discord — turning text channels into Mod logs and Modmail — to recreate the same tools they were using to moderate their communities on Reddit.

What Does This Mean for Online Communities?

We think that the experience of moderators we interviewed points to a potentially important and overlooked source of value for groups navigating technological change: the potent combination of users’ past experience with their ability to redesign and reconfigure their technological environments. Our work suggests that the value of innovation platforms like APIs and bots lies not only in allowing the discovery of “new” things; their value also flows from the fact that they allow the re-creation of things that communities already know can solve their problems and that they already know how to use.


For more details, check out the full 23 page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.

,

Planet DebianMarkus Koschany: My Free Software Activities in August 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Misc

  • I fixed two minor CVE in binaryen, a compiler and toolchain infrastructure library for WebAssembly, by packaging the latest upstream release.

Debian LTS

This was my 42nd month as a paid contributor and I have been paid to work 21.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 12.8.2019 until 18.08.2019 and from 09.09.2019 until 10.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in kde4libs, apache2, nodejs-mysql, pdfresurrect, nginx, mongodb, nova, radare2, flask, bundler, giflib, ansible, zabbix, salt, imapfilter, opensc and sqlite3.
  • DLA-1886-2. Issued a regression update for openjdk-7. The regression was caused by the removal of several classes in rt.jar by upstream. Since Debian never shipped the SunEC security provider, SSL connections based on elliptic curve algorithms could not be established anymore. The problem was solved by building sunec.jar and its native library libsunec.so from source. An update of the nss source package was required too, which resolved a five-year-old bug (#750400).
  • DLA-1900-1. Issued a security update for apache2 fixing 2 CVE, three more CVE did not affect the version in Jessie.
  • DLA-1914-1. Issued a security update for icedtea-web fixing 3 CVE.
  • I have been working on a backport of opensc, a set of libraries and utilities to access smart cards that support cryptographic operations, from Stretch which will fix more than a dozen CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my fifteenth month and I have been assigned to work 15 hours on ELTS, of which I used 10.

  •  I was in charge of our ELTS frontdesk from 26.08.2019 until 01.09.2019 and I triaged CVE in dovecot, libcommons-compress-java, clamav, ghostscript, gosa as end-of-life because security support for them has ended in Wheezy. There were no new issues for supported packages. All in all this was a rather unspectacular week.
  • ELA-156-1. Issued a security update for linux fixing 9 CVE.
  • ELA-154-2. Issued a regression update for openjdk-7 and nss because the removed classes in rt.jar caused the same issues in Wheezy too.

Thanks for reading and see you next time.

Krebs on SecurityPatch Tuesday, September 2019 Edition

Microsoft today issued security updates to plug some 80 security holes in various flavors of its Windows operating systems and related software. The software giant assigned a “critical” rating to almost a quarter of those vulnerabilities, meaning they could be used by malware or miscreants to hijack vulnerable systems with little or no interaction on the part of the user.

Two of the bugs quashed in this month’s patch batch (CVE-2019-1214 and CVE-2019-1215) involve vulnerabilities in all supported versions of Windows that have already been exploited in the wild. Both are known as “privilege escalation” flaws in that they allow an attacker to assume the all-powerful administrator status on a targeted system. Exploits for these types of weaknesses are often deployed along with other attacks that don’t require administrative rights.

September also marks the fourth time this year Microsoft has fixed critical bugs in its Remote Desktop Protocol (RDP) feature, with four critical flaws being patched in the service. According to security vendor Qualys, these Remote Desktop flaws were discovered in a code review by Microsoft, and in order to exploit them an attacker would have to trick a user into connecting to a malicious or hacked RDP server.

Microsoft also fixed another critical vulnerability in the way Windows handles link files ending in “.lnk” that could be used to launch malware on a vulnerable system if a user were to open a removable drive or access a shared folder with a booby-trapped .lnk file on it.

Shortcut files — or those ending in the “.lnk” extension — are Windows files that link easy-to-recognize icons to specific executable programs, and are typically placed on the user’s Desktop or Start Menu. It’s perhaps worth noting that poisoned .lnk files were one of the four known exploits bundled with Stuxnet, a multi-million dollar cyber weapon that American and Israeli intelligence services used to derail Iran’s nuclear enrichment plans roughly a decade ago.

In last month’s Microsoft patch dispatch, I ruefully lamented the utter hose job inflicted on my Windows 10 system by the July round of security updates from Redmond. Many readers responded by saying one or another of the updates released by Microsoft in August similarly caused reboot loops or issues with Windows repeatedly crashing.

As there do not appear to be any patch-now-or-be-compromised-tomorrow flaws in the September patch rollup, it’s probably safe to say most Windows end-users would benefit from waiting a few days to apply these fixes. 

Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

The trouble is, Windows 10 by default will install patches and reboot your computer whenever it likes. Here’s a tutorial on how to undo that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Most importantly, please have some kind of system for backing up your files before applying any updates. You can use third-party software to do this, or just rely on the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule.

Finally, Adobe fixed two critical bugs in its Flash Player browser plugin, which is bundled in Microsoft’s IE/Edge and Chrome (although now hobbled by default in Chrome). Firefox forces users with the Flash add-on installed to click in order to play Flash content; instructions for disabling or removing Flash from Firefox are here. Adobe will stop supporting Flash at the end of 2020.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Cory DoctorowCharles de Lint on Radicalized

I’ve been a Charles de Lint fan since I was a kid (see photographic evidence, above, of a 13-year-old me attending one of Charles’s signings at Bakka Books in 1984!), and so I was absolutely delighted to read his kind words in his books column in Fantasy and Science Fiction for my latest book, Radicalized. This book has received a lot of critical acclaim (“among my favorite things I’ve read so far this year”), but to get such a positive notice from Charles is wonderful on a whole different level.

The stories, like “The Masque of the Red Death,” are all set in a very near future. They tackle immigration and poverty, police corruption and brutality, the U.S. health care system and the big pharma companies. None of this is particularly cheerful fodder. The difference is that each of the other three stories give us characters we can really care about, and allow for at least the presence of some hopefulness.

“Unauthorized Bread” takes something we already have and projects it into the future. You’ve heard of Juicero? It’s a Wi-Fi juicer that only lets you use the proprietary pre-chopped produce packs that you have to buy from the company. Produce you already have at home? It doesn’t work because it doesn’t carry the required codes that will let the machine do its work.

In the story, a young woman named Salima discovers that her toaster won’t work, so she goes through the usual steps one does when electronics stop working. Unplug. Reset to factory settings. Finally…

“There was a touchscreen option on the toaster to call support but that wasn’t working, so she used the fridge to look up the number and call it.”

I loved that line.

Books To Look For [Charles de Lint/F&SF]

Planet DebianErich Schubert: Altmetrics of a Retraction Notice

As pointed out by RetractionWatch, AltMetrics even tracks the metrics of retraction notices.

This retraction notice has an AltMetric of 9 as I write, and it will grow with every mention on blogs (such as this) and Twitter. Even worse, just one blog post and one tweet by RetractionWatch were enough to put the retraction notice “In the top 25% of all research outputs”.

In my opinion, this shows how unreliable these altmetrics are. They are based on the false assumption that Twitter and blogs would be central to (or at least representative of) academic importance and attention. But given the very low usage rates of these media by academics, this does not appear to work well, except for a few high-profile papers.

Existing citation indexes, with all their drawbacks, may still be more useful.

Planet DebianJonathan McDowell: Making xinput set-button-map permanent

Since 2006 I’ve been buying a Logitech Trackman Marble (or, as Amazon calls it, a USB Marble Mouse) for both my home and work setups (they don’t die, I just seem to lose them somehow). It’s got a solid feel to it, helps me avoid RSI twinges and when I’m thinking I can take the ball out and play with it. It has 4 buttons, but I find the small one on the right inconvenient to use so I treat it as a 3 button device (the lack of scroll wheel functionality doesn’t generally annoy me). Problem is the small leftmost button defaults to “Back”, rather than “Middle button”. You can fix this with xinput:

xinput set-button-map "Logitech USB Trackball" 1 8 3 4 5 6 7 2 9

but remembering to do that every boot is annoying. I could put it in a script, but a better approach is to drop the following in /usr/share/X11/xorg.conf.d/50-marblemouse.conf (the fact it’s in /usr/share instead of /etc or ~ is why it took me so long to figure out how I’d done it on my laptop when setting up my new machine):

Section "InputClass"
    Identifier      "Marble Mouse"
    MatchProduct    "Logitech USB Trackball"
    MatchIsPointer  "on"
    MatchDevicePath "/dev/input/event*"
    Driver          "evdev"
    Option          "SendCoreEvents" "true"

    #  Physical buttons come from the mouse as:
    #     Big:   1 3
    #     Small: 8 9
    #
    # This makes left small button (8) into the middle, and puts
    #  scrolling on the right small button (9).
    #
    Option "Buttons"            "9"
    Option "ButtonMapping"      "1 8 3 4 5 6 7 2 9"
    Option "EmulateWheel"       "true"
    Option "EmulateWheelButton" "9"

EndSection

This post exists solely for the purpose of reminding future me how I did this on my Debian setup (given that it’s taken me way too long to figure out how I did it 2+ years ago) and apparently original credit goes to Ubuntu for their Logitech Marblemouse USB page.

Worse Than FailureDeath by Consumption

Tryton Party Module Address Database Diagram

The task was simple: change an AMQ consumer to insert data into a new Oracle database instead of an old MS-SQL database. It sounded like the perfect task for the new intern, Rodger; Rodger was fresh out of a boot camp and ready for the real world, if he could only get a little experience under his belt. The kid was as bright as they came, but boot camp only does so much, after all.

But there are always complications. The existing service was installed on the old app servers, which weren't set up to work with the new corporate app deployment tool. The fix? To uninstall the service on the old app servers and install it on the new ones. Okay, simple enough, if not well suited to the intern.

Rodger got permissions to set up the service on his local machine so he could test his install scripts, and a senior engineer got an uninstall script working as well, so they could seamlessly switch over to the new machines. Deployment day came; they flipped the service, and everything went smoothly. The business kicked off their process, and the consumer service picked up their message and inserted data correctly into the new database.

The next week, the business kicked off their process again. After the weekend, the owners of the old database realized that the data was inserted into the old database and not the new database. They promptly asked how this had happened. Rodger and his senior engineer friend checked the queue; it correctly had two consumers set up, pointing at the new database. Just to be sure, they also checked the old servers to make sure the service was correctly uninstalled and removed by tech services. All clear.

Hours later, the senior engineer refreshed the queue monitor and saw the queue now had three consumers despite the new setup having only two servers. But how? They checked all three servers—two new and one old—and found no sign of a rogue process.

By that point, Rodger was online for his shift, so the senior engineer headed over to talk to him. "Say, Rodger, any chance one of your installs duplicated itself or inserted itself twice into the consumer list?"

"No way!" Rodger replied. "Here, look, you can see my script, I'll run it again locally to show you."

Running it locally ... with dawning horror, the senior engineer realized what had happened. Rodger had the install script, but not the uninstall—meaning he had a copy still running on his local developer laptop, connected to the production queue, but with the old config for some reason. Every time he turned on his computer, hey presto, the service started up.

The moral of the story: always give the intern the destructive task, not the constructive one. That can't go wrong, right?

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Cory DoctorowPodcast: DRM Broke Its Promise

In my latest podcast (MP3), I read my new Locus column, DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old-fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

MP3

,

Planet DebianIain R. Learmonth: Spoofing commits to repositories on GitHub

The following has already been reported to GitHub via HackerOne. Someone from GitHub has closed the report as “informative” but told me that it’s a known low-risk issue. As such, while they haven’t explicitly said so, I figure they don’t mind me blogging about it.

Check out this commit in torvalds’ linux.git on GitHub. In case this is fixed, here’s a screenshot of what I see when I look at this link:

GitHub page showing a commit in torvalds/linux with the commit message add super evil code

How did this get past review? It didn’t. You can spoof commits in any repo on GitHub due to the way they handle forks of repositories internally. Instead of copying repositories when forks occur, the objects in the git repository are shared and only the refs are stored per-repository. (GitHub tell me that private repositories are handled differently to avoid private objects leaking out this way. I didn’t verify this but I have no reason to suspect it is not true.)

To reproduce this:

  1. Fork a repository
  2. Push a commit to your fork
  3. Put your commit ref on the end of:
https://github.com/[parent]/[repo]/commit/

That’s all there is to it. You can also add .diff or .patch to the end of the URL and those URLs work too, in the namespace of the parent.

The situation that worries me relates to distribution packaging. Debian has a policy that deltas to packages in the stable repository should be as small as possible, targeting fixes by backporting patches from newer releases.

If you get a bug report on your Debian package with a link to a commit on GitHub, you had better double check that this commit really did come from the upstream author and hasn’t been spoofed in this way. Even if it shows it was authored by the upstream’s GitHub account or email address, this still isn’t proof because this is easily spoofed in git too.

The best defence against being caught out by this is probably signed commits, but if the upstream is not doing that, you can clone the repository from GitHub and check to see that the commit is on a branch that exists in the upstream repository. If the commit is in another fork, the upstream repo won’t have a ref for a branch that contains that commit.
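
Concretely, a check along these lines should work (using the same [parent]/[repo] placeholders as above):

git clone https://github.com/[parent]/[repo].git
cd [repo]
# List the remote branches that contain the commit. An empty result
# means the commit is only reachable through someone's fork.
git branch -r --contains <commit-sha>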

Krebs on SecuritySecret Service Investigates Breach at U.S. Govt IT Contractor

The U.S. Secret Service is investigating a breach at a Virginia-based government technology contractor that saw access to several of its systems put up for sale in the cybercrime underground, KrebsOnSecurity has learned. The contractor claims the access being auctioned off was to old test systems that do not have direct connections to its government partner networks.

In mid-August, a member of a popular Russian-language cybercrime forum offered to sell access to the internal network of a U.S. government IT contractor that does business with more than 20 federal agencies, including several branches of the military. The seller bragged that he had access to email correspondence and credentials needed to view databases of the client agencies, and set the opening price at six bitcoins (~USD $60,000).

A review of the screenshots posted to the cybercrime forum as evidence of the unauthorized access revealed several Internet addresses tied to systems at the U.S. Department of Transportation, the National Institutes of Health (NIH), and U.S. Citizenship and Immigration Services (USCIS), a component of the U.S. Department of Homeland Security that manages the nation’s naturalization and immigration system.

Other domains and Internet addresses included in those screenshots pointed to Miracle Systems LLC, an Arlington, Va. based IT contractor that states on its site that it serves 20+ federal agencies as a prime contractor, including the aforementioned agencies.

In an interview with KrebsOnSecurity, Miracle Systems CEO Sandesh Sharda confirmed that the auctioned credentials and databases were indeed managed by his company, and that an investigating agent from the Secret Service was in his firm’s offices at that very moment looking into the matter.

But he maintained that the purloined data shown in the screenshots was years-old and mapped only to internal test systems that were never connected to its government agency clients.

“The Secret Service came to us and said they’re looking into the issue,” Sharda said. “But it was all old stuff [that was] in our own internal test environment, and it is no longer valid.”

Still, Sharda did acknowledge information shared by Wisconsin-based security firm Hold Security, which alerted KrebsOnSecurity to this incident, indicating that at least eight of its internal systems had been compromised on three separate occasions between November 2018 and July 2019 by Emotet, a malware strain usually distributed via malware-laced email attachments and typically used to deploy other malicious software.

The Department of Homeland Security did not respond to requests for comment, nor did the Department of Transportation. A spokesperson for the NIH said the agency had investigated the activity and found it was not compromised by the incident.

“As is the case for all agencies of the Federal Government, the NIH is constantly under threat of cyber-attack,” NIH spokesperson Julius Patterson said. “The NIH has a comprehensive security program that is continuously monitoring and responding to security events, and cyber-related incidents are reported to the Department of Homeland Security through the HHS Computer Security Incident Response Center.”

One of several screenshots offered by the dark web seller as proof of access to a federal IT contractor later identified as Arlington, Va. based Miracle Systems. Image: Hold Security.

The dust-up involving Miracle Systems comes amid much hand-wringing among U.S. federal agencies about how best to beef up and ensure security at a slew of private companies that manage federal IT contracts and handle government data.

For years, federal agencies had few options to hold private contractors to the same security standards to which they must adhere — beyond perhaps restricting how federal dollars are spent. But recent updates to federal acquisition regulations allow agencies to extend those same rules to vendors, enforce specific security requirements, and even kill contracts that are found to be in violation of specific security clauses.

In July, DHS’s Customs and Border Protection (CBP) suspended all federal contracts with Perceptics, a contractor which sells license-plate scanners and other border control equipment, after data collected by the company was made available for download on the dark web. The CBP later said the breach was the result of a federal contractor copying data on its corporate network, which was subsequently compromised.

For its part, the Department of Defense recently issued long-awaited cybersecurity standards for contractors who work with the Pentagon’s sensitive data.

“This problem is not necessarily a tier-one supply level,” DOD Chief Information Officer Dana Deasy told the Senate Armed Services Committee earlier this year. “It’s down when you get to the tier-three and the tier-four” subcontractors.

Planet DebianBen Hutchings: Distribution kernels at Linux Plumbers Conference 2019

I'm attending the Linux Plumbers Conference in Lisbon from Monday to Wednesday this week. This morning I followed the "Distribution kernels" track, organised by Laura Abbott.

I took notes, included below, mostly with a view to what could be relevant to Debian. Other people took notes in Etherpad. There should also be video recordings available at some point.

Upstream 1st: Tools and workflows for multi kernel version juggling of short term fixes, long term support, board enablement and features with the upstream kernel

Speaker: Bruce Ashfield, working on Yocto at Xilinx.

Details: https://linuxplumbersconf.org/event/4/contributions/467/

Yocto's kernel build recipes need to support multiple active kernel versions (3+ supported streams), multiple architectures, and many different boards. Many patches are required for hardware and other feature support including -rt and aufs.

Goals for maintenance:

  • Changes w.r.t. upstream are visible as discrete patches, so rebased rather than merged
  • Common feature set and configuration
  • Different feature enablements
  • Use as few custom tools as possible

Other distributions have similar goals but very few tools in common. So there is a lot of duplicated effort.

Supporting developers, distro builds and end users is challenging. E.g. developers complained about Yocto having separate git repos for different kernel versions, as this led to them needing more disk space.

Yocto solution:

  • Config fragments, patch tracking repo, generated tree(s)
  • Branched repository with all patches applied
  • Custom change management tools

Using Yocto to build a distro and maintain a kernel tree

Speaker: Senthil Rajaram & Anatoly ? from Microsoft.

Details: https://linuxplumbersconf.org/event/4/contributions/469/

Microsoft chose Yocto as build tool for maintaining Linux distros for different internal customers. Wanted to use a single kernel branch for different products but it was difficult to support all hardware this way.

Maintaining config fragments and a sensible inheritance tree is difficult (?). It might be helpful to put config fragments upstream.

Laura Abbott said that the upstream kconfig system had some support for fragments now, and asked what sort of config fragments would be useful. There seemed to be consensus on adding fragments for specific applications and use cases like "what Docker needs".
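
For a sense of scale, a hypothetical "what Docker needs" fragment would be only a handful of lines, mergeable into a base configuration with the kernel's own scripts/kconfig/merge_config.sh (the fragment below is illustrative, not an agreed-upon upstream file):

# docker.config - hypothetical application fragment
CONFIG_NAMESPACES=y
CONFIG_CGROUPS=y
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_OVERLAY_FS=y

It would then be applied with something like: scripts/kconfig/merge_config.sh -m .config docker.config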

Kernel build should be decoupled from image build, to reduce unnecessary rebuilding.

The initramfs is unpacked from a cpio archive, which doesn't support the extended attributes SELinux needs. So they build an initramfs into the kernel, and add a separate initramfs containing a squashfs image which the initramfs code will switch to.

Making it easier for distros to package kernel source

Speaker: Don Zickus, working on RHEL at Red Hat.

Details: https://linuxplumbersconf.org/event/4/contributions/466/

Fedora/RHEL approach:

  • Makefile includes Makefile.distro
  • Other distro stuff goes under distro sub-directory (merge or copy)
  • Add targets like fedora-configs, fedora-srpm

Lots of discussion about whether config can be shared upstream, but no agreement on that.

Kyle McMartin(?): Everyone does the hierarchical config layout - like generic, x86, x86-64 - can we at least put this upstream?

Monitoring and Stabilizing the In-Kernel ABI

Speaker: Matthias Männich, working on Android kernel at Google.

Details: https://linuxplumbersconf.org/event/4/contributions/468/

Why does Android need it?

  • Decouple kernel vs module development
  • Provide single ABI/API for vendor modules
  • Reduce fragmentation (multiple kernel versions for same Android version; one kernel per device)

Project Treble made most of Android user-space independent of device. Now they want to make the kernel and in-tree modules independent too. For each kernel version and architecture there should be a single ABI. Currently they accept one ABI bump per year. Requires single kernel configuration and toolchain. (Vendors would still be allowed to change configuration so long as it didn't change ABI - presumably to enable additional drivers.)

ABI stability is scoped - i.e. they include/exclude which symbols need to be stable.

ABI is compared using libabigail, not genksyms. (Looks like they were using it for libraries already, so now using it for kernel too.)

Q: How can we ignore compatible struct extensions with libabigail?

A: (from Dodji Seketeli, main author) You can add specific "suppressions" for such additions.
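
For example, a suppression file along these lines (the struct name is hypothetical) tells libabigail to tolerate members appended at the end of a structure; it would be passed to a tool like abidiff via --suppressions:

[suppress_type]
  name = my_vendor_struct
  has_data_member_inserted_at = end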

KernelCI applied to distributions

Speaker: Guillaume Tucker from Collabora.

Details: https://linuxplumbersconf.org/event/4/contributions/470/

Can KernelCI be used to build distro kernels?

KernelCI currently builds an arbitrary branch with an in-tree defconfig or a small config fragment.

Improvements needed:

  • Preparation steps to apply patches, generate config
  • Package result
  • Track OS image version that kernel should be installed in

Some in audience questioned whether building a package was necessary.

Possible further improvements:

  • Enable testing based on user-space changes
  • Product-oriented features, like running installer

Should KernelCI be used to build distro kernels?

Seems like a pretty close match. Adding support for different use-cases is healthy for KernelCI project. It will help distro kernels stay close to upstream, and distro vendors will then want to contribute to KernelCI.

Discussion

Someone pointed out that this is not only useful for distributions. Distro kernels are sometimes used in embedded systems, and the system builders also want to check for regressions on their specific hardware.

Q: (from Takashi Iwai) How long does testing typically take? SUSE's full automated tests take ~1 week.

A: A few hours to build, depending on system load, and up to 12 hours to complete boot tests.

Automatically testing distribution kernel packages

Speaker: Alice Ferrazzi of Gentoo.

Details: https://linuxplumbersconf.org/event/4/contributions/471/

Gentoo wants to provide safe, tested kernel packages. Currently testing gentoo-sources and derived packages. gentoo-sources combines upstream kernel source and "genpatches", which contains patches for bug fixes and target-specific features.

Testing multiple kernel configurations - allyesconfig, defconfig, other reasonable configurations. Building with different toolchains.

Tests are implemented using buildbot. Kernel is installed on top of a Gentoo image and then booted in QEMU.

Generalising for discussion:

  • Jenkins vs buildbot vs other
  • Beyond boot testing, like LTP and kselftest
  • LAVA integration
  • Supporting other configurations
  • Any other Gentoo or meta-distro topic

Don Zickus talked briefly about Red Hat's experience. They eventually settled on Gitlab CI for RHEL.

Some discussion of what test suites to run, and whether they are reliable. Varying opinions on LTP.

There is some useful scripting for different test suites at https://github.com/linaro/test-definitions.

Tim Bird talked about his experience testing with Fuego. A lot of the test definitions there aren't reusable. kselftest currently is hard to integrate because tests are supposed to follow TAP13 protocol for reporting but not all of them do!

Distros and Syzkaller - Why bother?

Speaker: George Kennedy, working on virtualisation at Oracle.

Details: https://linuxplumbersconf.org/event/4/contributions/473/

Which distros are using syzkaller? Apparently Google uses it for Android, ChromeOS, and internal kernels.

Oracle is using syzkaller as part of CI for Oracle Linux. "syz-manager" schedules jobs on dedicated servers. There is a cron job that automatically creates bug reports based on crashes triggered by syzkaller.

Google's syzbot currently runs syzkaller on GCE. Planning to also run on QEMU with a wider range of emulated devices.

How to make syzkaller part of distro release process? Need to rebuild the distro kernel with config changes to make syzkaller work better (KASAN, KCOV, etc.) and then install kernel in test VM image.
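
Per syzkaller's documentation, the config delta is small; a sketch of the essentials (the exact set varies by kernel version):

# Enable coverage collection and bug detection for syzkaller
CONFIG_KCOV=y
CONFIG_KCOV_INSTRUMENT_ALL=y
CONFIG_KASAN=y
CONFIG_KASAN_INLINE=y
CONFIG_DEBUG_INFO=y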

How to correlate crashes detected on distro kernel with those known and fixed upstream?

Example of benefit: syzkaller found regression in rds_sendmsg, fixed upstream and backported into the distro, but then regressed in Oracle Linux. It turned out that patches to upgrade rds had undone the fix.

syzkaller can generate test cases that fail to build on old kernel versions due to symbols missing from UAPI headers. How to avoid this?

Q: How often does this catch bugs in the distro kernel?

A: It doesn't often catch new bugs but does catch missing fixes and regressions.

Q: Is anyone checking the syzkaller test cases against backported fixes?

A: Yes [but it wasn't clear who or when]

Google has public database of reproducers for all the crashes found by syzbot.

Wish list:

  • Syzkaller repo tag indicating which version is suitable for a given kernel version's UAPI
  • tarball of syzbot reproducers

Other possible types of fuzzing (mostly concentrated on KVM):

  • They fuzz MSRs, control & debug regs with "nano-VM"
  • Missing QEMU and PCI fuzzing
  • Intel and AMD virtualisation work differently, and AMD may not be covered well
  • Missing support for other architectures than x86

Worse Than FailureCodeSOD: Making a Nest

Tiffany started the code review with an apology. "I only did this to stay in style with the existing code, because it's either that or we rewrite the whole thing from scratch."

Jim J, who was running the code review, nodded. Before Tiffany, this application had been designed from the ground up by Armando. Armando had gone to a tech conference, and learned about F#, and how all those exciting functional features were available in C#, and returned jabbering about "immutable data" and "functors" and "metaprogramming" and decided that he was now a functional programmer, who just happened to work in C#.

Some struggling object-oriented developers use dictionaries for everything. As a struggling functional programmer, Armando used tuples for everything. And these tuples would get deeply nested. Sometimes, you needed to flatten them back out.

Tiffany had contributed this method to do that:

public static Result<Tuple<T1, T2, T3, T4, T5>> FlatternTupleResult<T1, T2, T3, T4, T5>(
    Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple)
{
    return tuple.Map(x => new Tuple<T1, T2, T3, T4, T5>(
        x.Item1.Item1.Item1.Item1,
        x.Item1.Item1.Item1.Item2,
        x.Item1.Item1.Item2,
        x.Item1.Item2,
        x.Item2));
}

It's safe to say that deeply nested generics are a super clear code smell, and this line: Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple downright reeks. Tuples in tuples in tuples.

Tiffany cringed at the code she had written, but this method lived in the TaskResultHelper class, and lived alongside methods with these signatures:

public static Result<Tuple<T1, T2, T3, T4>> FlatternTupleResult<T1, T2, T3, T4>(
    Result<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>> tuple)

public static Result<Tuple<T1, T2, T3>> FlatternTupleResult<T1, T2, T3>(
    Result<Tuple<Tuple<T1, T2>, T3>> tuple)

"This does fit in with the way the application currently works," Jim admitted. "I'm sorry."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cory DoctorowCome see me in Santa Cruz, San Francisco, Toronto and Maine!

I’m about to leave for a couple of weeks’ worth of lectures, public events and teaching, and you can catch me in many places: Santa Cruz (in conversation with XKCD’s Randall Munroe); San Francisco (for EFF’s Pioneer Awards); Toronto (for Word on the Street, Seeding Utopias and Resisting Dystopias and 6 Degrees); Newry, ME (Maine Library Association) and Portland, ME (in conversation with James Patrick Kelly).

Here’s the full itinerary:

Santa Cruz, September 11, 7PM: Bookshop Santa Cruz Presents an Evening with Randall Munroe, Santa Cruz Bible Church, 440 Frederick St, Santa Cruz, CA 95062

San Francisco, September 12, 6PM: EFF Pioneer Awards, with Adam Savage, William Gibson, danah boyd, and Oakland Privacy; Delancey Street Town Hall, 600 Embarcadero St., San Francisco, California, 94107

Houston and beyond, September 13-22: The Writing Excuses Cruise (sorry, sold out!)

Toronto, September 22: Word on the Street

Toronto, September 23, 6PM-8PM: Cory Doctorow in Discussion: Seeding Utopias & Resisting Dystopias , with Jim Munroe, Madeline Ashby and Emily Macrae; Oakwood Village Library & Arts Centre, 341 Oakwood Avenue, Toronto, ON M6E 2W1

Toronto, September 24: 360: How to Make Sense at the 6 Degrees Conference, with Aude Favre, Ryan McMahon and Nanjala Nyabola, Art Gallery of Ontario.

Newry, ME, September 30: Keynote for the Maine Library Association Annual Conference, Sunday River Resort, Newry, ME

Portland, ME, September 30, 6:30PM-8PM: In Conversation With James Patrick Kelly, Main Library, Rines Auditorium.

I hope you can make it!

,

Sam VargheseSerena Williams loses another Grand Slam final

Serena Williams has fallen flat on her face again in her bid to equal Margaret Court’s record of 24 Grand Slam titles. This time Williams’ loss was to Canadian teenager Bianca Andreescu – and what makes it better is that she lost in straight sets, 6-3, 7-5.

Andreescu, 19, is a raw hand at the game; she has never played in the main draw of the US Open before. Last year, ranked 208, she was beaten in the first round by Olga Danilovic.

Williams has now lost four Grand Slam finals in pursuit of 24 wins: Angelique Kerber defeated her at Wimbledon in 2018, Naomi Osaka defeated her in the last US Open and Simona Halep accounted for Williams at Wimbledon this year. In all those finals, Williams was unable to win more than four games in any set. And now Andreescu has sent her packing.

Williams appears to be obsessed with being the winner of the most Grand Slams before she quits the game. But after returning from maternity leave, she has shown an inability to cope with the pressure of a final. Her last win was at the Australian Open in 2017, when she beat her sister, Venus, 6-4, 6-4.

Unlike many other players, Williams is obsessed with herself. Not for her the low-profile attitude cultivated by the likes of Roger Federer or Steffi Graf. The German woman, who dominated tennis for many years, was a great example for others.

In 1988, Graf thrashed Russian Natasha Zvereva 6-0, 6-0 in the final of the French Open in 34 minutes – the shortest and most one-sided Grand Slam final on record. And Zvereva had beaten the great Martina Navratilova en route to the final!

Yet Graf was low-key at the presentation. She did not lord it over Zvereva, who was in tears, nor did she indulge in triumphalism. One shudders to think of the way Williams would have carried on in such a situation. Graf was graciousness personified.

Williams is precisely the opposite. When she wins, it is because she played well. And when she loses, it is all because she did not play well. Her opponent only gets some reluctant praise.

It is time for Williams to do some serious soul-searching and consider whether it is time to bow out. This constant search for a 24th title — and I’m sure she will look for a 25th after that to be atop the winners’ list — is getting a little tiresome.

There is a time in life for everything, as the Biblical book of Ecclesiastes says. Williams has had a good run, but now her obsession with another win is getting on people's nerves. There is much more to women's tennis than Serena Williams – and it is time that she realised it as well and retired.

Planet DebianDirk Eddelbuettel: pinp 0.0.8: Bugfix

A new release of our pinp package is now on CRAN. pinp allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette

This release was spurred by one of those "CRAN package xyz" emails I received yesterday: processing of pinp-using vignettes was breaking at CRAN under the newest TeX Live release present on Debian testing as well as recent Fedora. The rticles package (which uses the PNAS style directly) apparently has a similar issue with PNAS.

Kurt was as usual extremely helpful in debugging, and we narrowed this down to an interaction with newer versions of the titlesec LaTeX package. So for now we did two things: upgrade our code reusing the PNAS class to the newest version of the PNAS class (as suggested by Norbert whom I also roped in), but also copying in an older version of titlesec.sty (plus a support file). In the meantime, we are also looking into titlesec directly as Javier offered help—all this was a really decent example of open source firing on all cylinders. It is refreshing.

Because of the move to a newer PNAS version (which seems to clearly help with the occasionally odd formatting of floating blocks near the document end) I may have trampled on earlier extension pull requests. I will reach out to the authors of the PRs to work towards a better process with cleaner diffs, a process I should probably have set up earlier.

The NEWS entry for this release follows.

Changes in pinp version 0.0.8 (2019-09-08)

  • Two erroneous 'Provides' were removed from the pinp class.

  • The upquote package is now used to get actual (non-fancy) quotes in verbatim mode (Dirk fixing #75)

  • The underlying PNAS style was updated to the most recent v1.44 version of 2018-05-06 to avoid issues with newer TeXLive (Dirk in #79 fixing #77 and #78)

  • The new PNAS code brings some changes, e.g. watermark is no longer an option, but typesetting of paragraphs seems greatly improved. We may have stomped on an existing behaviour; if you see something, please file an issue.

  • However, it also conflicts with the current texlive version of titlesec so for now we copy titlesec.sty (and a support file) in using a prior version, just like we do for pinp.cls and jss.bst.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianShirish Agarwal: Depression, Harappa, Space program

I was and had been depressed around the time the election results came out. Like many others, I was expecting that Congress would come into power, but it didn't. With that came one bad thing after another, whether in politics (Jammu and Kashmir, Assam), both of which from my POV are inhumane, not just on citizenship but simply on a human level. How people can behave like this with each other is beyond me. On the economic front, the less said the better. We are in the midst of a prolonged recession and don't see things turning out for the better any time soon. But as we have to come to terms with it and somehow live day-to-day, we are living. Because of the web, I came to know there are so many countries where such things are happening right now, whether it is Britain (Brexit), South Africa or Brazil. In fact, the West Papua situation is similar in many ways to what happened in Kashmir. Of course each region has its own complexities, but it can safely be said that such events are happening all over, and in every one of them, one way or another, 'The Other' is demonized.

One question I have often asked, and have had no clear answers to: if Germany had known that Israel would be as big and strong as it is now, would they have done what they did? Had they known that Einstein, a Jew, would go on to change the face of science? Would America have been as great without Einstein? I was flabbergasted when I saw 'The Red Sea Diving Resort', which is based on a real-life operation by Mossad, as shared in the pictures after the movie.

Even among such blackness, I do see some hope. One good thing has been the rise of independent media. While the mainstream media has become completely ridiculous and, instead of questioning the Government, is toeing its line, independent media is trying to do what mainstream media should have been doing all along. I wouldn't say much about this, otherwise the whole blog post would be about independent media in India only. Maybe some other day 🙂

Harappan Civilization

One of the more interesting things has been the gamification of evolution. There is a game called 'Ancestors, the Humankind Odyssey'. While sadly the game is only on the Epic Games Store, I have been following the gameplay as shared by GameRiotArmy. While almost all the people who are playing the game have divorced it from their personal beliefs because of the whole evolution and natural selection vs. creationism debate, the game itself feeds on the evolution and natural selection bits. The game is open-world in nature. The only quibble I have is that it should have started with the big bang, but then it probably would have been too long a game. I am sure that, for many people, the complete gameplay will run to at least 20-30 episodes.

The Harappan bit comes in with some findings that came onto Twitter. While looking into it, I saw this also. I think most of the libraries for it are already in Debian. The papers they are presenting can be found at this link for those interested. What is interesting is that the ancient DNA they found is supposed to be Dravidian. As can be seen from the Atlantic piece, it is pretty political in nature, hence the researchers are just trying to do their job. It does make for some interesting reading though.

Space, Chandrayaan 2 and Mars-sim

As far as space is concerned, it has been an eventful week. India crash-landed the Chandrayaan 2 lander. While it is too early to say exactly what went wrong, and we are waiting for the scientists to confirm, the mission came to the fore for the wrong reasons. The images with Mr. Modi and how he reacted before and after became the story rather than what Chandrayaan 2 will be doing. It also came to the fore that ISRO scientists' salaries have been cut, which is a saddening affair. I had already written before about how I spoke to some ISRO scientists about merchandise, and how they shared that merchandising only happens in Gujarat. It really seems sad.

The only thing we know to date is that we lost communications when the lander was two and a half kilometres above the surface of the moon. I do hope there are lots of sensors which captured data, but I also understand they can't fit many, due to problems like cross-talk as well as power issues. I do hope that the lander is able to communicate with the orbiter and soon starts on its wheels. Even if it does not, there is lots the orbiter will be able to do, as shared by this Twitter thread (I shared the unroll from threadreaderapp). Although I do hope it does start talking and takes baby steps.

As far as mars-sim is concerned, a game I am helping with in my spare time, it is going to take a lot of time. We are hoping Kotlin comes soon. I am thankful to the Java team, and hopefully the packages which are in NEW will come to the Debian archive soonish so we have Kotlin in Debian. I know this will help with the update to Gradle as well, which is the reason that Kotlin is coming in.

Planet DebianAndrew Cater: Chasing around installing CD images for Buster 10.1 ...

and having great fun, as ever, making a few mistakes and contributing mayhem and entropy to the CD release process. The Buster 10.1 point update was just released, thanks to RattusRattus, Sledge, Isy and Schweer (amongst others).

Waiting on the Stretch point release to try all over again… I'd much rather be in Cambridge, but hey, you can't have everything.

Planet DebianDebian GSoC Kotlin project blog: Beginning of the end.

Work done.

Hey all, since the last page of this post we have come a long way in packaging Kotlin 1.3.30. I am glad to announce that Kotlin 1.3.30's dependencies are completely packaged, and only refining work on intellij-community-java (which is the source package of the IntelliJ-related jars that Kotlin depends on) and on Kotlin itself remains.

I have roughly packaged Kotlin (the debian folder is pretty much done) and have pushed it here. Also, the bootstrap package can be found here.

The links to all the dependencies of Kotlin 1.3.30 can be found in my previous blog pages, but I'll list them here for the convenience of the reader.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps (DONE: here)
3.->intellij-core -> https://github.com/JetBrains/intellij-community/tree/183.5153 (DONE: here)
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 (DONE: Used guava-19 from libguava-java)
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml(DONE:here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.->platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
11.->util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
12.->platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
13.->extensions -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
14.->jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE)
15.->trove4j 1.x -> https://github.com/JetBrains/intellij-deps-trove4j (DONE)
16.->proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)
17.->io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE)
18.->jline 3.0.3 --> https://github.com/jline/jline3/tree/jline-3.3.1 (DONE)
19.->protobuf-2.6.1 in jdk8 (DONE)
20.->com.jcabi:jcabi-aether:1.0 -> the file that requires this is commented out;can be seen here and here
21.->org.sonatype.aether:aether-api:1.13.1 -> the file that requires this is commented out;can be seen here and here

Important Notes.

It should be noted that at this point in time, 8th September 2019, the Kotlin package only aims to package the jars generated by the ":dist" task of the Kotlin build scripts. This task builds the Kotlin home. So that's all we have; we don't have the kotlin-gradle-plugins or kotlinx or anything that isn't part of the Kotlin home.
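
(For reference, invoking a Gradle task named ":dist" upstream would look roughly like the line below; this is generic Gradle usage, not a statement about how the Debian package drives the build.)

$ ./gradlew :dist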

It can be noted that the Kotlin bootstrap package has the kotlin-gradle-plugin, kotlinx and kotlin-dsl jars. The eventual plan is to build kotlin-gradle-plugins and kotlinx from the Kotlin source itself, and to build kotlin-dsl from the Gradle source using Kotlin as a dependency for Gradle. After we do that we can get rid of the Kotlin bootstrap package.

It should also be noted that this Kotlin package, as of 8th September 2019, may not be perfect and might contain a ton of bugs. This is for two reasons: partly because I have ignored some code that depended on jcabi-aether (mentioned above with a link to the commits), and mostly because the platform-api.jar and platform-impl.jar from intellij-community-idea are not the same as their upstream counterparts, but rather the minimum files required to make Kotlin compile without errors. I did this because the full jars needed packaging of new dependencies, and at this time it didn't look like it was worth it.

Work left to be done.

Now I believe most of the building blocks of packaging Kotlin are done, and what's left is to remove this pesky bootstrap. I believe this can be counted as the completion of my GSoC (which officially ended on August 26). The tasks left are as follows:

Major Tasks.

  1. Make Kotlin build just using openjdk-11-jdk; now it builds with openjdk-8-jdk and openjdk-11-jdk.
  2. Build kotlin-gradle-plugins.
  3. Build kotlinx.
  4. Build kotlindsl from gradle.
  5. Do 2, 3 and 4 and make Kotlin build without the bootstrap.

Things that will help the Kotlin effort.

  1. refine intellij-community-idea and do its copyright file properly.
  2. import Kotlin 1.3.30 into a new debian-java-maintainers repository.
  3. move the Kotlin changes (now maintained as git commits) to quilt patches; one possible workflow is sketched after this list. Link to kotlin -> here.
  4. do Kotlin's copyright file.
  5. refine Kotlin.
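
A minimal sketch of one way to do item 3, assuming the packaging repository carries the changes as commits on top of an upstream tag (the tag name is hypothetical):

$ git format-patch --output-directory debian/patches upstream/1.3.30..HEAD
$ (cd debian/patches && ls *.patch > series)   # regenerate the quilt series file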

Authors Notes.

Hey guys, it's been a wonderful ride so far. I hope to keep doing this and maintain Kotlin in Debian. I am only a final year student and my career fair starts this October 17th 2019, so I have to prepare for coding interviews and start searching for jobs. So until late November 2019 I'll only be taking on the smaller tasks. Please note that I won't be doing it as fast as I used to up until now, since I am going to be a little busy during this period. I hope I can land a job that lets me keep doing this :).

I would love to take this section to thank _hc, ebourg, andrewsh and seamlik for helping and mentoring me through all this.

So if any of you want to help, please kindly take on any of these tasks.

!!NOTE: ping me if you want to build Kotlin on your system and are stuck!!

You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates.

,

Planet DebianDima Kogan: Are planes crashing any less than they used to?

Recently, I've been spending more of my hiking time looking for old plane crashes in the mountains. And I've been looking for data that helps me do that, for instance in the last post. A question that came up in conversation is: "are crashes getting more rare?" And since I now have several datasets at my disposal, I can very easily come up with a crude answer.

The last post describes how to map the available NTSB reports describing aviation incidents. I was only using the post-1982 reports in that project, but here let's also look at the older reports. Today I can download both from their site:

$ wget https://app.ntsb.gov/avdata/Access/avall.zip
$ unzip avall.zip    # <------- Post 1982

$ wget https://app.ntsb.gov/avdata/PRE1982.zip
$ unzip PRE1982.zip  # <------- Pre 1982

I import the relevant parts of each of these into sqlite:

$ ( mdb-schema avall.mdb sqlite -T events;
    echo "BEGIN;";
    mdb-export -I sqlite avall.mdb events;
    echo "COMMIT;";
  ) | sqlite3 post1982.sqlite

$ ( mdb-schema PRE1982.MDB sqlite -T tblFirstHalf;
    echo "BEGIN;";
    mdb-export -I sqlite PRE1982.MDB tblFirstHalf;
    echo "COMMIT;";
  ) | sqlite3 pre1982.sqlite

And then I pull out the incident dates, and make a histogram:

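# The dates look like "MM/DD/YY ..."; the perl filter below expands the 2-digit year (<40 becomes 20xx, otherwise 19xx)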
$ cat <(sqlite3 pre1982.sqlite 'select DATE_OCCURRENCE from tblFirstHalf') \
      <(sqlite3 post1982.sqlite 'select ev_date from events') |
  perl -pe 's{^../../(..) .*}{$1 + (($1<40)? 2000: 1900)}e'   |
  feedgnuplot --histo 0 --binwidth 1 --xmin 1960 --xlabel Year \
              --title 'NTSB-reported incident counts by year'

ntsb-histogram-by-year.svg

I guess by that metric everything is getting safer. This clearly just counts NTSB incidents, and I don't do any filtering by the severity of the incident (not all reports describe crashes), but it's close enough. The NTSB only deals with civilian incidents in the USA, and only after the early 1960s, it looks like. Any info about the military?

At one point I went through "Historic Aircraft Wrecks of Los Angeles County" by G. Pat Macha, and listed all the described incidents in that book. This histogram of that dataset looks like this:

macha-la-histogram-by-year.svg

Aaand there're a few internet resources that list out significant incidents in Southern California. For instance:

I visualize that dataset:

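# Pull 2- or 4-digit years out of the saved HTML pages; "unless $y==1910" drops a presumably-spurious match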
$ < [abc].htm perl -nE '/^ \s* 19(\d\d) | \d\d \s*(?:\s|-|\/)\s* \d\d \s*(?:\s|-|\/)\s* (\d\d)[^\d]/x || next; $y = 1900+($1 or $2); say $y unless $y==1910' |
  feedgnuplot --histo 0 --binwidth 5

carcomm-by-year.svg

So what did we learn? I guess overall crashes are becoming more rare. And there was a glut of military incidents in the 1940s and 1950s in Southern California (not surprising given all the military bases and aircraft construction facilities here at that time). And by one metric there were lots of incidents in the late 1970s/early 1980s, but they were much more interesting to this "carcomm" person than they were to Pat Macha.

CryptogramMassive iPhone Hack Targets Uyghurs

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google's Project Zero found a series of websites that have been using zero-day vulnerabilities to indiscriminately install malware on iPhones that would visit the site. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google's announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware against Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple's pushbacks.

Valerie AuroraWhy you shouldn’t trust people who support sexual predators

[CW: mention of child sexual abuse]

Should you trust people who support sexual predators? My answer is no. Here’s why:

Anyone who is ethically flexible enough to justify knowingly supporting a sexual predator is ethically flexible enough to justify harming the people who trust and support them.

This week’s news provides a useful case study.

After writing about how to avoid supporting sexual predators, I talked to some of the 250 people who signed a letter of support for Joi Ito to remain as head of MIT Media Lab. They signed this letter between August 26th and September 6th, when they were aware of the initial revelations that Ito and the ML had taken about $2 million from Jeffrey Epstein after his 2008 conviction for child sex offenses.

Here’s the dilemma these signatories were facing: Ito was powerful, and charming, and had inspired loyalty and support in them. The letter says, “We have experienced first-hand Joi’s integrity, and stand in testament to his overwhelmingly positive influence on our lives—and sincerely hope he remains our visionary director for many years to come.” When given evidence that Ito had knowingly supported a convicted serial child rapist, they chose to believe that there was some as-yet unknown explanation which would square with their image of Ito as a person of integrity and ethics. Others viewed taking Epstein’s money as some kind of moral imperative: the money was available, they could do good with it, no one was preventing them from taking it. They denied that Epstein accrued any advantage from the donations. Finally, many of the signatories also depend on Ito for a living; after all, as Upton Sinclair says, it is difficult to get a person to understand something when their salary depends upon their not understanding it.

These 250 people expected their public pledge of loyalty to be rewarded. Instead, on September 6th, we all learned that Ito and other ML staff had been deliberately covering up Epstein’s role in about $8 million in donations to the ML, in contravention of MIT’s explicit disqualification of Epstein as a donor. The article is filled with horrifying details, but most damning of all: Epstein visited the ML in 2015 to meet with Ito in person (a privilege accorded to him for his financial support). The women on the ML staff offered to help two extremely young women accompanying Epstein escape, fearing they were trafficked.

Ito knew Epstein was almost certainly still committing rape after 2008.

Needless to say, this not what the signatories of the letter of support expected. Less than 24 hours after this news broke, the number of signatories had dropped from 250 to 228, and this disclaimer was added: “This petition was drafted by students on August 26th, 2019, and signed by members of the broader Media Lab community in the days that followed, to show their support for Joi and his apology. Given when community members added their names to this petition, their signatures should not be read as continued support of Joi staying on as Media Lab Director following the most recent revelations in the September 6th New Yorker article by Ronan Farrow.”

What happened? This is a phenomenon I’ve seen before, from my time working in the Linux kernel community. It’s this: Every nasty horror show of an abuser is surrounded by a ring of charming enablers who mediate between the abuser and the rest of the world. They make the abuser’s actions more palatable, smooth over the disagreements, invent explanations: the abuser can’t help it, the abuser needs help, the abuser is doing more good than harm, the abuse isn’t real abuse, we’ll always have an abuser so might as well stick with the abuser we know, etc. And around the immediate circle of enablers is a wider circle of dozens and hundreds of kind, trusting, supportive people who believe, in spite of all the evidence, that keeping the abuser and their enablers in power is ethically justified, in some way they aren’t privileged to understand. They don’t fully understand why, but they trust the people in power and keep working on faith.

That first level of charming enabler surrounding the abuser is doing that work with full knowledge of how terrible the abuser is, and they are rationalizing their decision in some way. It might be pure self-interest, it might be in service of some supposed greater goal, it might be a deep psychological need to believe that the abuser can be reformed. Whatever it is, it is a rationalization, and they are daily acting in a way that the surrounding circle of kind, trusting people would consider wildly unethical.

Here’s the key: you can’t trust anyone in that inner circle of enablers. They are people who are ethically flexible enough to rationalize supporting an abuser. They can easily rationalize screwing over the kind people who trust them, as Ito did with the 250 signatories of a letter that said, “We are here for you, we support you, we will forever be grateful for your impact on our lives.” His supporters are finding out the hard way that this kind of devotion and love is only one-way.

I am lucky enough to be in a position where I can refuse to knowingly support sexual predators. I also refuse to associate with people who support sexual predators because I know I can’t trust them to act ethically. I encourage you to join me.

Planet DebianAndreas Metzler: exim update

Testing users might want to manually pull the latest (4.92.1-3) upload of Exim from sid instead of waiting for regular migration to testing. It fixes a nasty vulnerability.

,

CryptogramFriday Squid Blogging: Squid Perfume

It's not perfume for squids. Nor is it perfume made from squids. It's a perfume called Squid, "inspired by life in the sea."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianIustin Pop: Nationalpark Bike Marathon 2019

This is a longer story… but I think it’s interesting, nonetheless.

The previous 6 months

Due to my long-running foot injury, my training was sporadic at the end of 2018, and by February I realised I would have to stop most if not all training in order to have a chance of recovery. I knew that meant no bike races this year, and I was fine with that. Well, I had to be.

The only compromise was that I wanted to do one race, the NBM short route, since that is really easy (both technique and endurance), and even untrained should be fine.

So my fitness (well, CTL) went down and down and down, and my weight didn’t improve either.

As April and May went by, my foot was getting better, but training on the indoor trainer was still causing problems and moving my recovery backwards, so easy times.

By June things were looking better; I was even able to do a couple of slow runs! July started even better, but trainer sessions were still a no-go. Only in early August could I reliably do a short trainer session without issues. The good side was that since around June I could bike to work and back without problems, but that's a short commute.

But, I felt confident I could do ~50Km on a bike with some uphill, so I registered for the race.

In August I could also restart trainer sessions, and to my pleasant surprise, even harder ones. So, I started preparing for the race in the last 2 weeks before it :( Well, better than nothing.

Overall, my CTL went from 62-65 in August 2018, to 6 (six!!) in early May, and started increasing in June, reaching a peak of 23 on the day before the race. That's almost three times lower… so my expectations for the race were low. Especially as the longest ride I did in these 6 months was one hour long or so, whereas the race is double that.

The race week

Things were going quite well. I also started doing some of the Zwift Academy workouts, more precisely 1 and 2 which are at low power, and everything good.

On Wednesday however, I did workout number 3, which has two “as hard as possible” intervals. Which are exactly the ones that my foot can’t yet do, so it caused some pain, and some concern about the race.

Then we went to Scuol, and I didn’t feel very well on Thursday as I was driving. I thought some exercise would help, so I went for a short run, which reminded me I made my foot worse the previous day, and was even more concerned.

On Friday morning, instead of better, I felt terrible. All I wanted was to go back to bed and sleep the whole day, but I knew that would mean no race tomorrow. I thought maybe some light walking would be better for me than lying in bed… At least I didn't have a runny nose or coughing, but this definitely felt like a cold.

We went up with the gondola, walked ~2km, got back down, and I was feeling at least not worse. All this time, I was overdressed and feeling cold, while everybody else was in t-shirts.

A bit of rest in the afternoon helped; I went and picked up my race number and felt better. After dinner, as I was preparing my stuff for the next morning, I started feeling a bit better about doing the race. "Did not start" was now out of the question, but whether it would be a DNF was not clear yet.

Race (day)

Thankfully the race doesn’t start early for this specific route, so the morning was relatively relaxed. But then of course I was late a few minutes, so I hurried on my bike to the train station, only to realise I’m among the early people. Loading bike, get on the bus (the train station in Scuol is off-line for repairs), long bus ride to start point, and then… 2 hours of waiting. And to think I thought I’m 5 minutes late :)

I spent the two hours just reading email and browsing the internet (and posting a selfie on FB), and then finally it was on.

And I was surprised how “on” the race was from the first 100 meters. Despite repeated announcements in those two hours that the first 2-3 km do not matter since they’re through the S-chanf village, people started going very hard as soon as there was some space.

So I find myself going 40km/h (forty!!!) on a mountain bike on a relatively flat gravel road. This sounds unbelievable, right? But the data says:

  • race started at 1’660m altitude
  • after the first 4.4km, I was at 1’650m, with a total altitude gain of 37m (and thus a total descent of 47m); thus, not flat-flat, but not downhill
  • over this 4.4km, my average speed was 32.5km/h, and that includes starting from zero speed, and in the block (average speed for the first minute was 20km/h)

While 32.5km/h on an MTB sounds cool, the sad part was that I knew this was unsustainable, both from the pure speed point of view and from the heart rate point of view. I was already at 148bpm after 2½ minutes, but then at minute 6 it went over 160bpm and stayed that way. That is above my LTHR (estimated by various Garmin devices), so I was dipping into reserves. VeloViewer estimates power output at 300-370W in these initial minutes, which is again above my FTP, so…

But, it was fun. Then at km 4.5 came a bit of a climb (800m long, 50m altitude, ~6.3%), after which it became mostly flowing riding on gravel. And for the next hour, until the single long climb (towards Guarda), it was the best ride I had this year, and one of the best segments in races in general. Yes, there are a few short climbs here and there (e.g. a 10% one over ~700m, another ~11% one over 300m or so), but in general it's a slowly descending route from ~1700m altitude to almost 1400m (plus add in another ~120m gained), so ~420m descent over ~22km. This means, despite the short climbs, average speed is still good: a bit more than 25km/h, which made this a very, very nice segment. No foot pain, no exertion, mean heart rate 152bpm, which is fine. Estimated power is a bit high (mean 231W, NP: 271W ← this is strange, too high); I'd really like to have a power meter on my MTB as well.

Then, after about an hour, the climb towards Guarda starts. It’s an easy climb for a fit person, but as I said I was worse for fitness this year, and also my weight was not good. Data for this segment:

  • it took me 33m:48s
  • 281m altitude gained
  • 4.7km length
  • mean HR: 145bpm
  • mean cadence: 75rpm

I remember stopping to drink once, and maybe another time to rest for about half a minute, but not sure. I stopped in total 33s during this half hour.

Then slowly descending on nice roads towards the next small climb to Ftan, then another short climb (thankfully, I was pretty dead at this point) of 780m distance, 7m36s including almost a minute stop, then down, another tiny climb, then down for the finish.

At the finish, knowing that there’s a final climb after you descend into Scuol itself and before the finish, I gathered all my reserves to do the climb standing. Alas, it was a bit longer than I thought; I think I managed to do 75-80% of it standing, but then sat down. Nevertheless, a good short climb:

  • 22m altitude over 245m distance, done in 1m02s
  • mean grade 8.8%, max grade 13.9%
  • mean HR 161bpm, max HR 167bpm which actually was my max for this race
  • mean speed 14.0km/h
  • estimated mean power 433W, NP: 499W; seems optimistic, but I'll take it :)

Not bad, not bad. I was pretty happy about being able to push this hard, for an entire minute, at the end of the race. Yay for 1m power?

And obligatory picture, which also shows the grade pretty well:

Final climb! And beating my PR by ~9%

I don’t know how the photographer managed to do it, but having those other people in the picture makes it look much better :)

Comparison with last year

Let’s first look at official race results:

  • 2018: 2h11m37s
  • 2019: 2h22m13s

That’s 8% slower. Honestly, I thought I will do much worse, given my lack of training. Or does a 2.5× lower CTL only result in 8% time loss?

Honestly, I don’t think so. I think what saved me this year was that—since I couldn’t do bike rides—I did much more cross-train as in core exercises. Nothing fancy, just planks, push-ups, yoga, etc. but it helped significantly. If my foot will be fine and I can do both for next year, I’ll be in a much better position.

And this is why the sub-title of this post is “Fitness has many meanings”. I really need to diversify my training in general, but I was thinking in a somewhat theoretical way about it; this race showed it quite practically.

If I look at Strava data, it gives an even more clear picture:

  • on the 1-hour-long flat segment I was talking about, which I really loved, I got a PR, beating the previous year by 1 minute; Strava estimates 250W for this hour, which is what my FTP was last year;
  • on all the climbs, I was slower than last year, as expected, but on the longer climbs significantly so; and I was many times slower than even 2016, when I did the next longer route.

And I just realised: of the 10½m I took longer this year, 6½m were lost on the Guarda climb :)

So yes, you can’t discount fitness, but leg fitness is not everything, and Training Peaks it seems can’t show overall fitness.

At least I did beat my PR on the finishing climb (1m12s vs. 1m19s last year), because I had left aside those final reserves for it.

Next steps

Assuming I’m successful at dealing my foot issue, and that early next year I can restart consistent training, I’m not concerned. I need to put in regular session, I also need to put in long sessions. The success story here is clear, it all depends on willpower.

Oh, and losing ~10kg of fat wouldn’t be bad, like at all.

Cory DoctorowTalking RADICALIZED and MAKERS on Writers Voice

The Writers Voice podcast just published their interview with me about Radicalized; as a bonus, they include my decade-old interview about Makers in the recording!

MP3

CryptogramThe Doghouse: Crown Sterling

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious -- and amusing -- examples of cryptographic "snake oil."

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells "TIME AI," "the world's first dynamic 'non-factor' based quantum AI encryption software," "utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs." Those sentence fragments tick three of my snake-oil warning signs -- from 1999! -- right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: "In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University's www.arXiv.org titled: 'Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.' The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today's current encryption framework by enabling the accurate prediction of prime numbers." Note the attempt to leverage Cornell's reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what's right isn't new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. "The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys." Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I'm too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: "That screams 'snake oil.' Bet you a gazillion dollars they have absolutely nothing of value -- and that none of their tech people have any cryptography expertise." But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back -- LinkedIn suggests he left in April -- and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: "The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?" I didn't see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here's a short video from the company, if you want to laugh along with everyone else at terms like "infinite wave conjugations" and "quantum AI encryption." Or you can read the company's press release about what happened at Black Hat, or Grant's Twitter feed.)

Grant has no cryptographic credentials. His bio -- on the website of something called the "Resonance Science Foundation" -- is all over the place: "He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry."

Grant's bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: "He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world's first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi." (Quasi-primes are real, and they're not new. They're numbers with only large prime factors, like RSA moduli.)

Near as I can tell, Grant's coauthor is the mathematician of the company: "Talal Ghannam -- a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine." Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: "Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green -- a musician who was 'musical director for Davy Jones of The Monkees' -- was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare."

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn't count.)

After the Black Hat talk, Grant -- and maybe some of those others -- sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allows them to break RSA. They're not publishing their results "because Crown Sterling's team felt it would be irresponsible to disclose discoveries that would break encryption." (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have "some very, very strong advisors to the company" who are "experts in the field of cryptography, truly experts." The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run, far, far, away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it's litigious as well. It has sued ten unnamed "John Doe" defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I'm sure it would think it'd result in all sorts of positive press coverage. (Although any press is good press, so maybe it's right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Planet DebianReproducible Builds: Reproducible Builds in August 2019

Welcome to the August 2019 report from the Reproducible Builds project!

In these monthly reports we outline the most important things that have happened in the world of Reproducible Builds and what we have been up to.

As a quick recap of our project: whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed to end users or systems as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by ensuring identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was changed or even compromised.
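
As a minimal, concrete sketch of the idea (the package name and paths are illustrative):

$ debuild -b -uc -us && sha256sum ../example_1.0-1_amd64.deb
$ debuild -b -uc -us && sha256sum ../example_1.0-1_amd64.deb   # second build, ideally in a varied environment
# If the two checksums differ, a tool such as diffoscope (covered below) can pinpoint exactly where.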

In this month's report, we cover:

  • Media coverage & events: Webmin, CCCamp, etc.
  • Distribution work: the first fully-reproducible package sets, openSUSE update, etc.
  • Upstream news: libfaketime updates, gzip, ensuring good definitions, etc.
  • Software development: more work on diffoscope, new variations in our testing framework, etc.
  • Misc news: from our mailing list, etc.
  • Getting in touch: how to contribute, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage & events

A backdoor was found in Webmin, a popular web-based application used by sysadmins to remotely manage Unix-based systems. Whilst more details can be found on upstream's dedicated exploit page, it appears that the build toolchain was compromised. Especially of note is that the exploit "did not show up in any Git diffs" and thus would not have been found via an audit of the source code. The backdoor would allow a remote attacker to execute arbitrary commands with superuser privileges on the machine running Webmin. Once a machine is compromised, an attacker could then use it to launch attacks on other systems managed through Webmin or indeed any other connected system. Techniques such as reproducible builds can help detect exactly these kinds of attacks that can lay dormant for years. (LWN comments)

In a talk titled There and Back Again, Reproducibly!, Holger Levsen and Vagrant Cascadian presented on Reproducible Builds at the 2019 edition of the Linux Developer Conference in São Paulo, Brazil.

LWN posted and hosted an interesting summary and discussion on Hardening the file utility for Debian. In July, Chris Lamb had cross-posted his reply to the "Re: file(1) now with seccomp support enabled" thread, originally started on the debian-devel mailing list. In this post, Chris refers to our strip-nondeterminism tool not being able to accommodate the additional security hardening in file(1), and the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

The Chaos Communication Camp — an international, five-day open-air event for hackers that provides a relaxed atmosphere for free exchange of technical, social, and political ideas — hosted its 2019 edition, where there were many discussions and meet-ups at least partly related to Reproducible Builds. This included the titular Reproducible Builds Meetup session, which was attended by around twenty-five people, half of whom were new to the project, as well as a session dedicated to all Arch Linux related issues.


Distribution work

In Debian, the first "package sets" — ie. defined subsets of the entire archive — have become 100% reproducible, including the so-called "essential" set for the bullseye distribution on the amd64 and armhf architectures. This is thanks to work by Chris Lamb on bash, readline and other low-level libraries and tools. Perl still has issues on i386 and arm64, however.

Dmitry Shachnev filed a bug report against the debhelper utility that speaks to issues around using the date from the debian/changelog file as the source for the SOURCE_DATE_EPOCH environment variable, as this can lead to non-intuitive results when a package is automatically rebuilt via a so-called binary (NB. not "source") NMU. A related issue was later filed against qtbase5-dev by Helmut Grohne, as this exact issue led to a problem with co-installability across architectures.
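
For background, the changelog-derived value can be reproduced by hand with standard tooling (a sketch, run from an unpacked source package with a reasonably recent dpkg):

$ dpkg-parsechangelog --show-field Timestamp
$ export SOURCE_DATE_EPOCH="$(dpkg-parsechangelog --show-field Timestamp)"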

Lastly, 115 reviews of Debian packages were added, 45 were updated and 244 were removed this month, appreciably adding to our knowledge about identified issues. Many issue types were updated by Chris Lamb, including embeds_build_data_via_node_preamble, embeds_build_data_via_node_rollup, captures_build_path_in_beam_cma_cmt_files, captures_varying_number_of_build_path_directory_components (discussed later), timezone_specific_files_due_to_haskell_devscripts, etc.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. New issues were found from enabling Link Time Optimization (LTO) in this distribution’s Tumbleweed branch. This affected, for example, nvme-cli as well as perl-XML-Parser and pcc with packaging issues.


Upstream news


Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. In August we wrote a large number of such patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb made the following changes:

  • Improvements:
    • Don't fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section in an ELF binary. For example, when the .frames section is of the NOBITS type its contents are apparently "unreliable" and thus readelf(1) returns 1. (#58, #931962)
    • Include either standard error or standard output (not just the latter) when an external command fails. []
  • Bug fixes:
    • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
    • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. []
    • Correct a reference to parser.diff; diff in this context is a Python function in the module. []
    • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. []
  • Testsuite improvements:

    • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. []
    • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
    • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. []
  • Improve debugging:
    • Add the containing module name to the (eg.) “Using StaticLibFile for ...” debugging messages. []
    • Strip off trailing “original size modulo 2^32 671” (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
    • Avoid a lack of space between “... with return code 1” and “Standard output”. []
    • Improve debugging output when instantiating our Comparator object types. []
    • Add a literal “eg.” to the comment on stripping “original size modulo...” text to emphasise that the actual numbers are not fixed. []
  • Internal code improvements:
    • No need to parse the section group from the class name; we can pass it via type built-in kwargs argument. []
    • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as “no” difference. []
    • Simplify parsing of optional command_args argument to Difference.from_command_exc. []
    • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. []
    • Reposition a comment regarding an exception within the indented block to match Python code convention. []

In addition, Mattia Rizzolo made the following changes:

  • Now that we install wabt, expect its tools to be available. []
  • Bump the Debian backport check. []

Lastly, Vagrant Cascadian updated diffoscope to versions 120, 121 and 122 in the GNU Guix distribution.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Chris Lamb made the following changes.

  • Add support for enabling and disabling specific normalizers via the command line. (#10)
  • Drop accidentally-committed warning emitted on every fixture-based test. []
  • Reintroduce the .ar normalizer [] but disable it by default so that it can be enabled with --normalizers=+ar or similar. (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. []

In addition, there was some movement on an issue, originally filed in 2016 by Andrew Ayer, in the Archive::Zip Perl module that strip-nondeterminism uses, regarding its lack of support for bzip compression.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

This month Vagrant Cascadian suggested, and subsequently implemented, that we additionally test build directories of varying string lengths (eg. /path/to/123 vs /path/to/123456), also varying the number of directory components (eg. /path/to/dir vs. /path/to/parent/subdir). Curiously, whilst it was a priori believed that this was rather unlikely to yield differences, Chris Lamb has managed to identify approximately twenty packages that are affected by this issue.
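As a toy illustration of the class of problem this variation is designed to catch (our own hedged sketch, not code from any affected package), consider a build step that captures its working directory into the artifact it produces:

import java.io.FileWriter;
import java.io.IOException;

// A "build step" that leaks its build directory into its output: building
// identical sources in /path/to/123 vs /path/to/123456 yields different
// artifacts, which is exactly what the varied-build-path tests expose.
public class EmbedBuildPath {
    public static void main(String[] args) throws IOException {
        try (FileWriter out = new FileWriter("build-info.txt")) {
            // user.dir is the JVM's current working directory, i.e. the build dir.
            out.write("build-dir: " + System.getProperty("user.dir") + "\n");
        }
    }
}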

It was also noticed that our testing of the Coreboot free software firmware has failed to build the toolchain since we switched to building on the Debian buster distribution. The last successful build was on August 7th; all newer builds have failed.

In addition, the following code changes were performed in the last month:

  • Chris Lamb: Ensure that the size of the log for the second build in HTML pages was also correctly formatted (eg. “12KB” vs “12345”). []

  • Holger Levsen:

  • Mathieu Parent: Update the contact details for the Debian PHP Group. []

  • Mattia Rizzolo:

The usual node maintenance was performed by Holger Levsen [][] and Vagrant Cascadian [].


Misc news

Yet more effort was put into our website this month, including misc copyediting by Chris Lamb [], Mathieu Parent referencing his fix for php-pear [] and Vagrant Cascadian updating a link to his homepage [].

On our mailing list this month, Santiago Torres Arias started a “Setting up a MS-hosted rebuilder with in-toto metadata” thread regarding Microsoft’s interest in setting up a rebuilder for Debian packages, touching on issues of transparency logs and the integration of in-toto by the Secure Systems Lab at New York University. In addition, Lars Wirzenius continued a conversation regarding various questions about reproducible builds and their bearing on building a distributed continuous integration system.

Lastly, in a thread titled “Reproducible Builds technical introduction tutorial”, Jathan asked whether anyone had some “easy” Reproducible Builds tutorials in slides, video or written document format.


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. Alternatively, you can get in touch with us via:



This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa, Mathieu Parent and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Worse Than FailureError'd: Does Your Child Say "WTF" at Home?

Abby wrote, "I'm tempted to tell the school that my child mostly speaks Sanskrit."

 

"First of all, I have 58,199 rewards points, so I'm a little bit past joining, second, I'm pretty sure Bing Rewards was rebranded as Microsoft Rewards a while ago, and third...SERPBubbleXL...wat?" writes Zander.

 

"I guess, for T-Mobile, time really is money," Greg writes.

 

Hans K. wrote, "I guess it's sort of fitting, but in a quiz about Generics in Java, I was left a little bit confused."

 

"Wait, so if I do, um, nothing, am I allowed to make further changes or any new appointment?" Jeff K. writes.

 

Soumya wrote, "Yeah...I'm not a big fan of Starbucks' reward program..."

 


,

LongNowWhat a Prehistoric Monument Reveals about the Value of Maintenance

Members of Long Now London chalking the White Horse of Uffington, a 3000-year-old prehistoric hill figure in England. Photo by Peter Landers.

Imagine, if you will, that you could travel back in time three thousand years to the late Bronze Age, with a bird’s eye view of a hill near the present-day village of Uffington, in Oxfordshire, England. From that vantage, you’d see the unmistakable outlines of a white horse etched into the hillside. It is enormous — roughly the size of a football field — and visible from 20 miles away.

Now, fast forward. Bounding through the millennia, you’d see groups of people arrive from nearby villages at regular intervals, making their way up the hill to partake in good old fashioned maintenance. Using hammers and buckets of chalk, they scour the hillside to ensure the giant pictogram is not obscured. Without this regular maintenance, the hill figure would not last more than twenty years before becoming entirely eroded and overgrown. After the work is done, a festival is held.

Entire civilizations rise and fall. The White Horse of Uffington remains. Scribes and historians make occasional note of the hill figure, such as in the Welsh Red Book of Hergest in 01382 (“Near to the town of Abinton there is a mountain with a figure of a stallion upon it, and it is white. Nothing grows upon it.”) or by the Oxford archivist Francis Wise in 01736 (“The ceremony of scouring the Horse, from time immemorial, has been solemnized by a numerous concourse of people from all the villages roundabout.”). Easily recognizable by air, the horse is temporarily hidden by turf during World War II to confuse Luftwaffe pilots during bombing raids. Today, the National Trust preserves the site, overseeing a regular act of maintenance 3,000 years in the making.

Long Now London chalking the White Horse. Photo by Peter Landers.

Earlier this summer, members of Long Now London took a field trip to Uffington to participate in the time-honored ceremony. Christopher Daniel, the lead organizer of Long Now London, says the idea to chalk the White Horse came from a conversation with Sarah Davis of Longplayer about the maintenance of art, places and meaning across generations and millennia.

“Sitting there, performing the same task as people in 01819, 00819 and around 800 BCE, it is hard not to consider the types and quantities of meaning and ceremony that may have been attached to those actions in those times,” Daniel says.

The White Horse of Uffington in 01937. Photo by Paul Nash.

Researchers still do not know why the horse was made. Archaeologist David Miles, who was able to date the horse to the late Bronze Age using a technique called optical stimulated luminescence, told The Smithsonian that the figure of the horse might be related to early Celtic art, where horses are depicted pulling the chariot of the sun across the sky. From the bottom of the Uffington hill, the sun appears to rise behind the horse.

“From the start the horse would have required regular upkeep to stay visible,” Emily Cleaver writes in The Smithsonian. “It might seem strange that the horse’s creators chose such an unstable form for their monument, but archaeologists believe this could have been intentional. A chalk hill figure requires a social group to maintain it, and it could be that today’s cleaning is an echo of an early ritual gathering that was part of the horse’s original function.”

In her lecture at Long Now earlier this summer, Monica L. Smith, an archaeologist at UCLA, highlighted the importance of ritual sites like Stonehenge and Göbekli Tepe in the eventual formation of cities.

“The first move towards getting people into larger and larger groups was probably something that was a ritual impetus,” she said. “The idea of coming together and gathering with a bunch of strangers was something that is evident in the earliest physical ritual structures that we have in the world today.”

Photo by Peter Landers.

For Christopher Daniel, the visit to Uffington underscored that there are different approaches to making things last. “The White Horse requires rather more regular maintenance than somewhere like Stonehenge,” he said. “But thankfully the required techniques and materials are smaller, simpler and much closer to hand.”

Though it requires considerably fewer resources to maintain, and is more symbolic than functional, the Uffington White Horse nonetheless offers a lesson in maintaining the infrastructure of cities today. “As humans, we are historically biased against maintenance,” Smith said in her Long Now lecture. “And yet that is exactly what infrastructure needs.”

The Golden Gate Bridge in San Francisco. Photo by Rich Niewiroski Jr.

When infrastructure becomes symbolic to a built environment, it is more likely to be maintained. Smith gave the example of San Francisco’s Golden Gate Bridge to illustrate this point. Much like the White Horse, the Golden Gate Bridge undergoes a willing and regular form of maintenance. “Somewhere between five to ten thousand gallons of paint a year, and thirty painters, are dedicated to keeping the Golden Gate Bridge golden,” Smith said.

Photos by Peter Landers.

For members of Long Now London, chalking the White Horse revealed that participating in acts of maintenance can be deeply meaningful. “It felt at once both quite ordinary and utterly sublime,” Daniel said. “The physical activity itself is in many ways straightforward. It is the context and history that elevate those actions into what we found to be a profound experience. It was also interesting to realize that on some level it does not matter why we do this. What matters most is that it is done.”

Daniel hopes Long Now London will carry out this “secular pilgrimage” every year. 

“Many of the oldest protected routes across Europe are routes of pilgrimage,” he says. “They were stamped out over centuries by people carrying or searching for meaning. I want the horse chalking to carry meaning across both time and space. If even just a few of us go to the horse each year with this intent, it becomes a tradition. Once something becomes a tradition, it attracts meaning, year by year, generation by generation. On this first visit to the horse, one member brought his kids. A couple of other members said they want to bring theirs in the future. This relatively simple act becomes something we do together—something we remember as much for the communal spirit as for the activity itself. In so doing, we layer new meaning onto old as we bash new chalk into old.”


Learn More

Worse Than FailureCodeSOD: Give Your Date a Workout

Bob E inherited a site which helps amateur sports clubs plan their recurring workouts/practices during the season. To do this, given the start date of the season, and the number of weeks, it needs to figure out all of the days in that range.

function GenWorkoutDates()
{
   global $SeasonStartDate, $WorkoutDate, $WeeksInSeason;

   $TempDate = explode("/", $SeasonStartDate);

   for ($i = 1; $i <= $WeeksInSeason; $i++)
   {
     for ($j = 1; $j <= 7; $j++)
     {
       $MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

       $WorkoutDate[$i][$j] = $MonthName . " " . $TempDate[1] . "  ";
       $TempDate[1] += 1;

       switch ( $TempDate[0] )
       {
         case 9:
         case 4:
         case 6:
         case 11:
           $DaysInMonth = 30;
           break;

         case 2:
           $DaysInMonth = 28;

           switch ( $TempDate[2] )
           {
             case 2012:
             case 2016:
             case 2020:
               $DaysInMonth = 29;
               break;

             default:
               $DaysInMonth = 28;
               break;
           }

           break;

         default:
           $DaysInMonth = 31;
           break;
       }

       if ($TempDate[1] > $DaysInMonth)
       {
         $TempDate[1] = 1;
         $TempDate[0] += 1;
         if ($TempDate[0] > 12)
         {
           $TempDate[0] = 1;
           $TempDate[2] += 1;
         }
       }
     }
   }
}

I do enjoy that PHP’s string-splitting function is called explode. That’s not a WTF. More functions should be called explode.

This method of figuring out the month name, though:

$MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

I want to hate it, but I’m impressed with it.

From there, we have lovely hard-coded leap years, the “Thirty days has September…” poem implemented as a switch statement, and then that lovely rollover calculation for the end of a month (and the end of the year).

“I’m not a PHP developer,” Bob writes. “But I know how to use Google.” After some googling, he replaced this block of code with a 6-line version that uses built-in date handling functions.
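We never see Bob’s actual six lines, so here, purely as an illustration, is the general shape of such a fix sketched in Java (the start date and season length are hypothetical): lean on the standard library’s date handling and let it worry about month lengths and leap years.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// A hedged sketch of the general approach, not Bob's actual fix.
public class GenWorkoutDates {
    public static void main(String[] args) {
        LocalDate seasonStart = LocalDate.of(2019, 9, 2); // hypothetical season start
        int weeksInSeason = 12;                           // hypothetical length
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("MMM d");

        for (int day = 0; day < weeksInSeason * 7; day++) {
            // plusDays() rolls over months and years, leap days included.
            System.out.println(seasonStart.plusDays(day).format(fmt));
        }
    }
}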


,

Cory DoctorowCritical essays (including mine) discuss Toronto’s plan to let Google build a surveillance-based “smart city” along its waterfront

Sidewalk Labs is Google’s sister company that sells “smart city” technology; its showcase partner is Toronto, my hometown, where it has made a creepy shitshow out of its freshman outing, from the mass resignations of its privacy advisors to the underhanded way it snuck in the right to take over most of the lakeshore without further consultations (something the company straight up lied about after they were outed). Unsurprisingly, the city, the province, the country, and the company are all being sued over the plan.

Toronto Life has run a great, large package of short essays by proponents and critics of the project, from Sidewalk Labs CEO Dan Doctoroff (no, really, that’s his name) to former privacy commissioner Ann Cavoukian (who evinces an unfortunate belief in data-deidentification) to city councillor and former Greenpeace campaigner Gord Perks to urban guru Richard Florida to me.

I wrote about the prospect that a city could be organized around the principle that people are sensors, not things to be sensed — that is, imagine an internet of things that doesn’t relegate the humans it notionally serves to the status of “thing.”

Our cities are necessarily complex, and they benefit from sensing and control. From census tracts to John Snow’s 19th-century map of central London cholera infections, we have been gathering telemetry on the performance of our cities in order to tune and optimize them for hundreds of years. As cities advance, they demand ever-higher degrees of sensing and actuating. But smart cities have to be built by cities themselves, democratically controlled and publicly owned. Reinventing company towns with high-tech fillips is not a path to a brighter future. It’s a new form of digital feudalism.

Humans are excellent sensors. We’re spectacular at deciding what we want for dinner, which seat on the subway we prefer, which restaurants we’re likely to enjoy and which strangers we want to talk to at parties. What if people were the things that smart cities were designed to serve, rather than the data that smart cities lived to process? Here’s how that could work. Imagine someone ripped all the surveillance out of Android and all the anti-user controls out of iOS and left behind nothing on your phone but code that serves you, not manufacturers or advertisers. It could still collect data—where you are, who you talk to, what you say—but it would be a roach motel for that data, which would check in to your device but not check out. It wouldn’t be available to third parties without your ongoing consent.

A phone that knows about you—but doesn’t tell anyone what it knows about you—would be your interface to a better smart city. The city’s systems could stream data to your device, which could pick the relevant elements out of the torrent: the nearest public restroom, whether the next bus has a seat for you, where to get a great sandwich.


A smart city should serve its users, not mine their data
[Cory Doctorow/Toronto Life]

The Sidewalk Wars [Toronto Life]

(Image: Cryteria, CC-BY, modified)

Planet DebianTim Retout: PA Consulting

In early October, I will be saying goodbye to my colleagues at CV-Library after 7.5 years, and joining PA Consulting in London as a Principal Consultant.

Over the course of my time at CV-Library I have got married, had a child, and moved from Southampton to Bedford. I am happy to have played a part in the growth of CV-Library as a leading recruitment brand in the UK, especially helping to make the site more reliable - I can tell more than a few war stories.

Most of all I will remember the people. I still have much to learn about management, but working with such an excellent team, the years passed very quickly. I am grateful to everyone, and wish them all every future success.

CryptogramCredit Card Privacy

Good article in the Washington Post on all the surveillance associated with credit card use.

Worse Than FailureCodeSOD: UnINTentional Errors

Data type conversions are one of those areas where we have rich, well-supported, well-documented features built into most languages. Thus, we also have endless attempts for people to re-implement them. Or worse, wrap a built-in method in a way which makes everything less clear.

Mindy encountered this.

/* For converting (KEY_MSG_INPUT) to int format. */
public static int numberToIntFormat(String value) {
  int returnValue = -1;    	
  if (!StringUtil.isNullOrEmpty(value)) {
    try {
      int temp = Integer.parseInt(value);
      if (temp > 0) {
        returnValue = temp;
      }
    } catch (NumberFormatException e) {}
  }    	
  return returnValue;
}

The isNullOrEmpty check is arguably pointless here. Any invalid input string, including null or empty ones, would cause parseInt to throw a NumberFormatException, which we’re already catching. Of course, we’re catching and ignoring it.

That’s assuming that StringUtil.isNullOrEmpty does what we think it does, since while there are third party Java libraries that offer that functionality, it’s not a built-in class (and do we really think the culprit here was using libraries?). Who knows what it actually does.

And, another useful highlight: note how we check if (temp > 0)? Well, this is a problem. Not only does the downstream code handle negative numbers, -1 is a perfectly reasonable value, which means that when this method takes -10 and returns -1, it has passed incorrect but valid data back up the chain. And since any errors were swallowed, no one knows if this was intentional or not.

This method wasn’t called in any context relating to KEY_MSG_INPUT, but it was called everywhere, as it’s one of those utility methods that finds new uses any time someone wants to convert a string into a number. Due to its use in pretty much every module, fixing this is considered a "high risk" change, and has been scheduled for sometime in the 2020s.
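For comparison, here is a minimal sketch (mine, not the codebase’s) of a version that reports parse failure explicitly instead of overloading -1, using OptionalInt; the StringUtil guard and the sign check both disappear.

import java.util.OptionalInt;

// Hedged sketch: make "couldn't parse" visible to callers instead of
// returning a -1 that is indistinguishable from legitimate data.
public final class NumberParsing {

    public static OptionalInt numberToIntFormat(String value) {
        try {
            // parseInt already rejects null, "", and garbage by throwing.
            return OptionalInt.of(Integer.parseInt(value));
        } catch (NumberFormatException e) {
            return OptionalInt.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(numberToIntFormat("-10"));  // OptionalInt[-10]
        System.out.println(numberToIntFormat("oops")); // OptionalInt.empty
    }
}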


Krebs on Security‘Satori’ IoT Botnet Operator Pleads Guilty

A 21-year-old man from Vancouver, Wash. has pleaded guilty to federal hacking charges tied to his role in operating the “Satori” botnet, a crime machine powered by hacked Internet of Things (IoT) devices that was built to conduct massive denial-of-service attacks targeting Internet service providers, online gaming platforms and Web hosting companies.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

Kenneth Currin Schuchman pleaded guilty to one count of aiding and abetting computer intrusions. Between July 2017 and October 2018, Schuchman was part of a conspiracy with at least two other unnamed individuals to develop and use Satori in large scale online attacks designed to flood their targets with so much junk Internet traffic that the targets became unreachable by legitimate visitors.

According to his plea agreement, Schuchman — who went by the online aliases “Nexus” and “Nexus-Zeta” — worked with at least two other individuals to build and use the Satori botnet, which harnessed the collective bandwidth of approximately 100,000 hacked IoT devices by exploiting vulnerabilities in various wireless routers, digital video recorders, Internet-connected security cameras, and fiber-optic networking devices.

Satori was originally based on the leaked source code for Mirai, a powerful IoT botnet that first appeared in the summer of 2016 and was responsible for some of the largest denial-of-service attacks ever recorded (including a 620 Gbps attack that took KrebsOnSecurity offline for almost four days).

Throughout 2017 and into 2018, Schuchman worked with his co-conspirators — who used the nicknames “Vamp” and “Drake” — to further develop Satori by identifying and exploiting additional security flaws in other IoT systems.

Schuchman and his accomplices gave new monikers to their IoT botnets with almost each new improvement, rechristening their creations with names including “Okiru” and “Masuta,” and infecting up to 700,000 compromised systems.

The plea agreement states that the object of the conspiracy was to sell access to their botnets to those who wished to rent them for launching attacks against others, although it’s not clear to what extent Schuchman and his alleged co-conspirators succeeded in this regard.

Even after he was indicted in connection with his activities in August 2018, Schuchman created a new botnet variant while on supervised release. At the time, Schuchman and Drake had something of a falling out, and Schuchman later acknowledged using information gleaned by prosecutors to identify Drake’s home address for the purposes of “swatting” him.

Swatting involves making false reports of a potentially violent incident — usually a phony hostage situation, bomb threat or murder — to prompt a heavily-armed police response to the target’s location. According to his plea agreement, the swatting that Schuchman set in motion in October 2018 resulted in “a substantial law enforcement response at Drake’s residence.”

As noted in a September 2018 story, Schuchman was not exactly skilled in the art of obscuring his real identity online. For one thing, the domain name used as a control server to synchronize the activities of the Satori botnet was registered to the email address nexuczeta1337@gmail.com. That domain name was originally registered to a “ZetaSec Inc.” and to a “Kenny Schuchman” in Vancouver, Wash.

People who operate IoT-based botnets maintain and build up their pool of infected IoT systems by constantly scanning the Internet for other vulnerable systems. Schuchman’s plea agreement states that when he received abuse complaints related to his scanning activities, he responded in his father’s identity.

“Schuchman frequently used identification devices belonging to his father to further the criminal scheme,” the plea agreement explains.

While Schuchman may be the first person to plead guilty in connection with Satori and its progeny, he appears to be hardly the most culpable. Multiple sources tell KrebsOnSecurity that Schuchman’s co-conspirator Vamp is a U.K. resident who was principally responsible for coding the Satori botnet, and as a minor was involved in the 2015 hack against U.K. phone and broadband provider TalkTalk.

Multiple sources also say Vamp was principally responsible for the 2016 massive denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

The investigation into Schuchman and his alleged co-conspirators is being run out of the FBI field office in Alaska, spearheaded by some of the same agents who helped track down and ultimately secure guilty pleas from the original co-authors of the Mirai botnet.

It remains to be seen what kind of punishment a federal judge will hand down for Schuchman, who reportedly has been diagnosed with Asperger Syndrome and autism. The maximum penalty for the single criminal count to which he’s pleaded guilty is 10 years in prison and fines of up to $250,000.

However, it seems likely his sentencing will fall well short of that maximum: Schuchman’s plea deal states that he agreed to a recommended sentence “at the low end of the guideline range as calculated and adopted by the court.”

Cory DoctorowPodcast: Barlow’s Legacy

Even though I’m at Burning Man, I’ve snuck out an extra scheduled podcast episode (MP3): Barlow’s Legacy is my contribution to the Duke Law and Tech Review’s special edition, THE PAST AND FUTURE OF THE INTERNET: Symposium for John Perry Barlow:

“Who controls the past controls the future; who controls the present controls the past.”1

And now we are come to the great techlash, long overdue and desperately needed. With the techlash comes the political contest to assemble the narrative of What Just Happened and How We Got Here, because “Who controls the past controls the future. Who controls the present controls the past.” Barlow is a key figure in that narrative, and so defining his legacy is key to the project of seizing the future.

As we contest over that legacy, I will here set out my view on it. It’s an insider’s view: I met Barlow first through his writing, and then as a teenager on The WELL, and then at a dinner in London with Electronic Frontier Foundation (EFF) attorney Cindy Cohn (now the executive director of EFF), and then I worked with him, on and off, for more than a decade, through my work with EFF. He lectured to my students at USC, and wrote the introduction to one of my essay collections, and hung out with me at Burning Man, and we spoke on so many bills together, and I wrote him into one of my novels as a character, an act that he blessed. I emceed events where he spoke and sat with him in his hospital room as he lay dying. I make no claim to being Barlow’s best or closest friend, but I count myself mightily privileged to have been a friend, a colleague, and a protege of his.

There is a story today about “cyber-utopians” told as a part of the techlash: Once, there were people who believed that the internet would automatically be a force for good. They told us all to connect to one another and fended off anyone who sought to rein in the power of the technology industry, naively ushering in an era of mass surveillance, monopolism, manipulation, even genocide. These people may have been well-intentioned, but they were smart enough that they should have known better, and if they hadn’t been so unforgivably naive (and, possibly, secretly in the pay of the future monopolists) we might not be in such dire shape today.

MP3

Planet DebianCandy Tsai: Beyond Outreachy: Final Interviews and Internship Review

The last few weeks (week 11 – week 13) of Outreachy were probably the hardest weeks. I had to do 3 informational interviews with the goal of getting a better picture of the open source/free software industry.

The thought of talking to someone I don’t even know just overwhelms me, so this assignment left me scared to death. Pressing that “Send Email” button to these interviewees required me to summon up all of my courage, but it was totally worth it. I really appreciate them taking the time to chat with me.

On the other hand, it’s hard to believe the internship is coming to an end! Good news is that I will be sticking around Debci after this.

Informational Interviews

The theme for week 11 was “Making connections”, so I had to reach out to 3 people beyond my network for an informational interview. I’d rather just call it an informational chat so it doesn’t sound too scary. My goal was to better understand how companies involved with open source survive and how others work remotely. Therefore, my criteria for the interviewees were really simple, though such people were not so easy to find:

  • Lives in Taiwan
  • Works remotely
  • Their company is dedicated to open source/free software

At last I was really lucky to have them for my final assignment:

  • Andrew Lee: also part of the Debian community, has been working on open source for more than 20 years in Taiwan, works at Collabora, an open source consulting company
  • James Tien: works at Automattic, a company known for working on WordPress, link to his blog here, it’s in Chinese
  • Gordon Tai: works at Ververica, a company known for working on Apache Flink

A big thanks to them and to terceiro who guided me through this. During my search, it was hard to find someone working for a local company here in Taiwan that fulfilled my criteria.

I have organized and summarized below:

Staying in Open Source

  • Passion is needed for coding and open source, you have to really enjoy it to stay in the long run
  • Opportunities come unexpectedly, you never know when or how they would come to you
  • Write “code”

Remote work

  • People can still sense your ups and downs through your chat messages and facial expressions in video calls
  • Communication is much more important than the actual code itself; sometimes you spend more time talking things through than writing code
  • You can use a Pomodoro timer to help focus, or try working different hours
  • Try working in different environments: a cafe, under a tree, in the forest, beside the ocean, etc.
  • Exercise, exercise, exercise!

These above were very general but it was the stories and experiences that I heard that were special. It is for you to find out by doing your own informational interviews!

Internship Review

Last but not least, here’s a wrap-up of my internship in Q&A format. Hope this helps anyone who wants to participate in future rounds get a better picture of what the Outreachy internship with Debian Debci was like.

What communication skills have you learned during the internship?

Asking questions and leaving comments. Since I am not a user of Debci, I started with absolutely zero knowledge. I even had to write a blog post to help me clarify what the terminology meant, so I can come back to it if I forget in the future. I asked lots of questions, and luckily my mentors were really patient. As we only had a video chat once per week, we discussed mostly through comments on the merge request or issue. Sometimes I found it hard to convey my thoughts with just words (or images), so this was really good practice.

What technical skills have you learned during the internship?

I only started writing Ruby because of this internship. Also, I wrote my first Vagrantfile. In general, I think getting familiar with the code base was the best part.

How did your Outreachy mentor help you along the way?

My mentor reviewed my code thoroughly and guided me through the whole internship. We did pair programming sessions, and that was really helpful.

What was one amazing thing that happened during the internship?

The informational interviews were pretty horrifying and at the same time amazing. It had never really occurred to me that people would take the time to talk to someone they don’t know. I am really grateful for their time. Their personal stories were really inspiring and motivating too.

How did Outreachy help you feel more confident in making open source and free software contributions?

In my opinion, Outreachy’s initial contribution phase is really important. It kind of forces candidates to at least reach out and take the first step. Even if you don’t get accepted in the end, you still went from 0 to 1. That is when you find out that the community is actually pretty welcoming to newcomers. So for me, it wasn’t so much about becoming more confident as about becoming less scared.

What parts of your project did you complete?

I added a self-service section where people can request their own test through the Debci UI without fumbling with curl commands. I also added a Vagrantfile for future newcomers to set up the project more easily. Hope it works for them, because I’ve only tested it on my computer. We’ll see then.

What are the next steps for you to complete the project?

I’m sticking around, at least until I finish the parts that I started, because I think it was fun and people actually made some requests related to this. It’s always exciting to see that what you are building is wanted by the users.

Really appreciate the opportunity that Outreachy has been offering to interns! Assuming that you have read through this post, you probably are interested in Outreachy. Please do come and apply if you are interested or recommend it to others!

Cory DoctorowThey told us DRM would give us more for less, but they lied

My latest Locus Magazine column is DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

For 40 years, University of Chicago-style market orthodoxy has promised widespread prosperity as a natural consequence of turning everything into unfettered, unregulated, monopolistic businesses. For 40 years, everyone except the paymasters who bankrolled the University of Chicago’s priesthood have gotten poorer.

Today, DRM stands as a perfect example of everything terrible about monopolies, surveillance, and shareholder capitalism.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

DRM Broke Its Promise [Locus/Cory Doctorow]

(Image: Cryteria, CC-BY, modified)

,

Krebs on SecuritySpam In your Calendar? Here’s What to Do.

Many spam trends are cyclical: Spammers tend to switch tactics when one method of hijacking your time and attention stops working. But periodically they circle back to old tricks, and few spam trends are as perennial as calendar spam, in which invitations to click on dodgy links show up unbidden in your digital calendar application from Apple, Google and Microsoft. Here’s a brief primer on what you can do about it.

Image: Reddit

Over the past few weeks, a good number of readers have written in to say they feared their calendar app or email account was hacked after noticing a spammy event had been added to their calendars.

The truth is, all that a spammer needs to add an unwelcome appointment to your calendar is the email address tied to your calendar account. That’s because the calendar applications from Apple, Google and Microsoft are set by default to accept calendar invites from anyone.

Calendar invites from spammers run the gamut from ads for porn or pharmacy sites, to claims of an unexpected financial windfall or “free” items of value, to outright phishing attacks and malware lures. The important thing is that you don’t click on any links embedded in these appointments. And resist the temptation to respond to such invitations by selecting “yes,” “no,” or “maybe,” as doing so may only serve to guarantee you more calendar spam.

Fortunately, there are a few simple steps you can take that should help minimize this nuisance. To stop events from being automatically added to your Google calendar:

-Open the Calendar application, and click the gear icon to get to the Calendar Settings page.
-Under “Event Settings,” change the default setting to “No, only show invitations to which I have responded.”

To prevent events from automatically being added to your Microsoft Outlook calendar, click the gear icon in the upper right corner of Outlook to open the settings menu, and then scroll down and select “View all Outlook settings.” From there:

-Click “Calendar,” then “Events from email.”
-Change the default setting for each type of reservation settings to “Only show event summaries in email.”

For Apple calendar users, log in to your iCloud.com account, and select Calendar.

-Click the gear icon in the lower left corner of the Calendar application, and select “Preferences.”
-Click the “Advanced” tab at the top of the box that appears.
-Change the default setting to “Email to [your email here].”

Making these changes will mean that any events your email provider previously added to your calendar automatically by scanning your inbox for certain types of messages from common events — such as making hotel, dining, plane or train reservations, or paying recurring bills — may no longer be added for you. Spammy calendar invitations may still show up via email; in the event they do, make sure to mark the missives as spam.

Have you experienced a spike in calendar spam of late? Or maybe you have another suggestion for blocking it? If so, sound off in the comments below.

Planet DebianNorbert Preining: Debian Activities of the last few months

I haven’t written about specific Debian activities in recent times, but I haven’t been lazy. In fact I have been very active with a lot of new packages I am contributing to.

TeX and Friends

Lots of updates since we first released TeX Live 2019 for Debian, too many to actually mention. We also have bumped the binary package with backports of fixes for dvipdfmx and other programs. Another item that is still pending is the separation of dvisvgm into a separate package (currently in the NEW queue). Biber has been updated to match the version of biblatex shipped in the TeX Live packages.

Calibre

Calibre development is continuing as usual, with lots of activity to get Calibre ready for Python3. To prepare for this move, I have taken over the Python mechanize package, which had not been updated for many years. At the moment it is already possible to build a Calibre package for Python3, but unfortunately practically all external plugins are still based on Python2 and thus fail with Python3. As a consequence I will keep Calibre at the Python2 version for the time being, and hope that Calibre officially switches to Python3, triggering a conversion of the plugins too, before Bullseye (the next Debian release) ships with the aim of getting rid of Python2.

Cinnamon

The packages of Cinnamon 4.0 I have prepared together with the Cinnamon Team have been uploaded to sid, and I have uploaded packages of Cinnamon 4.2 to experimental. We plan to move the 4.2 packages to sid after the 4.0 packages have entered testing.

Onedrive

Onedrive didn’t make it into the release of buster, in particular because the release masters weren’t happy with an upgrade request I made to get a new version (scheduled to enter testing one day after the freeze!) with loads of fixes into buster. So I decided to remove onedrive altogether from Buster: better nothing than something broken. It is a bit of a pain for me, but users are advised to get the source code from Github and install a self-compiled version – this is definitely safer.


All in all quite a lot of work. Enjoy.

Worse Than FailureCodeSOD: Boxing with the InTern

A few years ago, Naomi did an internship with Initech. Before her first day, she was very clear on what her responsibilities would be: she'd be on a team modernizing an older product called "Gem" (no relation to Ruby libraries).

By the time her first day rolled around, however, Initech had new priorities. There were a collection of fires on some hyperspecific internal enterprise tool, and everyone was running around and screaming about the apocalypse while dealing with that. Except Naomi, because nobody had any time to bring the intern up to speed on this disaster. Instead, she was given a new priority: just maintain Gem. And no, she wouldn't have a mentor. For the next six months, Naomi was the Gem support team.

"Start by looking at the code quality metrics," was the advice she was given.

It was bad advice. First, while Initech had installed an automated code review tool in their source control system, they weren't using the tool. It had started crashing instead of outputting a report six years ago. Nobody had noticed, or perhaps nobody had cared. Or maybe they just didn't like getting bad news, because once Naomi had the tool running again, the report was full of bad news.

A huge mass of the code was reimplemented copies of the standard library, "tuned for performance", which meant instead of a sensible implementation it was a pile of 4,000 line functions wrapping around massive switch statements. The linter didn't catch that they were parsing XML using regular expressions, but Naomi spotted that and wisely decided not to touch that bit.

What she did find, and fix, was this pattern:

private Boolean isSided;
// dozens more properties

public GemGeometryEntryPoint(GemGeometryEntryPoint gemGeometryEntryPoint) {
    this.isSided = gemGeometryEntryPoint.isSided == null
            ? null
            : new Boolean(gemGeometryEntryPoint.isSided);
    // and so on, for those dozens of properties
}

Java has two boolean types. The Boolean reference type, and boolean primitive type. The boolean is not a full-fledged object, and thus is smaller in memory and faster to allocate. The Boolean is a full class implementation, with all the overhead contained within. A Java developer will generally need to use both, as if you want a list of boolean values, you need to "box" any primitives into Boolean objects.

I say generally need both, because Naomi's predecessors decided that worrying about boxing was complicated, so they only used the reference types. There wasn't a boolean or an int to be found, just Booleans and Integers. Maybe they just thought "primitive" meant "legacy"?

You can't unbox a null: passing a null Boolean to the Boolean(boolean) constructor throws a NullPointerException. Thus, the ternary check in the code above. At no point did anyone think that "hey, we're doing a null check on pretty much every variable access" meant that there was something wrong with the code.
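Here is a small self-contained demonstration (my own sketch, not code from Gem) of both halves of the problem: unboxing a null Boolean blows up, while a primitive can never be null in the first place.

// Demo of the boxing pitfall: Boolean(boolean) must unbox its argument,
// so handing it a null Boolean reference throws at the call site.
public class BoxingDemo {
    public static void main(String[] args) {
        Boolean isSided = null;
        try {
            Boolean copy = new Boolean(isSided); // unboxes null -> NPE
            System.out.println(copy);
        } catch (NullPointerException e) {
            System.out.println("unboxing null throws NullPointerException");
        }

        // With primitives there is nothing to null-check: a plain
        // assignment copies the value, no ternary dance required.
        boolean sided = true;
        boolean copy = sided;
        System.out.println(copy);
    }
}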

The bright side to this whole thing was that the unit tests were exemplary. A few hours with sed meant that Naomi was able to switch most everything to primitive types, confirm that she hadn't introduced any regressions in the process, and even demonstrated that using primitives greatly improved performance, as it cut down on heap memory allocations. The downside was replacing all those ternaries with lines like this.isSided = other.gemGeometryEntryPoint.isSided didn't look nearly as impressive.

Of course, changing that many lines of code in a single commit triggered some alarms, which precipitated a mini-crisis as no one knew what to do when the intern submitted a 15,000 line commit.

Naomi adds: "Maybe null was supposed to represent FILE_NOT_FOUND?"


,

Planet DebianJunichi Uekawa: I have an issue remembering where I took notes.

I have an issue remembering where I took notes. In the past it was all in emacs. Now it's somewhere in one of the web services.

Planet DebianSean Whitton: Debian Policy call for participation -- September 2019

There hasn’t been much activity lately, but no shortage of interesting and hopefully-accessible Debian Policy work. Do write to debian-policy@lists.debian.org if you’d like to participate but are struggling to figure out how.

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Krebs on SecurityFeds Allege Adconion Employees Hijacked IP Addresses for Spamming

Federal prosecutors in California have filed criminal charges against four employees of Adconion Direct, an email advertising firm, alleging they unlawfully hijacked vast swaths of Internet addresses and used them in large-scale spam campaigns. KrebsOnSecurity has learned that the charges are likely just the opening salvo in a much larger, ongoing federal investigation into the company’s commercial email practices.

Prior to its acquisition, Adconion offered digital advertising solutions to some of the world’s biggest companies, including Adidas, AT&T, Fidelity, Honda, Kohl’s and T-Mobile. Amobee, the Redwood City, Calif. online ad firm that acquired Adconion in 2014, bills itself as the world’s leading independent advertising platform. The CEO of Amobee is Kim Perell, formerly CEO of Adconion.

In October 2018, prosecutors in the Southern District of California named four Adconion employees — Jacob Bychak, Mark Manoogian, Petr Pacas, and Mohammed Abdul Qayyum — in a ten-count indictment on charges of conspiracy, wire fraud, and electronic mail fraud. All four men have pleaded not guilty to the charges, which stem from a grand jury indictment handed down in June 2017.

‘COMPANY A’

The indictment and other court filings in this case refer to the employer of the four men only as “Company A.” However, LinkedIn profiles under the names of three of the accused show they each work(ed) for Adconion and/or Amobee.

Mark Manoogian is an attorney whose LinkedIn profile states that he is director of legal and business affairs at Amobee, and formerly was senior business development manager at Adconion Direct; Bychak is listed as director of operations at Adconion Direct; Quayyum’s LinkedIn page lists him as manager of technical operations at Adconion. A statement of facts filed by the government indicates Petr Pacas was at one point director of operations at Company A (Adconion).

According to the indictment, between December 2010 and September 2014 the defendants engaged in a conspiracy to identify or pay to identify blocks of Internet Protocol (IP) addresses that were registered to others but which were otherwise inactive.

The government alleges the men sent forged letters to an Internet hosting firm claiming they had been authorized by the registrants of the inactive IP addresses to use that space for their own purposes.

“Members of the conspiracy would use the fraudulently acquired IP addresses to send commercial email (‘spam’) messages,” the government charged.

HOSTING IN THE WIND

Prosecutors say the accused were able to spam from the purloined IP address blocks after tricking the owner of Hostwinds, an Oklahoma-based Internet hosting firm, into routing the fraudulently obtained IP addresses on their behalf.

Hostwinds owner Peter Holden was the subject of a 2015 KrebsOnSecurity story titled, “Like Cutting Off a Limb to Save the Body,” which described how he’d initially built a lucrative business catering mainly to spammers, only to later have a change of heart and aggressively work to keep spammers off of his network.

That a case of such potential import for the digital marketing industry has escaped any media attention for so long is unusual but not surprising given what’s at stake for the companies involved and for the government’s ongoing investigations.

Adconion’s parent Amobee manages ad campaigns for some of the world’s top brands, and has every reason not to call attention to charges that some of its key employees may have been involved in criminal activity.

Meanwhile, prosecutors are busy following up on evidence supplied by several cooperating witnesses in this and a related grand jury investigation, including a confidential informant who received information from an Adconion employee about the company’s internal operations.

THE BIGGER PICTURE

According to a memo jointly filed by the defendants, “this case spun off from a larger ongoing investigation into the commercial email practices of Company A.” Ironically, this memo appears to be the only one of several dozen documents related to the indictment that mentions Adconion by name (albeit only in a series of footnote references).

Prosecutors allege the four men bought hijacked IP address blocks from another man tied to this case who was charged separately. This individual, Daniel Dye, has a history of working with others to hijack IP addresses for use by spammers.

For many years, Dye was a system administrator for Optinrealbig, a Colorado company that relentlessly pimped all manner of junk email, from mortgage leads and adult-related services to counterfeit products and Viagra.

Optinrealbig’s CEO was the spam king Scott Richter, who later changed the name of the company to Media Breakaway after being successfully sued for spamming by AOL, Microsoft, MySpace, and the New York Attorney General’s Office, among others. In 2008, this author penned a column for The Washington Post detailing how Media Breakaway had hijacked tens of thousands of IP addresses from a defunct San Francisco company for use in its spamming operations.

Dye has been charged with violations of the CAN-SPAM Act. A review of the documents in his case suggest Dye accepted a guilty plea agreement in connection with the IP address thefts and is cooperating with the government’s ongoing investigation into Adconion’s email marketing practices, although the plea agreement itself remains under seal.

Lawyers for the four defendants in this case have asserted in court filings that the government’s confidential informant is an employee of Spamhaus.org, an organization that many Internet service providers around the world rely upon to help identify and block sources of malware and spam.

Interestingly, in 2014 Spamhaus was sued by Blackstar Media LLC, a bulk email marketing company and subsidiary of Adconion. Blackstar’s owners sued Spamhaus for defamation after Spamhaus included them at the top of its list of the Top 10 world’s worst spammers. Blackstar later dropped the lawsuit and agreed to pay Spamhaus’ legal costs.

Representatives for Spamhaus declined to comment for this story. Responding to questions about the indictment of Adconion employees, Amobee’s parent company SingTel referred comments to Amobee, which issued a brief statement saying, “Amobee has fully cooperated with the government’s investigation of this 2017 matter which pertains to alleged activities that occurred years prior to Amobee’s acquisition of the company.”

ONE OF THE LARGEST SPAMMERS IN HISTORY?

It appears the government has been investigating Adconion’s email practices since at least 2015, and possibly as early as 2013. The very first result in an online search for the words “Adconion” and “spam” returns a Microsoft Powerpoint document that was presented alongside this talk at an ARIN meeting in October 2016. ARIN stands for the American Registry for Internet Numbers, and it handles IP addresses allocations for entities in the United States, Canada and parts of the Caribbean.

As the screenshot above shows, that Powerpoint deck was originally named “Adconion – Arin,” but the file has since been renamed. That is, unless one downloads the file and looks at the metadata attached to it, which shows the original filename and that it was created in 2015 by someone at the U.S. Department of Justice.

Slide #8 in that Powerpoint document references a case example of an unnamed company (again, “Company A”), which the presenter said was “alleged to be one of the largest spammers in history,” that had hijacked “hundreds of thousands of IP addresses.”

A slide from an ARIN presentation in 2016 that referenced Adconion.

There are fewer than four billion IPv4 addresses available for use, but the vast majority of them have already been allocated. In recent years, this global shortage has turned IP addresses into a commodity wherein each IP can fetch between $15 and $25 on the open market.

The dearth of available IP addresses has created boom times for those engaged in the acquisition and sale of IP address blocks. It also has emboldened scammers and spammers who specialize in absconding with and spamming from dormant IP address blocks without permission from the rightful owners.

In May, KrebsOnSecurity broke the news that Amir Golestan — the owner of a prominent Charleston, S.C. tech company called Micfo LLC — had been indicted on criminal charges of fraudulently obtaining more than 735,000 IP addresses from ARIN and reselling the space to others.

KrebsOnSecurity has since learned that for several years prior to 2014, Adconion was one of Golestan’s biggest clients. More on that in an upcoming story.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.700.2.0


A new RcppArmadillo release based on a new Armadillo upstream release arrived on CRAN, and will get to Debian shortly. It brings continued improvements for sparse matrices and a few other things; see below for more details. I also appear to have skipped blogging about the preceding 0.9.600.4.0 release (which was actually extra-rigorous with an unprecedented number of reverse-depends runs) so I included its changes (with very nice sparse matrix improvements) as well.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 656 other packages on CRAN.

Changes in RcppArmadillo version 0.9.700.2.0 (2019-09-01)

  • Upgraded to Armadillo release 9.700.2 (Gangster Democracy)

    • faster handling of cubes by vectorise()

    • faster handling of sparse matrices by nonzeros()

    • faster row-wise index_min() and index_max()

    • expanded join_rows() and join_cols() to handle joining up to 4 matrices

    • expanded .save() and .load() to allow storing sparse matrices in CSV format

    • added randperm() to generate a vector with random permutation of a sequence of integers

  • Expanded the list of known good gcc and clang versions in configure.ac

Changes in RcppArmadillo version 0.9.600.4.0 (2019-07-14)

  • Upgraded to Armadillo release 9.600.4 (Napa Invasion)

    • faster handling of sparse submatrices

    • faster handling of sparse diagonal views

    • faster handling of sparse matrices by symmatu() and symmatl()

    • faster handling of sparse matrices by join_cols()

    • expanded clamp() to handle sparse matrices

    • added .clean() to replace elements below a threshold with zeros

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Carter: Free Software Activities (2019-08)

Ah, spring time at last. The last month I caught up a bit with my Debian packaging work after the Buster freeze, release and subsequent DebConf. There’s still a bit to catch up on (mostly kpmcore and partitionmanager, which are waiting on new kdelibs, and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week, so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week; until then I’ll have little screen time for anything that’s not work work.

2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

2019-08-06: Sponsor package assaultcube (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-06: Sponsor package assaultcube-data (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

2019-08-07: File bug (multimedia-devel)

2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

2019-08-07: Sponsor package assaultcube-data (1.2.0.2.1-2) for debian unstable (e-mail request).

2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (need some further work).

2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE-2019-13179).

2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

2019-08-20: File ITP #935178 for bcachefs-tools.

2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

Planet DebianMike Gabriel: My Work on Debian LTS/ELTS (August 2019)

In August 2019, I have worked on the Debian LTS project for 24 hours (of 24.75 hours planned) and on the Debian ELTS project for another 2 hours (of 12 hours planned) as a paid contributor.

LTS Work

  • Upload fusiondirectory 1.0.8.2-5+deb8u2 to jessie-security (1 CVE, DLA 1875-1 [1])
  • Upload gosa 2.7.4+reloaded2+deb8u4 to jessie-security (1 CVE, DLA 1876-1 [2])
  • Upload gosa 2.7.4+reloaded2+deb8u5 to jessie-security (1 CVE, DLA 1905-1 [3])
  • Upload libav 6:11.12-1~deb8u8 to jessie-security (5 CVEs, DLA 1907-1 [4])
  • Investigate on CVE-2019-13627 (libgcrypt20). Upstream patch applies, build succeeds, but some tests fail. More work required on this.
  • Triage 14 packages with my LTS frontdesk hat on during the last week of August
  • Do a second pair of eyes review on changes uploaded with dovecot 1:2.2.13-12~deb8u7
  • File a merge request against security-tracker [5], add --minor option to contact-maintainers script.

ELTS Work

  • Investigate CVE-2019-13627 (libgcrypt11). More work is needed to assess whether libgcrypt11 in wheezy is affected by CVE-2019-13627.

References

Planet DebianJulien Danjou: Dependencies Handling in Python

Dependencies Handling in Python

Dependencies are a nightmare for many people. Some even argue they are technical debt. Managing the list of libraries your software depends on is a horrible experience, and updating them — automatically? — sounds like a delirium.

Stick with me here as I am going to help you get a better grasp on something that you cannot, in practice, get rid of — unless you're incredibly rich and talented and can live without the code of others.

First, we need to be clear about something regarding dependencies: there are two types of them. Donald Stufft wrote about the subject better than I could years ago. To put it simply, there are two types of code packages that depend on external code: applications and libraries.

Libraries Dependencies

Python libraries should specify their dependencies in a generic way. A library should not require requests 2.1.5: it does not make sense. If every library out there needs a different version of requests, they can't be used at the same time.

Libraries need to declare dependencies based on ranges of version numbers. Requiring requests>=2 is correct. Requiring requests>=1,<2 is also correct if you know that requests 2.x does not work with the library. The problem that your version range specification is solving is the API compatibility issue between your code and your dependencies — nothing else. That's a good reason for libraries to use Semantic Versioning whenever possible.

Therefore, dependencies should be written in setup.py as something like:

from setuptools import setup

setup(
    name="MyLibrary",
    version="1.0",
    install_requires=[
        "requests",
    ],
    # ...
)

This way, it is easy for any application to use the library and co-exist with others.
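
For instance, if the library is known to work with the requests 2.x series only, the range can be expressed directly in install_requires. Here is a minimal sketch, with illustrative bounds rather than a recommendation:

from setuptools import setup

setup(
    name="MyLibrary",
    version="1.0",
    install_requires=[
        # Any 2.x release is acceptable; 3.x is assumed untested here.
        "requests>=2,<3",
    ],
    # ...
)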

Applications Dependencies

An application is just a particular case of a library. Applications are not intended to be reused (imported) by other libraries or applications — though nothing would prevent it in practice.

In the end, that means that you should specify the dependencies the same way that you would do for a library in the application's setup.py.

The main difference is that an application is usually deployed in production to provide its service. Deployments need to be reproducible. For that, you can't solely rely on setup.py: the requested version ranges of the dependencies are too broad. You're at the mercy of random version changes at any time when re-deploying your application.

You, therefore, need a different version management mechanism to handle deployment than just setup.py.

pipenv has an excellent section recapping this in its documentation. It splits dependency types into abstract and concrete dependencies: abstract dependencies are based on ranges (e.g., libraries) whereas concrete dependencies are specified with precise versions (e.g., application deployments) — as we've just seen here.

Handling Deployment

The requirements.txt file has been used to solve application deployment reproducibility for a long time now. Its format is usually something like:

requests==3.1.5
foobar==2.0

Each dependency is pinned down to the micro version. That makes sure every one of your deployments installs the same version of each dependency. Using a requirements.txt is a simple solution and a first step toward reproducible deployments. However, it's not enough.

Indeed, while you can specify which version of requests you want, if requests depends on urllib3, pip could end up installing urllib3 2.1 or urllib3 2.2. You can't know which one will be installed, which means your deployment is not 100% reproducible.

Of course, you could duplicate all requests dependencies yourself in your requirements.txt, but that would be madness!
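
(A common first step, assuming you have a working installed environment to capture: running pip freeze > requirements.txt records every installed package, transitive dependencies included, at its exact version. The downside is that it records everything in the environment, not just what your application actually needs.)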

An application dependency tree can be quite deep and complex sometimes.

There are various hacks available to fix this limitation, but the real saviors here are pipenv and poetry. The way they solve it is similar to many package managers in other programming languages. They generate a lock file that contains the list of all installed dependencies (and their own dependencies, etc.) with their version numbers. That makes sure the deployment is 100% reproducible.

Check out their documentation on how to set up and use them!
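
As a rough sketch of the day-to-day workflow with pipenv (the commands are real, but see the documentation for the authoritative details):

pipenv install requests    # records an abstract dependency in Pipfile
pipenv lock                # pins the entire tree, with hashes, in Pipfile.lock
pipenv sync                # recreates that exact environment at deployment time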

Handling Dependencies Updates

Now that you have a lock file that makes your deployment reproducible in a snap, you have another problem. How do you make sure that your dependencies are up to date? There is a real security concern here, but also bug fixes and optimizations that you might miss out on by staying behind.

If your project is hosted on GitHub, Dependabot is an excellent solution to solve this issue. Enabling this application on your repository automatically creates pull requests whenever a new version of a library listed in your lock file is available. For example, if you've deployed your application with redis 3.3.6, Dependabot will create a pull request updating to redis 3.3.7 as soon as it gets released. Furthermore, Dependabot supports requirements.txt, pipenv, and poetry!

Dependabot updating jinja2 for you

Automatic Deployment Update

You're almost there. You have a bot that is letting you know that a new version of a library your project needs is available.

Once the pull request is created, your continuous integration system kicks in, builds your project, and runs the tests. If everything works fine, your pull request is ready to be merged. But are you really needed in this process?

Unless you have a particular and personal aversion to specific version numbers —"Gosh I hate versions that end with a 3. It's always bad luck."— or unless you have zero automated testing, you, the human, are not needed here. This merge can be fully automatic.

This is where Mergify comes into play. Mergify is a GitHub application that lets you define precise rules about how to merge your pull requests. Here's a rule that I use in every project:

pull_request_rules:
  - name: automatic merge from dependabot
    conditions:
      - author~=^dependabot(|-preview)\[bot\]$
      - label!=work-in-progress
      - "status-success=ci/circleci: pep8"
      - "status-success=ci/circleci: py37"
    actions:
      merge:
        method: merge

Mergify reports when the rule fully matches

As soon as your continuous integration system passes, Mergify merges the pull request for you.


You can then automatically trigger your deployment hooks to update your production deployment and get the new library version installed right away. This keeps your application up to date with newer libraries rather than lagging several years behind on releases.

If anything goes wrong, you're still able to revert the commit from Dependabot — which you can also automate if you wish with a Mergify rule.

Beyond

To me, this is the current state of the art of the dependency management lifecycle. And while this applies exceptionally well to Python, it can be applied to many other languages that use a similar pattern — such as Node and npm.

Worse Than FailureClassic WTF: Hyperlink 2.0

It's Labor Day in the US, where we celebrate the workers of the world by having a barbecue. Speaking of work, in these days of web frameworks and miles of unnecessary JavaScript to do basic things on the web, let's look back at a simpler time, when we still used server-side code and miles of unnecessary JavaScript to do basic things on the web. Original. --Remy

For those of you who haven't upgraded to Web 2.0 yet, today's submission from Daniel is a perfect example of what you're missing out on. Since the beginning of the Web (the "1.0 days"), website owners have always wanted to know who was visiting their website, how often, and when. Back then, this was accomplished by recording each website "hit" in a log file and running a report on the log later.

But the problem with this method in Web 2.0 is that people don't use logs anymore; they use blogs, and everyone knows that blogs are a pretty stupid way of tracking web traffic. Fortunately, Daniel's colleagues developed an elegant, clever, and -- most importantly -- "AJAX" way of solving this problem. Instead of being coded in HTML pages, all hyperlinks are assigned a numeric identifier and kept in a database table. This identifier is then used on the HTML pages within an anchor tag:

<a href="Javascript: followLink(124);">View Products</a>

When the user clicks on the hyperlink, the followLink() Javascript function is executed and the following occur:

  • a translucent layer (DIV) is placed over the entire page, causing it to appear "grayed out", and ...
  • a "please wait" layer is placed on top of that, with an animated pendulum swinging back and forth, then ...
  • the XmlHttpRequest object is used to call the "GetHyperlink" web service which, in turn ...
  • opens a connection to the database server to ...
  • log the request in the RequestedHyperlinks table and ...
  • retrieves the URL from the Hyperlinks table, then ...
  • returns it to the client script, which then ...
  • sets the window.location property to the URL retrieved, which causes ...
  • the user to be redirected to the appropriate page

Now that's two-point-ohey.


Planet DebianRuss Allbery: rra-c-util 8.0

This is a roll-up of a lot of changes to my utility package for C (and increasingly for Perl). It's been more than a year since the last release, so this one is long overdue.

Most of the changes in this release are to the Perl test libraries and accompanying tests. Test::RRA now must be imported before Test::More so that it can handle the absence of Test::More (such as on Red Hat systems with perl but not perl-core installed). The is_file_contents function in Test::RRA now handles Windows and other systems without a diff program. And there are more minor improvements to the various tests written in Perl.

The Autoconf probe RRA_LIB_KRB5_OPTIONAL now correctly handles the case where Kerberos libraries are not available but libcom_err is, rather than incorrectly believing that Kerberos libraries were present.

As of this release, rra-c-util now tests the Perl test programs that it includes, which requires it to build and test a dummy Perl module. This means the build system now requires Perl 5.6.2 and the Module::Build module.

You can get the latest version from the rra-c-util distribution page.

,

Planet DebianThorsten Alteholz: My Debian Activities in August 2019

FTP master

This month the numbers went up again and I accepted 389 packages and rejected 43. The overall number of packages that got accepted was 460.

Debian LTS

This was my sixty-second month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 1887-1] freetype security update for one CVE
  • [DLA 1889-1] python3.4 security update for one CVE
  • [DLA 1893-1] cups security update for two CVEs
  • [DLA 1895-1] libmspack security update for one CVE
  • [DLA 1894-1] libapache2-mod-auth-openidc security update for one CVE
  • [DLA 1897-1] tiff security update for one CVE
  • [DLA 1902-1] djvulibre security update for four CVEs
  • [DLA 1904-1] libextractor security update for one CVE
  • [DLA 1906-1] python2.7 security update for one CVE

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the fifteenth ELTS month.

During my allocated time I uploaded:

  • ELA-155-1 of cups
  • ELA-157-1 of djvulibre
  • ELA-158-1 of python2.7

I spent some time working on tiff3, only to find that the affected features are not yet available there.

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-gin-contrib-static, golang-github-gin-contrib-cors, golang-github-yourbasic-graph, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-jarcoal-httpmock, golang-github-gin-contrib-gzip, golang-github-mcuadros-go-gin-prometheus, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-ziutek-mymysql, golang-github-terra-farm-udnssdk, golang-github-ensighten-udnssdk, golang-github-sethvargo-go-fastly.

I again reuploaded some go packages (golang-github-go-xorm-core, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-deanthompson-ginpprof, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-cyberdelia-heroku-go, golang-github-corpix-uarand, golang-github-cnf-structhash, golang-github-rs-zerolog, golang-gopkg-ldap.v3, golang-github-yourbasic-graph, golang-github-ovh-go-ovh) that would not migrate before due to being binary uploads.

I also sponsored the following packages: golang-github-jesseduffield-gocui, printrun, cura-engine, theme-d, theme-d-gnome.

The DOPOM package for this month was gengetopt.

LongNowThe Amazon is not the Earth’s Lungs

An aerial view of forest fire of the Amazon taken with a drone is seen from an Indigenous territory in the state of Mato Grosso, in Brazil, August 23, 2019, obtained by Reuters on August 25, 2019. Marizilda Cruppe/Amnesty International/Handout via REUTERS.

In the wake of the troubling reports about fires in Brazil’s Amazon rainforest, much misinformation spread across social media. In Facebook posts and news reports, the Amazon was described as being the “lungs of the Earth.” Peter Brannen, writing in The Atlantic, details why that isn’t the case—not to downplay the impact of the fires, but to educate audiences on how the various systems of our planet interact:

The Amazon is a vast, ineffable, vital, living wonder. It does not, however, supply the planet with 20 percent of its oxygen.

As the biochemist Nick Lane wrote in his 2003 book Oxygen, “Even the most foolhardy destruction of world forests could hardly dint our oxygen supply, though in other respects such short-sighted idiocy is an unspeakable tragedy.”

The Amazon produces about 6 percent of the oxygen currently being made by photosynthetic organisms alive on the planet today. But surprisingly, this is not where most of our oxygen comes from. In fact, from a broader Earth-system perspective, in which the biosphere not only creates but also consumes free oxygen, the Amazon’s contribution to our planet’s unusual abundance of the stuff is more or less zero. This is not a pedantic detail. Geology provides a strange picture of how the world works that helps illuminate just how bizarre and unprecedented the ongoing human experiment on the planet really is. Contrary to almost every popular account, Earth maintains an unusual surfeit of free oxygen—an incredibly reactive gas that does not want to be in the atmosphere—largely due not to living, breathing trees, but to the existence, underground, of fossil fuels.

Read Brannen’s piece in full here.

Planet DebianPetter Reinholdtsen: Norwegian movies that might be legal to share on the Internet

While working on identifying and counting movies that can be legally shared on the Internet, I also looked at the Norwegian movies listed in IMDb. So far I have identified 54 candidates published before 1940 that might no longer be protected by Norwegian copyright law. Of these, only 29 are available, at least in part, from the Norwegian National Library. It can be assumed that the remaining 25 movies are lost. It seems most useful to identify the copyright status of the movies that are not lost. To verify that a movie really is no longer protected, one needs to check the list of copyright holders and figure out if and when they died. I've been able to identify some of them, but for some it is hard to figure out when they died.

This is the list of 29 movies both available from the library and possibly no longer protected by copyright law. The year range (1909-1979 on the first line) is year of publication and last year with copyright protection.

1909-1979 ( 70 year) NSB Bergensbanen 1909 - http://www.imdb.com/title/tt0347601/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons likfærd - http://www.imdb.com/title/tt9299304/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons begravelse - http://www.imdb.com/title/tt9299300/
1912-1998 ( 86 year) Roald Amundsens Sydpolsferd (1910-1912) - http://www.imdb.com/title/tt9237500/
1913-2006 ( 93 year) Roald Amundsen på sydpolen - http://www.imdb.com/title/tt0347886/
1917-1987 ( 70 year) Fanden i nøtten - http://www.imdb.com/title/tt0346964/
1919-2018 ( 99 year) Historien om en gut - http://www.imdb.com/title/tt0010259/
1920-1990 ( 70 year) Kaksen på Øverland - http://www.imdb.com/title/tt0011361/
1923-1993 ( 70 year) Norge - en skildring i 6 akter - http://www.imdb.com/title/tt0014319/
1925-1997 ( 72 year) Roald Amundsen - Ellsworths flyveekspedition 1925 - http://www.imdb.com/title/tt0016295/
1925-1995 ( 70 year) En verdensreise, eller Da knold og tott vaskede negrene hvite med 13 sæpen - http://www.imdb.com/title/tt1018948/
1926-1996 ( 70 year) Luftskibet 'Norge's flugt over polhavet - http://www.imdb.com/title/tt0017090/
1926-1996 ( 70 year) Med 'Maud' over Polhavet - http://www.imdb.com/title/tt0017129/
1927-1997 ( 70 year) Den store sultan - http://www.imdb.com/title/tt1017997/
1928-1998 ( 70 year) Noahs ark - http://www.imdb.com/title/tt1018917/
1928-1998 ( 70 year) Skjæbnen - http://www.imdb.com/title/tt1002652/
1928-1998 ( 70 year) Chefens cigarett - http://www.imdb.com/title/tt1019896/
1929-1999 ( 70 year) Se Norge - http://www.imdb.com/title/tt0020378/
1929-1999 ( 70 year) Fra Chr. Michelsen til Kronprins Olav og Prinsesse Martha - http://www.imdb.com/title/tt0019899/
1930-2000 ( 70 year) Mot ukjent land - http://www.imdb.com/title/tt0021158/
1930-2000 ( 70 year) Det er natt - http://www.imdb.com/title/tt1017904/
1930-2000 ( 70 year) Over Besseggen på motorcykel - http://www.imdb.com/title/tt0347721/
1931-2001 ( 70 year) Glimt fra New York og den Norske koloni - http://www.imdb.com/title/tt0021913/
1932-2007 ( 75 year) En glad gutt - http://www.imdb.com/title/tt0022946/
1934-2004 ( 70 year) Den lystige radio-trio - http://www.imdb.com/title/tt1002628/
1935-2005 ( 70 year) Kronprinsparets reise i Nord Norge - http://www.imdb.com/title/tt0268411/
1935-2005 ( 70 year) Stormangrep - http://www.imdb.com/title/tt1017998/
1936-2006 ( 70 year) En fargesymfoni i blått - http://www.imdb.com/title/tt1002762/
1939-2009 ( 70 year) Til Vesterheimen - http://www.imdb.com/title/tt0032036/
To be sure which of these can be legally shared on the Internet, in addition to verifying that the list of rights holders is complete, one needs to verify the death years of these persons:
Bjørnstjerne Bjørnson (dead 1910) - http://www.imdb.com/name/nm0085085/
Gustav Adolf Olsen (missing death year) - http://www.imdb.com/name/nm0647652/
Gustav Lund (missing death year) - http://www.imdb.com/name/nm0526168/
John W. Brunius (dead 1937) - http://www.imdb.com/name/nm0116307/
Ola Cornelius (missing death year) - http://www.imdb.com/name/nm1227236/
Oskar Omdal (dead 1927) - http://www.imdb.com/name/nm3116241/
Paul Berge (missing death year) - http://www.imdb.com/name/nm0074006/
Peter Lykke-Seest (dead 1948) - http://www.imdb.com/name/nm0528064/
Roald Amundsen (dead 1928) - https://www.imdb.com/name/nm0025468/
Sverre Halvorsen (dead 1936) - http://www.imdb.com/name/nm1299757/
Thomas W. Schwartz (missing death year) - http://www.imdb.com/name/nm2616250/

Perhaps you can help me figure out the death years of those missing one, or identify rights holders missing from IMDb? It would be nice to have a definite list of Norwegian movies that are legal to share on the Internet.
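
For reference, the arithmetic behind the year ranges above is simple. Here is a minimal Python sketch, assuming the simplified rule that protection lasts 70 years beyond the death of the longest-lived rights holder (the function name is mine):

def last_protected_year(death_years, term=70):
    # Protection lapses `term` years after the death of the
    # longest-lived rights holder.
    return max(death_years) + term

# Bjørnstjerne Bjørnsons likfærd (1910): Bjørnson died in 1910,
# so protection lapsed after 1910 + 70 = 1980, matching the list above.
print(last_protected_year([1910]))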

This is the list of 25 movies not available from the library and possibly no longer protected by copyright law:

1907-2009 (102 year) Fiskerlivets farer - http://www.imdb.com/title/tt0121288/
1912-2018 (106 year) Historien om en moder - http://www.imdb.com/title/tt0382852/
1912-2002 ( 90 year) Anny - en gatepiges roman - http://www.imdb.com/title/tt0002026/
1916-1986 ( 70 year) The Mother Who Paid - http://www.imdb.com/title/tt3619226/
1917-2018 (101 year) En vinternat - http://www.imdb.com/title/tt0008740/
1917-2018 (101 year) Unge hjerter - http://www.imdb.com/title/tt0008719/
1917-2018 (101 year) De forældreløse - http://www.imdb.com/title/tt0007972/
1918-2018 (100 year) Vor tids helte - http://www.imdb.com/title/tt0009769/
1918-2018 (100 year) Lodsens datter - http://www.imdb.com/title/tt0009314/
1919-2018 ( 99 year) Æresgjesten - http://www.imdb.com/title/tt0010939/
1921-2006 ( 85 year) Det nye år? - http://www.imdb.com/title/tt0347686/
1921-1991 ( 70 year) Under Polarkredsens himmel - http://www.imdb.com/title/tt0012789/
1923-1993 ( 70 year) Nordenfor polarcirkelen - http://www.imdb.com/title/tt0014318/
1925-1995 ( 70 year) Med 'Stavangerfjord' til Nordkap - http://www.imdb.com/title/tt0016098/
1926-1996 ( 70 year) Over Atlanterhavet og gjennem Amerika - http://www.imdb.com/title/tt0017241/
1926-1996 ( 70 year) Hallo! Amerika! - http://www.imdb.com/title/tt0016945/
1926-1996 ( 70 year) Tigeren Teodors triumf - http://www.imdb.com/title/tt1008052/
1927-1997 ( 70 year) Rød sultan - http://www.imdb.com/title/tt1017979/
1927-1997 ( 70 year) Søndagsfiskeren Flag - http://www.imdb.com/title/tt1018002/
1930-2000 ( 70 year) Ro-ro til fiskeskjær - http://www.imdb.com/title/tt1017973/
1933-2003 ( 70 year) I kongens klær - http://www.imdb.com/title/tt0024164/
1934-2004 ( 70 year) Eventyret om de tre bukkene bruse - http://www.imdb.com/title/tt1007963/
1934-2004 ( 70 year) Pål sine høner - http://www.imdb.com/title/tt1017966/
1937-2007 ( 70 year) Et mesterverk - http://www.imdb.com/title/tt1019937/
1938-2008 ( 70 year) En Harmony - http://www.imdb.com/title/tt1007975/

Several of these movies completely lack rights holder information in IMDb and elsewhere. Without access to a copy of the movie, it is often impossible to get the list of people involved in making it, which in turn makes it impossible to figure out the correct copyright status.

Not listed here are the movies still protected by copyright law. Their copyright terms vary from 79 to 144 years, according to the information I have available so far. One of the non-lost movies might change status next year, Mustads Mono from 1920. The next one might be Hvor isbjørnen ferdes from 1935 in 2024.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianPaul Wise: FLOSS Activities August 2019

Changes

Issues

Review

Administration

  • Debian: restart dead stunnels
  • Debian wiki: unblacklist IP addresses, whitelist email addresses, whitelist email domains

Communication

Sponsors

The purple-discord and libgoogle-protocolbuffers-perl work was sponsored by my employer. All other work was done on a volunteer basis.

Planet DebianRuss Allbery: C TAP Harness 4.5

Peter Paris requested that C TAP Harness support being built as C++ code. I've not been a big fan of doing this with pure C code since I find some of the requirements of C++ mildly irritating, but Peter's initial patch also fixed a type error in a malloc call, uncovered because C++ requires the return value of malloc to be cast. It turned out to be a mostly harmless error since the code was allocating a larger struct than it needed to, but it's still evidence that there's some potential here for catching bugs.

That said, adding an explicit cast to every malloc isn't likely to catch bugs. That's just having to repeat oneself in every allocation, and you're nearly as likely to repeat yourself incorrectly.

However, if one is willing to use a macro instead of malloc directly, this is fixable, and I'm willing to do that since I was already using a macro for allocation to do error handling. So I've modified the code to pass in the type of object to allocate instead of the size, and then used a macro to add the return cast. This makes for somewhat cleaner code and also makes it possible to build the code as pure C++. I also added some functions to the TAP generator library, bcalloc_type and breallocarray_type, that take the same approach. (I didn't remove the old functions, to maintain backward compatibility.)

I'm reasonably happy with the results, although it's a bit of a hassle and I'm not sure if I'm going to go to the trouble in all of my other C packages. But I'm at least considering it. (Of course, I'm also considering rewriting them all in Rust, and considering my profound lack of time to do either of these things.)

You can get the latest release from the C TAP Harness distribution page.

,

Planet DebianSylvain Beucler: Debian LTS and ELTS - August 2019


Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

Yes, that changed since last month, as I was offered the chance to work on ELTS :)

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of 30 max) and 14h for ELTS (max).

Interestingly I was able to factor out some time between LTS and ELTS while working on vim and tomcat for both suites.

LTS - Jessie

  • squirrelmail: CVE-2019-12970: locate patch, refresh previous fix with new upstream-blessed version, security upload
  • vim: CVE-2017-11109, CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and postponed issues, security upload
  • tomcat8: improve past patch to fix the test suite, report and refresh test certificates
  • tomcat8: CVE-2016-5388, CVE-2018-8014, CVE-2019-0221: requalify old not-affected issue, fix new and postponed issues, security upload

Documentation:

  • wiki: document good upload/test practices (pbuilder and lintian+debdiff+piuparts); request for comments
  • www.debian.org: import missing DLA-1810 (tomcat7/CVE-2019-0221)
  • freeimage: update dla-needed.txt status

ELTS - Wheezy

  • Get acquainted with the new procedures and setup build/test environments
  • vim: CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and pending issues, security upload
  • tomcat7: CVE-2016-5388: requalify old not-affected issue, security upload

Documentation:

  • raise concern about missing dependency in our list of supported packages
  • user documentation: doc fix apt-key list -> apt-key finger
  • triage: mark a few CVE as EOL, fix-up missing fixed versions in data/ELA/list (not automated anymore following the oldoldstable -> oldoldold(!)stable switch)

While not part of Debian strictly speaking, ELTS strives for the same level of transparency, see in particular the Git repositories: https://salsa.debian.org/freexian-team/extended-lts

Sam VargheseAustralian politicians are in it for the money

Australian politicians are in the game for one thing: money. Most of them are so incompetent that they would not be paid even half of what they earn were they to try for jobs in the private sector.

That’s why former members of the Victorian state parliament, who were voted out at the last election in 2018, are struggling to find jobs.

Apparently, some have been told by recruitment agencies that they “don’t know where to fit you”, according to a news report from the Melbourne tabloid Herald Sun.

People who enter politics are paid well in Australia, far above what people are paid by the private sector, unless one is very high up in the hierarchy.

Politicians get where they are by doing favours for people in high places and moving up the greasy pole.

They get all kinds of fancy allowances and benefits. They have no scruples about taking from the public purse whenever they can without getting caught.

They are the worst kind of scum.

Australia is a highly over-governed place, with three levels of government: the national parliament, the parliaments in the different states and territories and the local governments.

At each level there is plenty of scope for fattening one’s own lamb. There are a handful of people who have some kind of vocation for public service; the rest are out to grab whatever they can before they are voted out.

Nobody should have any pity for people of this kind given what they do when they are in office. About the only thing they do is to prepare things so that they will have a job here, there or anywhere when they finally get thrown out of politics.

Some get lanced so early in their political lives that they are unprepared. Perhaps they should be put to work as garbage collectors. But one doubts they would have the physical and mental fortitude to get through such a job.

Planet DebianChris Lamb: Free software activities in August 2019

Here is my monthly update covering most of what I have been doing in the free software world during August 2019 (previous month):

  • Opened pull requests to make the build reproducible for Mozilla's Bleach [...] and the re2c regular expression library [...].

Tails

For the Tails privacy-oriented operating system, I made a number of updates as part of the pkg-privacy-tools team in Debian:

  • onionshare:

    • Package new upstream version 2.1. [...]
    • Correct spelling, format and syntax errors in manpage.
    • Update debian/copyright; socks.py no longer in upstream.
    • Misc updates:
      • Drop "ancient" X-Python3-Version specifier (satisfied in oldoldstable).
      • Move to debhelper compatibility level 12 and use the debhelper-compat virtual package, dropping debian/compat.
    • debian/watch: Ignore dev releases and move to version 4 format.
  • monkeysphere:

    • Prevent a FTBFS by updating the tests to accommodate an updated GnuPG in stretch now producing a different output. (#934034).

    • I also filed a "proposed update" to actually update the package in the stretch distribution. (#934775)

  • onioncircuits: Update continuous integration tests to the Python 3.x version of Dogtail. (#935174)

  • seahorse-nautilus: (Almost) no-change upload to unstable to ensure migration to the testing distribution as binaries were uploaded with previous 3.11.92-3 release. [...]

  • obfs4proxy: Move to using the debian-compat virtual package, level 12. [...]


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month:


I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

Improvements:

  • Don't fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section in an ELF binary. For example, when the .frames section is of the NOBITS type its contents are apparently "unreliable" and thus readelf(1) returns 1. (#58, #931962)
  • Include either standard error or standard output (not just the latter) when an external command fails. [...]

Bug fixes:

  • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
  • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. [...]
  • Correct a reference to parser.diff; diff in this context is a Python function in the module. [...]
  • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. [...]

Testsuite improvements:

  • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. [...]
  • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
  • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. [...]

Improve debugging:

  • Add the containing module name to the (eg.) Using StaticLibFile for ... debugging messages. [...]
  • Strip off trailing "original size modulo 2^32 671" (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
  • Avoid a lack of space between "... with return code 1" and "Standard output". [...]
  • Improve debugging output when instantiating our Comparator object types. [...]
  • Add a literal "eg." to the comment on stripping "original size modulo..." text to emphasise that the actual numbers are not fixed. [...]

Internal code improvements:

  • No need to parse the section group from the class name; we can pass it via type built-in kwargs argument. [...]
  • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as "no" difference. [...]
  • Simplify parsing of optional command_args argument to Difference.from_command_exc. [...]
  • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. [...]
  • Reposition a comment regarding an exception within the indented block to match Python code convention. [...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add support for enabling and disabling specific normalizers via the command line. (#10)
  • Drop accidentally-committed warning emitted on every fixture-based test. [...]
  • Reintroduce the .ar normalizer [...] but disable it by default so that it can be enabled with --normalizers=+ar or similar. (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. [...]

Debian

Lintian

More hacking on the Lintian static analysis tool for Debian packages, including uploading versions 2.17.0, 2.18.0 and 2.19.0:

New features:

Bug fixes:

Other:


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, etc.

  • Investigated and triaged cent, clamav, enigmail, freeradius, ghostscript, libcrypto++, musl, open-cobol, pango1.0, php5, python-django, python-werkzeug, radare2, salt, subversion, suricata, u-boot, xtrlock & yara.

  • Updated our lts-cve-triage.py script to correct undefined reference to colored when standard output is not a terminal [...] and address a number of flake8 issues [...].

  • Worked on a number of iterations towards a comprehensive patch to xtrlock to address an issue whereby multitouch events (such as on a tablet or many modern laptops) are not correctly locked. Originally filed by a user as #830726; whilst triaging issues for this package I was able to reproduce it. I thus requested and was granted my first CVE number (CVE-2016-10894) and hope to upload a patched version early next month.

  • Issued DLA 1896-1 to fix a remote arbitrary code execution vulnerability in commons-beanutils, a set of tools and utilities for manipulating JavaBeans.

  • Issued DLA 1872-1 for the Django web development framework correcting two denial of service vulnerabilities and requiring a backport of upstream's patch series. I also fixed these issues in the buster distribution as well as an SQL injection possibility and potential memory exhaustion issues.

You can find out more about the project in the following video:


Debian uploads


FTP Team

As a Debian FTP assistant I ACCEPTed 28 packages: bitshuffle, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-ensighten-udnssdk, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-hansrodtang-randomcolor, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-mitchellh-go-linereader, golang-github-nesv-go-dynect, golang-github-sethvargo-go-fastly, golang-github-terra-farm-udnssdk, golang-github-yourbasic-graph, golang-github-ziutek-mymysql, golang-gopkg-go-playground-colors.v1, gulkan, kdeplasma-applets-xrdesktop, libcds, libinputsynth, openvr, parfive, transip, znc & znc-push.

,

CryptogramFriday Squid Blogging: Why Mexican Jumbo Squid Populations Have Declined

A group of scientists conclude that it's shifting weather patterns and ocean conditions.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDWhat does it mean to become a TED Fellow?

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Every year, TED begins a new search looking for the brightest thinkers and innovators to be part of the TED Fellows program. With nearly 500 visionaries representing 300 different disciplines, these extraordinary individuals are making waves, disrupting the status quo and creating real impact.

Through a rigorous application process, we narrow down our candidate pool of thousands to just 20 exceptional people. (Trust us, this is not easy to do.) You may be wondering what makes for a good application (read more about that here), but just as importantly: What exactly does it mean to be a TED Fellow? Yes, you’ll work hand-in-hand with the Fellows team to give a TED Talk on stage, but being a Fellow is so much more than that. Here’s what happens once you get that call.

1. You instantly have a built-in support system.

Once selected, Fellows become part of our active global community. They are connected to a diverse network of other Fellows who they can lean on for support, resources and more. To get a better sense of who these people are (fishing cat conservationists! space environmentalists! police captains!), take a closer look at our class of 2019 Fellows, who represent 12 countries across four continents. Their common denominator? They are looking to address today’s most complex challenges and collaborate with others — which could include you.

2. You can participate in TED’s coaching and mentorship program.

To help Fellows achieve an even greater impact with their work, they are given the opportunity to participate in a one-of-a-kind coaching and mentoring initiative. Collaboration with a world-class coach or mentor helps Fellows maximize effectiveness in their professional and personal lives and make the most of the fellowship.

The coaches and mentors who support the program are some of the world’s most effective and intuitive individuals, each inspired by the TED mission. Fellows have reported breakthroughs in financial planning, organizational effectiveness, confidence and interpersonal relationships thanks to coaches and mentors. Head here to learn more about this initiative. 

3. You’ll receive public relations guidance and professional development opportunities, curated through workshops and webinars. 

Have you published exciting new research or launched a groundbreaking project? We partner with a dedicated PR agency to provide PR training and valuable media opportunities with top tier publications to help spread your ideas beyond the TED stage. The TED Fellows program has been recognized by PR News for our “PR for Fellows” program.

In addition, there are vast opportunities for Fellows to hone their skills and build new ones through invigorating workshops and webinars that we arrange throughout the year. We also maintain a Fellows Blog, where we continue to spotlight Fellows long after they give their talks.

***

Over the last decade, our program has helped Fellows impact the lives of more than 180 million people. Success and innovation like this doesn’t happen in a vacuum — it’s sparked by bringing Fellows together and giving them this kind of support. If this sounds like a community you want to join, apply to become a TED Fellow by August 27, 2019 11:59pm UTC.

Sociological ImagesSurviving Student Debt

Recent estimates indicate that roughly 45 million students in the United States have incurred student loans during college. Democratic candidates like Senators Elizabeth Warren and Bernie Sanders have proposed legislation to relieve or cancel this debt burden. Sociologist Tressie McMillan Cottom’s congressional testimony on behalf of Warren’s student loan relief plan last April reveals the importance of sociological perspectives on the debt crisis. Sociologists have recently documented the conditions driving student loan debt and its impacts across race and gender. 

College debt is the new black.
Photo Credit: Mike Rastiello, Flickr CC

In recent decades, students have enrolled in universities at increasing rates due to the “education gospel,” where college credentials are touted as public goods and career necessities, encouraging students to seek credit. At the same time, student loan debt has rapidly increased, leading students to ask whether the risks of loan debt during early adulthood outweigh the rewards of a college degree. Student loan risks include economic hardship, mental health problems, and delayed adult transitions such as starting a family. Individual debt has also led to disparate impacts among students of color, who are more likely to hail from low-income families. Recent evidence suggests that Black students are more likely to drop out of college due to debt and return home after incurring more debt than their white peers. Racial disparities in student loan debt continue into borrowers’ mid-thirties and impact the white-Black racial wealth gap.

365.75
Photo Credit: Kirstie Warner, Flickr CC

Other work reveals gendered disparities in student debt. One survey found that while women were more likely to incur debt than their male peers, men with higher levels of student debt were more likely to drop out of college than women with similar amounts of debt. The authors suggest that women’s labor market opportunities — often more likely to require college degrees than men’s — may account for these differences. McMillan Cottom’s interviews with 109 students from for-profit colleges uncovers how Black, low-income women in particular bear the burden of student loans. For many of these women, the rewards of college credentials outweigh the risks of high student loan debt.

Amber Joy is a PhD candidate in the Department of Sociology at the University of Minnesota. Her current research interests include punishment, policing, victimization, youth, and the intersections of race, gender, and sexuality. Her dissertation explores youth responses to sexual violence within youth correctional facilities.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPhishers are Angling for Your Cloud Providers

Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals.

Stamford, Ct.-based United Rentals [NYSE:URI] is the world’s largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rental customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked.

While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site (unitedrentals.com).

A screen shot of the malicious email that spoofed United Rentals.

In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner.

“Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read.

“The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.”

United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems.

“At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer.

United Rentals would not name the third party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014 and visible in the screenshot above as “wVw.unitedrentals.com”) point to Pardot, an email marketing division of cloud CRM giant Salesforce.

Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems.

Salesforce told KrebsOnSecurity that this was not a compromise of Pardot, but of a Pardot customer account that was not using multi-factor authentication.

“UR uses a third party marketing agency that utilizes the Pardot platform,” said Salesforce spokesman Bradford Burns. “The third party marketing agency is who was compromised, not a Pardot employee.”

This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud.

“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.”

Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks.

Image: APWG

Update, 2:55 p.m. ET: Added comments and responses from Salesforce.

Planet DebianDimitri John Ledkov: How to disable TLS 1.0 and TLS 1.1 on Ubuntu

Example of website that only supports TLS v1.0, which is rejected by the client

Overview

TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently, Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to potentially establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate from TLS v1.0 and TLS v1.1.

As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default and thus require TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will require TLS v1.2 as the minimum TLS version.

To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.

How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

  1. Create policy directory
    sudo mkdir -p /etc/opt/chrome/policies/managed
  2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
    {
        "SSLVersionMin" : "tls1.2"
    }

How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

  1. Navigate to about:config in the URL bar
  2. Search for security.tls.version.min setting
  3. Set it to 3, which stands for a minimum of TLS v1.2

How to disable TLS v1.0 and TLS v1.1 in OpenSSL

  1. Edit /etc/ssl/openssl.cnf
  2. After oid_section stanza add
    # System default
    openssl_conf = default_conf
  3. At the end of the file add
    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2
  4.  Save the file

How to disable TLS v1.0 and TLS v1.1 in GnuTLS

  1. Create config directory
    sudo mkdir -p /etc/gnutls/
  2. Create /etc/gnutls/default-priorities with
    SYSTEM=SECURE192:-VERS-ALL:+VERS-TLS1.3:+VERS-TLS1.2 
After performing the above tasks, most common applications will use TLS v1.2+.
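
To spot-check any one site, a short sketch like the following (my own, assuming Python 3.7+ for the ssl.TLSVersion API) refuses anything below TLS v1.2 and reports the negotiated version:

    import socket
    import ssl

    def negotiated_tls_version(host, port=443):
        # Client context that rejects TLS v1.0 and v1.1 outright
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

    # A server stuck on TLS v1.0/v1.1 fails the handshake with ssl.SSLError
    print(negotiated_tls_version("ubuntu.com"))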

I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?

Planet DebianJonathan Dowland: PhD Stage 1 Progression Report

As promised, here's the report I wrote for my PhD Stage 1 progression in the hope that it is useful or interesting to someone. I've made some very small modifications to the submitted copy in order to remove some personal information.

I'll reiterate something from when I published my proposal:

A document produced for one institution's expectations might not be directly applicable to another. … You don't have any idea whether it has been judged to be a particularly good or bad one by those who received it (you can make your own judgements).

CryptogramAttacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that's not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn't imagine that they would be necessary. The results are predictable.

The paper: "Practical Enclave Malware with Intel SGX."

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel's threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user's behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave, which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

Worse Than FailureError'd: Resistant to Change

Tom H. writes, "They got rid of their old, outdated fax machine, but updating their website? Yeah, that might take a while."

 

"In casinos, they say the house always wins. In this case, when I wanted to cash in my winnings, I gambled and lost against Windows 7 Professional," Michelle M. wrote.

 

Martin writes, "Wow! It's great to see Apple is going the extra mile by protecting my own privacy from myself!"

 

"Yes, Amazon Photos, with my mouse clicks, I will fix you," wrote Amos B.

 

"When searches go wrong at AliExpress they want you to know these three things," Erwan R. wrote.

 

Chris A. writes, "It's like Authy is saying 'I have no idea what you just did, but, on the bright side, there weren't any errors!'"

 

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Krebs on SecurityRansomware Bites Dental Data Backup Firm

PerCSoft, a Wisconsin-based company that manages a remote data backup service relied upon by hundreds of dental offices across the country, is struggling to restore access to client systems after falling victim to a ransomware attack.

West Allis, Wis.-based PerCSoft is a cloud management provider for Digital Dental Record (DDR), which operates an online data backup service called DDS Safe that archives medical records, charts, insurance documents and other personal information for various dental offices across the United States.

The ransomware attack hit PerCSoft on the morning of Monday, Aug. 26, and encrypted dental records for some — but not all — of the practices that rely on DDS Safe.

PerCSoft did not respond to requests for comment. But Brenna Sadler, director of communications for the Wisconsin Dental Association, said the ransomware encrypted files for approximately 400 dental practices, and that somewhere between 80-100 of those clients have now had their files restored.

Sadler said she did not know whether PerCSoft and/or DDR had paid the ransom demand, what ransomware strain was involved, or how much the attackers had demanded.

But updates to PerCSoft’s Facebook page and statements published by both PerCSoft and DDR suggest someone may have paid up: The statements note that both companies worked with a third party software company and were able to obtain a decryptor to help clients regain access to files that were locked by the ransomware.

Update: Several sources are now reporting that PerCSoft did pay the ransom, although it is not clear how much was paid. One member of a private Facebook group dedicated to IT professionals serving the dental industry shared the following screenshot, which is purportedly from a conversation between PerCSoft and an affected dental office, indicating the cloud provider was planning to pay the ransom:

Another image shared by members of that Facebook group indicates the ransomware that attacked PerCSoft is an extremely advanced and fairly recent strain known variously as REvil and Sodinokibi.

Original story:

However, some affected dental offices have reported that the decryptor did not work to unlock at least some of the files encrypted by the ransomware. Meanwhile, several affected dentistry practices said they feared they might be unable to process payroll payments this week as a result of the attack.

Cloud data and backup services are a prime target of cybercriminals who deploy ransomware. In July, attackers hit QuickBooks cloud hosting firm iNSYNQ, holding data hostage for many of the company’s clients. In February, cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly urging their customers that paying up is the fastest route back to business-as-usual.

It remains unclear whether PerCSoft or DDR — or perhaps their insurance provider — paid the ransom demand in this attack. But new reporting from independent news outlet ProPublica this week sheds light on another possible explanation why so many victims are simply coughing up the money: Their insurance providers will cover the cost — minus a deductible that is usually far less than the total ransom demanded by the attackers.

More to the point, ProPublica found, such attacks may be great for business if you’re in the insurance industry.

“More often than not, paying the ransom is a lot cheaper for insurers than the loss of revenue they have to cover otherwise,” said Minhee Cho, public relations director of ProPublica, in an email to KrebsOnSecurity. “But, by rewarding hackers, these companies have created a perverted cycle that encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”

“In fact, it seems hackers are specifically extorting American companies that they know have cyber insurance,” Cho continued. “After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware.”

Read the full ProPublica piece here. And if you haven’t already done so, check out this outstanding related reporting by ProPublica from earlier this year on how security firms that help companies respond to ransomware attacks also may be enabling and emboldening attackers.

Planet DebianDirk Eddelbuettel: anytime 0.3.6

A fresh and very exciting release of the anytime package is arriving on CRAN right now. This is the seventeenth release, and it comes pretty much exactly one month after the preceding 0.3.5 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release updates a number of things (see below for details). For users, maybe the most important change is that we now also convert single-digit days, i.e. a not-quite ISO input like “2019-7-5” passes. This required adding %e as a day format; I had overlooked this detail in the (copious) Boost date_time documentation. Another nice change is that we now use standard S3 dispatching rather than a manual approach, as we probably should have for a long time :-) but better late than never. The code change was actually rather minimal and done in a few minutes. Another change is a further extended use of unit testing via the excellent tinytest package, which remains a joy to use. We also expanded the introductory pdf vignette; the benchmark comparisons we included look pretty decent for anytime, which still combines ease of use and versatility with performance.

Lastly, a somewhat sad “lowlight”. We submitted the package to the Journal of Open Source Software, who told us within days that anytime was unworthy for lack of research focus. Needless to say, we disagree. So here is a plea: If you use anytime in a research setting, would you mind adding to this very issue ticket and saying so? This may permit us a somewhat more emphatic data-driven riposte to the editors. Many thanks in advance for considering this.

The full list of changes follows.

Changes in anytime version 0.3.6 (2019-08-29)

  • Added, and then removed, required file for JOSS; added 'unworthy' badge as we earned a desk reject (cf #1605 there).

  • Renamed internal helper function format() to fmt() to avoid clashes with base::format() (Dirk in #104).

  • Use S3 dispatch and generics for key functions (Dirk in #106).

  • Continued to tweak tests as we find some of the rhub platform to behave strangely (Dirk via commits as well as #107).

  • Added %e format for single-digit day parsing by Boost (Dirk addressing at least #24, #70 and #99).

  • Expanded and updated vignette with benchmark comparisons.

  • Updated unit tests using tinytest which remains a pleasure to use; versioned Suggests: is now '>= 1.0.0'.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramAI Emotion-Detection Arms Race

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI's outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.
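
As a toy illustration (my own sketch, not the researchers' method), here is what "flattening" a single emotional indicator, such as pitch variation, could look like on synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic pitch contour: a wandering tone with expressive variation
    pitch_hz = 180 + 40 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 5, 200)

    def flatten(contour, strength=0.8):
        # Move each sample most of the way toward the mean pitch, reducing
        # the variation an emotion classifier keys on.
        return contour.mean() + (1.0 - strength) * (contour - contour.mean())

    flattened = flatten(pitch_hz)
    print(f"std before: {pitch_hz.std():.1f} Hz, after: {flattened.std():.1f} Hz")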

Academic paper.

Worse Than FailureCodeSOD: Bassackwards Compatibility

A long time ago, you built a web service. It was long enough ago that you chose XML as your serialization format. It worked fine, but before long, customers started saying that they’d really like to use JSON, so now you need to expose a slightly different, JSON-powered version of your API. To make it easy, you release a JSON client developers can drop into their front-ends.

Conor is one of those developers, and while examining the requests the client sent, he discovered a unique way of making your XML web-service JSON-friendly.

{"fetch":"<fetch version='1.0'><entity><entityDescriptor id='10'/>…<loadsMoreXML/></entity></fetch>"}

Simplicity itself!

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Planet DebianSteve McIntyre: If you can't stand the heat, get out of the kitchen...

Wow, we had a hot weekend in Cambridge. About 40 people turned up to our place in Cambridge for this year's OMGWTFBBQ. Last year we were huddling under the gazebos for shelter from torrential rain; this year we again had all the gazebos up, but this time to hide from the sun instead. We saw temperatures well into the 30s, which is silly for Cambridge at the end of August.

I think it's fair to say that everybody enjoyed themselves despite the ludicrous heat levels. We had folks from all over the UK, and Lars and Soile travelled all the way from Helsinki in Finland; we even helped Lars celebrate his birthday!

cake!

We had a selection of beers again from the nice folks at Milton Brewery:
is 3 firkins enough?

Lars made pancakes, Paul made bread, and people brought lots of nice food and drink with them too.

Many thanks to a number of awesome friendly companies for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

Planet DebianJulien Danjou: The Art of PostgreSQL is out!


If you remember well, a couple of years ago, I wrote about Mastering PostgreSQL, a fantastic book written by my friend Dimitri Fontaine.

Dimitri is a long-time PostgreSQL core developer — he wrote the extension support in PostgreSQL, no less. He is featured in my book Serious Python, where he advises on using databases and ORMs in Python.

Today, Dimitri comes back with the new version of this book, named The Art of PostgreSQL.

As a bonus, here's a picture of me and Dimitri having fun at a PostgreSQL meetup!

I love the motto of this book: Turn Thousands of Lines of Code into Simple Queries. I have spent all my career working with code that talks to databases, and I can't count the number of times I've seen people write lengthy, slow code in their pet language rather than a single well-thought-out SQL query that would do a better job.


This is exactly what this book is about.

That's why it's my favorite SQL book. I learned so many things from it. In many cases, I've been able to divide by 10 the size of the code I had to write in Python to implement a feature. All I had to do was browse the book to discover the right PostgreSQL feature and write a single SQL query. The right query that does the job for me.

Less code, fewer bugs, more happiness!
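
To make that concrete, here is a sketch of my own (not an excerpt from the book; the sales table and the psycopg2 driver are assumptions for illustration) showing the same report written both ways:

    import psycopg2

    conn = psycopg2.connect("dbname=shop")  # hypothetical database
    cur = conn.cursor()

    # The lengthy way: drag every row into Python and aggregate by hand.
    cur.execute("SELECT product_id, amount FROM sales")
    totals = {}
    for product_id, amount in cur.fetchall():
        totals[product_id] = totals.get(product_id, 0) + amount
    top10 = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]

    # The single well-thought-out query: let PostgreSQL do the job.
    cur.execute("""
        SELECT product_id, SUM(amount) AS total
          FROM sales
         GROUP BY product_id
         ORDER BY total DESC
         LIMIT 10
    """)
    top10 = cur.fetchall()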

The book also features interviews with great PostgreSQL users and developers — hey, it's no mystery where Dimitri got this idea, right? ;-)


I loved those interviews. What's better than reading Kris Jenkins explaining how Clojure and PostgreSQL play nice together, or Markus Winand (from the famous use-the-index-luke.com) talking about the relationship developers have with their database. :-)

Needless to say, you should get your hands on this right now. Dimitri is running a launch offer with a 15% discount on the book until the end of this month! You can also read the free chapter to get an idea of what you'll get.

Last thing: it's DRM-free and comes with a money-back guarantee. You can get this book with your eyes closed.


Google AdsenseSimplifying our content policies for publishers

One of our top priorities is to sustain a healthy digital advertising ecosystem, one that works for everyone: users, advertisers and publishers. On a daily basis, teams of Google engineers, policy experts, and product managers combat and stop bad actors. Just last year, we removed 734,000 publishers and app developers from our ad network and ads from nearly 28 million pages that violated our publisher policies.

But we’re not just stopping bad actors. Just as critical to our mission is the work we do every day to help good publishers in our network succeed. One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so they are easier to understand and follow. That’s why we'll be simplifying the way our content policies are presented to publishers, and standardizing content policies across our publisher products.

A simplified publisher experience
In September, we’ll update the way our publisher content policies are presented with a clear outline of the types of content where advertising is not allowed or will be restricted.

Our Google Publisher Policies will outline the types of content that are not allowed to show ads through any of our publisher products. This includes policies against illegal content, dangerous or derogatory content, and sexually explicit content, among others.

Our Google Publisher Restrictions will detail the types of content, such as alcohol or tobacco, that don’t violate policy, but that may not be appealing for all advertisers. Publishers will not receive a policy violation for trying to monetize this content, but only some advertisers and advertising products—the ones that choose this kind of content—will bid on it. As a result, Google Ads will not appear on this content and this content will receive less advertising than non-restricted content will. 

The Google Publisher Policies and Google Publisher Restrictions will apply to all publishers, regardless of the products they use—AdSense, AdMob or Ad Manager.

These changes are the next step in our ongoing efforts to make it easier for publishers to navigate our policies so their businesses can continue to thrive with the help of our publisher products.


Posted by:
Scott Spencer, Director of Sustainable Ads


CryptogramThe Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: "After all, we are not talking about protecting the nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications."

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

This wasn't true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn't have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications -- Advanced Encryption Standard (AES) -- is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense's classified analogs of the Internet -- Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren't yet public -- use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example -- and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn't see active combat, it's modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it's nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.

LongNowThe Vineyard Gazette on Revive & Restore’s Heath Hen De-extinction Efforts

 The world’s last heath hen went extinct in Martha’s Vineyard in 01932. The Revive & Restore team recently paid a visit there to discuss their efforts to bring the species back.

Members of the Revive & Restore team next to a statue of Booming Ben, the last heath hen.

From the Vineyard Gazette:

Buried deep within the woods of the Manuel Correllus State Forest is a statue of Booming Ben, the world’s final heath hen. Once common all along the eastern seaboard, the species was hunted to near-extinction in the 1870s. Although a small number of the birds found refuge on Martha’s Vineyard, they officially disappeared in 1932 — with Booming Ben, the last of their kind, calling for female mates who were no longer there to hear him.

“There is no survivor, there is no future, there is no life to be recreated in this form again,” Gazette editor Henry Beetle Hough wrote. “We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light. We are in touch with the reality of extinction.”

The statue memorializes that reality.

Since 2013, however, a group of cutting-edge researchers with the group Revive and Restore have been hard at work to bring back the heath hen as part of an ambitious avian de-extinction project. The project got started when Ryan Phelan, who co-founded Revive and Restore with her husband, scientist and publisher of the Whole Earth Catalogue, Stewart Brand, began to think broadly about the goals for their organization.

“We started by saying what’s the most wild idea possible?” Ms. Phelan said. “What’s the most audacious? That would be bringing back an extinct species.”

Read the piece in full here.

Worse Than FailureTeleported Release

Matt works at an accounting firm, as a data engineer. He makes reports for people who don’t read said reports. Accounting firms specialize in different areas of accountancy, and Matt’s firm is a general firm with mid-size clients.

The CEO of the firm is a legacy from the last century. The most advanced technology on his desk is a business calculator and a pencil sharpener. He still doesn’t use a cellphone. But he does have a son, who is “tech savvy”, which gives the CEO a horrible idea of how things work.

Usually, the requests this generates are pretty light: sorting Excel files or sorting the output of an existing report. Sometimes the requests are bizarre or utter nonsense. And, because the boss doesn’t know what the technical folks are doing, some of the IT staff may be a bit lazy about following best practices.

This means that most of Matt’s morning is spent doing what is essentially Tier 1 support before he gets into doing his real job. Recently, there was a worse crunch, as actual support person Lucinda was out on maternity leave, and Jackie, the one other developer, was off on vacation on a foreign island with no Internet. Matt was in the middle of eating a delicious lunch of take-out lo mein when his phone rang. He sighed when he saw the number.

“Matt!” the CEO exclaimed. “Matt! We need to do a build of the flagship app! And a deploy!”

The app was rather large, and a build could take upwards of 45 minutes, depending on the day and how the IT gods were feeling. But the process was automated: the latest changes all got built and deployed each night. Anything approved was released within 24 hours. With everyone out of the office, there hadn’t been any approved changes for a few weeks.

Matt checked the Github to see if something went wrong with the automated build. Everything was fine.

“Okay, so I’m seeing that everything built on GitHub and everything is available in production,” Matt said.

“I want you to do a manual build, like you used to.”

“If I were to compile right now, it could take quite a while, and redeploying runs the risk of taking our clients offline, and nothing would be any different.”

“Yes, but I want a build that has the changes which Jackie was working on before she left for vacation.”

Matt checked the commit history, and sure enough, Jackie hadn’t committed any changes since two weeks before leaving on vacation. “It doesn’t look like she pushed those changes to GitHub.”

“Githoob? I thought everything was automated. You told me the process was automated,” the CEO said.

“It’s kind of like…” Matt paused to think of an analogy that could explain this to a golden retriever. “Your dishwasher, you could put a timer on it to run it every night, but if you don’t load the dishwasher first, nothing gets cleaned.”

There was a long pause as the CEO failed to understand this. “I want Jackie’s front-page changes to be in the demo I’m about to do. This is for Initech, and there’s millions of dollars riding on their account.”

“Well,” Matt said, “Jackie hasn’t pushed- hasn’t loaded her metaphorical dishes into the dishwasher, so I can’t really build them.”

“I don’t understand, it’s on her computer. I thought these computers were on the cloud. Why am I spending all this money on clouds?”

“If Jackie doesn’t put it on the cloud, it’s not there. It’s uh… like a fax machine, and she hasn’t sent us the fax.”

“Can’t you get it off her laptop?”

“I think she took it home with her,” Matt said.

“So?”

“Have you ever seen Star Trek? Unless Scotty can teleport us to Jackie’s laptop, we can’t get at her files.”

The CEO locked up on that metaphor. “Can’t you just hack into it? I thought the NSA could do that.”

“No-” Matt paused. Maybe Matt could try and recreate the changes quickly? “How long before this meeting?” he asked.

“Twenty minutes.”

“Just to be clear, you want me to do a local build with files I don’t have by hacking them from a computer which may or may not be on and connected to the Internet, and then complete a build process which usually takes 45 minutes- at least- deploy to production, so you can do a demo in twenty minutes?”

“Why is that so difficult?” the CEO demanded.

“I can call Jackie, and if she answers, maybe we can figure something out.”

The CEO sighed. “Fine.”

Matt called Jackie. She didn’t answer. Matt left a voicemail and then went back to eating his now-cold lo mein.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Krebs on SecurityCybersecurity Firm Imperva Discloses Breach

Imperva, a leading provider of Internet firewall services that help Web sites block malicious cyberattacks, alerted customers on Tuesday that a recent data breach exposed email addresses, scrambled passwords, API keys and SSL certificates for a subset of its firewall users.

Redwood Shores, Calif.-based Imperva sells technology and services designed to detect and block various types of malicious Web traffic, from denial-of-service attacks to digital probes aimed at undermining the security of Web-based software applications.

Image: Imperva

Earlier today, Imperva told customers that it learned on Aug. 20 about a security incident that exposed sensitive information for some users of Incapsula, the company’s cloud-based Web Application Firewall (WAF) product.

“On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017,” wrote Heli Erickson, director of analyst relations at Imperva.

“We want to be very clear that this data exposure is limited to our Cloud WAF product,” Erickson’s message continued. “While the situation remains under investigation, what we know today is that elements of our Incapsula customer database from 2017, including email addresses and hashed and salted passwords, and, for a subset of the Incapsula customers from 2017, API keys and customer-provided SSL certificates, were exposed.”

Companies that use the Incapsula WAF route all of their Web site traffic through the service, which scrubs the communications for any suspicious activity or attacks and then forwards the benign traffic on to its intended destination.

Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers in business today.

According to Mogull, an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites.

At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings and exempt or “whitelist” from the WAF’s scrubbing technology any traffic coming from the attacker. A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker.

“Attackers could whitelist themselves and begin attacking the site without the WAF’s protection,” Mogull told KrebsOnSecurity. “They could modify any of the Incapsula security settings, and if they got [the target’s SSL] certificate, that can potentially expose traffic. For a security-as-a-service provider like Imperva, this is the kind of mistake that’s up there with their worst nightmare.”

Imperva urged all of its customers to take several steps that might mitigate the threat from the data exposure, such as changing passwords for user accounts at Incapsula, enabling multi-factor authentication, resetting API keys, and generating/uploading new SSL certificates.

Alissa Knight, a senior analyst at Aite Group, said the exposure of Incapsula users’ scrambled passwords and email addresses was almost incidental given that the intruders also made off with customer API keys and SSL certificates.

Knight said although we don’t yet know the cause of this incident, such breaches at cloud-based firms often come down to small but ultimately significant security failures on the part of the provider.

“The moral of the story here is that people need to be asking tough questions of software-as-a-service firms they rely upon, because those vendors are being trusted with the keys to the kingdom,” Knight said. “Even if the vendor in question is a cybersecurity company, it doesn’t necessarily mean they’re eating their own dog food.”

CryptogramThe Threat of Fake Academic Research

Interesting analysis of the possibility, feasibility, and efficacy of deliberately fake scientific research, something I had previously speculated about.

Worse Than FailureCodeSOD: Yesterday's Enterprise

I bumped into a few co-workers (and a few readers- that was a treat!) at Abstractions last week. My old co-workers informed me that the mainframe system, which had been “going away any day now” since about 1999, had finally gone away, as of this year.

A big part of my work at that job had been about running systems in parallel with the mainframe in some fashion, which meant I made a bunch of “datapump” applications which fed data into or pulled data out of the mainframe. Enterprise organizations often don’t know what their business processes are: the software which encodes the process is older than most anyone working in the organization, and it must work that way, because that’s the process (even though no one knows why).

Robert used to work for a company which offers an “enterprise” product, and since they know that their customers don’t actually know what they’re doing, this product can run in parallel with their existing systems. Of course, running in parallel means that you need to synchronize with the existing system.

So, for example, there were two systems. One we’ll call CQ and one we’ll call FP. Let’s say FP has the latest data. We need a method which updates CQ based on the state of FP. This is that method.

private boolean updateCQAttrFromFPAttrValue(CQRecordAttribute cqAttr, String cqtype,
        Attribute fpAttr)
        throws Exception
    {
        AppLogService.debug("Invoking " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");

        String csAttrName = cqAttr.getName();
        String csAttrtype = cqAttr.getAttrType();
        String str = avt.getFPAttributeValueAsString(fpAttr);
        if (str == null)
            return false;

        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) != 0
            || csAttrtype.compareTo(CQConstants.CQ_MULTILINE_STRING_TYPE) != 0)
        {
            String csStrValue = cqAttr.getStringValue();
            if (str == null) {
                return false;
            }
            if (csStrValue != null) {
                if (str.compareTo(csStrValue) == 0) // No need to update. Still
                // same values
                {
                    return false;
                }
            }
            cqAttr.setStringValue(str);
            AppLogService.debug("CQ Attribute Name- " + csAttrName + ", Type- "
                + csAttrtype + ", Value- " + str);
            AppLogService.debug("Exiting " + this.getClass().getName()
                + "->updateCSAttrFromFPAttrValue()");
            return true;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getCheckBox() != null) {
                boolean val = choicetype.getCheckBox().getValue();

                if (val) {
                    str = "1";
                }

                if (str.equals(cqAttr.getStringValue())) {
                    return false;
                }

                cqAttr.setStringValue(str);

                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCQAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_DATE_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getDate() != null) {
                Calendar cald = choicetype.getDate().getValue();
                if (cald == null) {
                    return false;
                } else {
                    SimpleDateFormat fmt = new SimpleDateFormat(template
                        .getAdapterdateformat());
                    cqAttr.setStringValue(fmt.format(cald.getTime()));
                }
                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCSAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }

        AppLogService.debug("Exiting " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");
        return false;
    }

For starters, I have to say that the method name is a thing of beauty: updateCQAttrFromFPAttrValue. It’s meaningful if you know the organizational jargon, but utterly opaque to everyone else in the world. Of course, this is the last time the code is clear even to those folks, as the very first line is a log message which outputs the wrong method name: updateCSAttrFromFPAttrValue. After that, all of our cqAttr properties get stuffed into csAttr variables.

And the fifth line: String str = avt.getFPAttributeValueAsString(fpAttr);

avt stands for “attribute value translator”, and yes, everything is string-ly typed, because of course it is.

That gets us five lines in, and it’s all downhill from there. Judging from all the getCheckBox() calls, we’re interacting with UI components directly, and pretty much every logging message outputs the wrong method name, except in the rare case where it doesn’t.

And as ugly and awful as this code is, it’s strangely familiar. Oh, I’ve never seen this particular bit of code before. But I have seen the code my old job wrote to keep the mainframe in sync with the Oracle ERP and the home-grown Access databases and internally developed dashboards and… it all looked pretty much like this.

The code you see here? This is the code that runs the world. This is what gets invoices processed, credit cards billed, inventory shipped, factories staffed, and hazardous materials accounted for.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Chaotic IdealismHow I Live Now

Years ago, when I was a biomedical engineering major and I thought I was going to be employable, I lived in an apartment and had a car and did all those things non-disabled people do. And I was stressed out, really stressed out, living on the edge of independence and just teetering, trying to keep my balance.

Eventually I switched majors from BME to psychology–an easier program, and one that interested me.

The car didn’t last long, totaled thanks to the poor reflexes and lack of short-notice judgment that make me a dangerous driver. My driver’s license ran out; now I just have a state ID. I moved closer to WSU, but my executive function was still bad, and it was hard for me to get to class. They sent a van across the street to pick me up. I forgot to study; they provided one of their testing rooms, distraction-free, so I would have somewhere away from the temptations of my apartment to study. They interceded with professors and got me extra time.

I was taking classes part-time, with intensive help from the department of disability services; I couldn’t sustain full-time work. If Wright State hadn’t been willing to go out of its way for me, I’d never have gotten a degree at all. I was diagnosed with dysthymia as well as episodic major depression, which explained why I never seemed to get my energy back after an episode.

I graduated. GPA 3.5, respectable. Dreaming of graduate school. Blew the practice GRE out of the water.

I tried to get a job. I worked with my job coach for more than a year. I wanted a graduate assistantship, but nobody wanted me. We looked at jobs that would let me use my education, but nobody was hiring. Eventually we branched out into more low-level work–hospital receptionist, dog kennel attendant, pharmacy technician. They were all part-time; by that point I knew better than to assume I could stick it out for a 40-hour work week.

The pharmacy tech job almost succeeded, but the boss couldn’t work with the assisted transport service that could only deliver me between the hours of 9 and 3–plus, they’d assured me it was part time, only to schedule me for 35 hours. I can only assume they hired “part-time” workers to avoid paying them benefits.

I signed up with Varsity Tutors to teach math, science, writing, and statistics. I enjoyed the work, especially when I got to use my writing ability to help someone communicate clearly, or made statistics understandable to someone unused to thinking about math. But it wasn’t steady work; you were competing with all the other tutors. You had to accept a new assignment within seconds, even before you knew what it was or whether you could teach that topic, because if you didn’t someone else would click on it first. Students paid a huge fee–$50 an hour or thereabouts–of which we only got about $10. Sometimes, when I grabbed a job that involved teaching something I myself hadn’t learned yet, I had to spend hours preparing for a one-hour session–and no, preparation hours aren’t paid.

I grew tired of cheating the customers; I’m not worth a $50-an-hour tutoring fee, and practically all of the money went to the company for doing nothing more than maintaining a Web service to match tutors and clients. And since I’d paid, out of my own pocket, for a tablet, Web cam, and internet connection, I hadn’t actually made any money anyway. I suppose I would have, if I’d stuck with it, but I just don’t like feeling so dishonest. It’s been more than a year since I last had contact with them, so I can say that. No more non-disclosure agreement. I’m sure they haven’t changed, though.

I was running out of money. My disability payments couldn’t pay for my rent. Eventually, a friend who was remodeling a house in a Cincinnati suburb offered me a rented room, within my means, and I accepted.

For a year, I lived in a room of a house undergoing remodeling. Eventually, I moved downstairs, into a finished basement room. College loan companies bombarded me with mail, demanding money I didn’t have. With the US government becoming increasingly unstable, I worried that if I even tried to work, I might lose Medicaid, and without a Medicaid buy-in available, I would have to choose between working and taking my medication (note: I cannot work if I am not taking my meds; in fact, I am in deadly danger if I do not take my meds). It didn’t help that my area has no particularly good public transport service, and the assisted transport service is–as always–unreliable and cannot be used to get to work.

Eventually I gave in. I applied for permanent disability discharge of my student loans, and was granted it. I feel dishonest–again–for not being able to predict, when I got my degree, that it wouldn’t make me employable. But there it is. The world doesn’t like to hire people who are different, or who need accommodations, or who can’t fit into the machinery of society.

But a person can’t just sit around. I do a lot of volunteer work now. I’m the primary researcher for ASAN’s disability day of mourning web site; I spend an hour or more every day monitoring the news, keeping records, and writing bios of disabled people murdered by their families and caregivers. I’ve kept up with my own Autism Memorial site, too, and the list is nearly 500 names long now. Seems like a lot, but my spreadsheet of disabled homicide victims in general is approaching five thousand.

Two days a week, I volunteer at the library. I put away books, straighten shelves, help patrons find things. The board of directors of the library fired all the pages years ago as a cost-cutting measure, so it’s volunteers like me that keep the books on the shelves while the employees are stuck manning the checkout desk or the book return. I find the work very meaningful, especially in the current political climate; libraries are wonderful, subversive places that teach a person to think on their own.

In the backyard of the house, I’m growing a garden. Gardening is new to me, but last year I had an overabundance of cherry tomatoes, and this year I’m growing tomatoes, peppers, cucumbers, carrots, sunflowers, and various herbs. I keep the lawn mowed and the bushes trimmed. The garden is a good thing, because lately my food stamps have been cut and I can’t really afford produce anymore.

My housemate’s girlfriend moved in with him last summer. She’s a sweet teacher with two guinea pigs and a love of stories. On Fridays, we drive for an hour to go play D&D with friends, and I bake cookies. I’ve learned to bake cookies over the last few years; at first it was just frozen cookie dough, then from scratch. I’ve gotten pretty good at it.

After my cat Tiny died of kidney failure, Christy got more vocal and demanding. She yells at me now when she wants attention, and climbs up on my bed to snuggle with me. She seems to think she needs to do the job of two cats. She’s getting older now, less able to climb to the top of the furniture or snatch a fly out of the air with her paws; but she still gets the kitty crazies, running around and skating on the rag rugs I made to keep the concrete floor from being quite so chilly.

I’m still myself–idealistic, protective, with a deep need to be useful. Living now is easier than it used to be when I had college loans; I just don’t buy anything I don’t absolutely need, help where I can, and let the rest go. I still have to deal with depression and with the executive dysfunction and weird brain of autism, but that’s a part of me, and I see no sense in looking down on myself just because I’m disabled.

I worry about the future. Just when it’s becoming crucial, our country’s dropping the ball on climate change. Our president is erratic, untrustworthy, and unethical. Authoritarianism looms large on the horizon. I do my best as a private citizen to help change things–with a focus on preserving democracy–but it’s still frightening, because disabled people are always the ones who get hurt first, right along with the poor and the minorities. I have quite a few deaths in ICE detainment in that database of mine, all of disabled immigrants. Why do people have to hate each other so much? Life is not a zero-sum game; if we help others, we ourselves benefit. We have so much to give; why are we refusing to share?

I find meaning in life from all the little things I do to make the world a little better, even if it’s just making cookies or showing a kid where to find the “Harry Potter” books. I used to think I might do something grand with my life, but now I don’t really think so. I think maybe a better world is made up of a lot of little people, all doing little things, all pushing in the right direction, until the sheer weight of numbers can move mountains.

CryptogramDetecting Credit Card Skimmers

Modern credit card skimmers hidden in self-service gas pumps communicate via Bluetooth. There's now an app that can detect them:

The team from the University of California San Diego, who worked with other computer scientists from the University of Illinois, developed an app called Bluetana which not only scans and detects Bluetooth signals, but can actually differentiate those coming from legitimate devices -- like sensors, smartphones, or vehicle tracking hardware -- from card skimmers that are using the wireless protocol as a way to harvest stolen data. The full details of what criteria Bluetana uses to differentiate the two isn't being made public, but its algorithm takes into account metrics like signal strength and other telltale markers that were pulled from data based on scans made at 1,185 gas stations across six different states.

LongNowDavid Byrne Launches New Online Magazine, Reasons to Be Cheerful

In his Long Now talk earlier this summer, David Byrne announced that he would soon launch a new website called Reasons to Be Cheerful. The premise, Byrne said, was to document stories and projects that give cause for optimism in troubles times. He was after solutions-oriented efforts that provided tangible lessons that could be broadly utilized in different parts of the world.

“I didn’t want something that would only be applied to one culture,” Byrne said.

Reasons to Be Cheerful has now officially launched. Here is Byrne on the project from the press release:

It often seems as if the world is going straight to Hell. I wake up in the morning, I look at the paper, and I say to myself, “Oh no!” Often I’m depressed for half the day. I imagine some of you feel the same.

Recently, I realized this isn’t helping. Nothing changes when you’re numb. So, as a kind of remedy, and possibly as a kind of therapy, I started collecting good news. Not schmaltzy, feel-good news, but stuff that reminded me, “Hey, there’s positive stuff going on! People are solving problems and it’s making a difference!”

I began telling others about what I’d found.

Their responses were encouraging, so I created a website called Reasons to be Cheerful and started writing. Later on, I realized I wanted to make the endeavor a bit more formal. So we got a team together and began commissioning stories from other writers and redesigned the website. Today, we’re relaunching Reasons to be Cheerful as an ongoing editorial project.

We’re telling stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful — that provide a more optimistic and, we believe, more accurate depiction of the world. We hope to balance out some of the amplified negativity and show that things might not be as bad as we think. Stop by whenever you need a reminder.

Learn More

  • Byrne also released a trailer for the website.
  • Watch David Byrne’s Long Now talk here.

Worse Than FailureCodeSOD: Checksum Yourself Before you Wrecksum Yourself

Mistakes happen. Errors crop up. Since we know this, we need to defend against it. When it comes to things like account numbers, we can make a rule about which numbers are valid by using a checksum. A simple checksum might be, "Add the digits together, and repeat until you get a single digit, which, after modulus with a constant, must be zero." This means that most simple data-entry errors will result in an invalid account number, but there's still a nice large pool of valid numbers to draw from.
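
For comparison, here is a minimal sketch of the rule as just described (my own, with a hypothetical modulus of 7; the article does not give the real constant):

    def digit_sum_checksum_ok(number, modulus=7):
        # Repeatedly sum the digits until a single digit remains,
        # then require that digit to be divisible by the modulus.
        n = abs(number)
        while n >= 10:
            n = sum(int(d) for d in str(n))
        return n % modulus == 0

    def next_valid(number):
        # Scan forward from a seed until a valid number is found.
        while not digit_sum_checksum_ok(number):
            number += 1
        return number

Keep that sketch in mind as you read what follows.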

James works for a company that deals with tax certificates, and thus needs to generate numbers which meet a similar checksum rule. Unfortunately for James, this is how his predecessor chose to implement it:

while (true)
{
    digits = "";
    for (int i = 0; i < certificateNumber.ToString().Length; i++)
    {
        int doubleDigit = Convert.ToInt32(certificateNumber.ToString().Substring(i, 1)) * 2;
        digits += (doubleDigit.ToString().Length > 1
            ? Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)) + Convert.ToInt32(doubleDigit.ToString().Substring(1, 1))
            : Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)));
    }
    int result = digits.ToString().Sum(c => c - '0');
    if ((result % 10) == 0)
        break;
    else
        certificateNumber++;
}

Whitespace added to make the ternary vaguely more readable.

We start by treating the number as a string, which allows us to access each digit individually, and as we loop, we'll grab a digit and double it. That, unfortunately, gives us a number, which is a big problem. There's absolutely no way to tell if a number is two digits long without turning it back into a string. Absolutely no way! So that's what we do. If the number is two digits, we'll split it back up and add those digits together.

Which again, gives us one of those pesky numbers. So once we've checked every digit, we'll convert that number back to a useful string, then Sum the characters in the string to produce a result. A result which, we hope, is divisible by 10. If not, we check the next number. Repeat and repeat until we get a valid result.
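
For contrast, the doubling-and-summing rule this code implements needs no string conversions at all. Here is a sketch of the same logic done arithmetically (illustrative only, not the actual fix):

function doubledDigitSum(n: number): number {
    let total = 0;
    while (n > 0) {
        const d = (n % 10) * 2;      // double each digit
        total += d < 10 ? d : d - 9; // for 10..18, the digit sum equals d - 9
        n = Math.floor(n / 10);
    }
    return total;
}

function nextValidCertificate(candidate: number): number {
    // advance until the checksum is divisible by 10
    while (doubledDigitSum(candidate) % 10 !== 0) {
        candidate++;
    }
    return candidate;
}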

The worst part, though, is that you can see from the while loop that this is just dropped into a larger method. This isn't a single function which generates valid certificate numbers. This is a block that gets dropped in line. Similar, but slightly different blocks are dropped in when numbers need to be validated. There's no single isValidCertificate method.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

CryptogramFriday Squid Blogging: Vulnerabilities in Squid Server

It's always nice when I can combine squid and security:

Multiple versions of the Squid web proxy cache server built with Basic Authentication features are currently vulnerable to code execution and denial-of-service (DoS) attacks triggered by the exploitation of a heap buffer overflow security flaw.

The vulnerability present in Squid 4.0.23 through 4.7 is caused by incorrect buffer management which renders vulnerable installations to "a heap overflow and possible remote code execution attack when processing HTTP Authentication credentials."

"When checking Basic Authentication with HttpHeader::getAuth, Squid uses a global buffer to store the decoded data," says MITRE's description of the vulnerability. "Squid does not check that the decoded length isn't greater than the buffer, leading to a heap-based buffer overflow with user controlled data."

The flaw was patched by the web proxy's development team with the release of Squid 4.8 on July 9.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Valerie AuroraHow to avoid supporting sexual predators

[TW: child sex abuse]

Recently, I received an email from a computer security company asking for more information on why I refuse to work with them. My reason? The company was founded by a registered child sex offender who still serves as its CTO, which I found out during my standard client research process.

My first reaction was, “Do I really need to explain why I won’t work with you???” but as I write this, we’re at the part of the Jeffrey Epstein news cycle where we are learning about the people in computer science who supported Epstein—after Epstein pleaded guilty to two counts of “procuring prostitution with a child under 18,” registered as a sex offender, and paid restitution to dozens of victims. As someone who outed her own father as a serial child molester, I can tell you that it is quite common for people to support and help known sexual predators in this way.

I would like to share how I actively avoid supporting sexual predators, as someone who provides diversity and inclusion training, mostly to software companies:

  1. When a new client approaches me, I find the names of the CEO, CTO, COO, board members, and founders—usually on the “About Us” or “Who We Are” or “Founders” page of the company’s web site. Crunchbase and LinkedIn are also useful for this step.
  2. For each of the CEO, CTO, COO, board members, and/or founders, I search their name plus “allegations,” “sexism,” “sexual assault,” “sexual harassment,” and “women.” I do this for the company name too.
  3. If I find out any executives, board members, or founders have been credibly accused of sexual harassment or assault, I refuse to work with that company.
  4. I look up the funders of the company on Crunchbase. If any of their funders are listed on Sexism and Racism in Venture Capital, I give the company extra scrutiny.
  5. If the company agreed to take funding from a firm (or person) after knowing the lead partner(s) were sexual harassers or predators, I refuse to work with that company.

If you don’t have time to do this personally, I recommend hiring or contracting with someone to do it for you.

That’s just part of my research process (I search for other terms, such as “racism”). This has saved me from agreeing to help make money for a sexual predator or harasser many times. Specifically, I’ve turned down 13 out of 303 potential clients for this reason, or about 4% of clients who approached me. To be sure, it has also cost me money—I’d estimate at least $50,000—but I’d like to believe that my reputation and conscience are worth more than that. If you’re not in a position where you can say no to supporting a sexual predator, you have my sympathy and respect, and I hope you can find a way out sooner or later.

Your research process will look different depending on your situation, but the key elements will be:

  1. Assume that sexual predators exist in your field and you don’t know who all of them are.
  2. When you are asked to work with or support someone new, do research to find out if they are a sexual predator.
  3. When you find out someone is probably a sexual predator, refuse to support them.

What do I do if, say, the CEO has been credibly accused of sexual harassment or assault but the company has taken appropriate steps to make amends and heal the harm done to the victims? I don’t know, because I can’t remember a potential client who did that. I’ve had plenty that published a non-apology, forced victims to sign NDAs for trivial sums of money, or (very rarely) fired the CEO but allowed them to keep all or most of their equity, board seat, voting rights, etc. That’s not enough, because the CEO hasn’t shown remorse, made amends, or removed themselves from positions of power.

I don’t think all sexual predators should be ostracized completely, but I do think everyone has a moral responsibility not to help known sexual predators back into positions of power and influence without strong evidence of reform. Power and influence are privileges which should only be granted to people who are unlikely to abuse them, not rights which certain people “deserve” as long as they claim to have reformed. Someone with a history of sexually predatory behavior should be assumed to be dangerous unless exhaustively proven otherwise. One sign of complete reform is that the former sexual predator will themselves avoid and reject situations in which power and access would make sexual abuse easy to resume.

In this specific case, the CTO of this company maintains a public web site which briefly and vaguely mentions the harm done to victims of sex abuse—and then devotes the majority of the text to passionately advocating for the repeal of sex offender registry laws because of the incredible harm they do to the health and happiness of convicted sex offenders. So, no, I don’t think he has changed meaningfully, he is not a safe person to be around, he should not be the CTO of a computer security company, and I should not help him gain more wealth.

Don’t be the person helping the sexual predator insinuate themself back into a position with easy access to victims. If your first instinct is to feel sorry for the powerful and predatory, you need to do some serious work on your sense of empathy. Plenty of people have shared what it’s like to be the victim of sexual harassment and assault; go read their stories and try to imagine the suffering they’ve been through. Then compare that to the suffering of people who occasionally experience moderate consequences for sexually abusing people with less power than themselves. I hope you will adjust your empathy accordingly.

Sociological ImagesFamily Matters

The ‘power elite’ as we conceive it, also rests upon the similarity of its personnel, and their personal and official relations with one another, upon their social and psychological affinities. In order to grasp the personal and social basis of the power elite’s unity, we have first to remind ourselves of the facts of origin, career, and style of life of each of the types of circle whose members compose the power elite.

— C. Wright Mills. 1956. The Power Elite. Oxford University Press

President John F. Kennedy addresses the Prayer Breakfast in 1961. Wikimedia Commons.

A big question in political sociology is “what keeps leaders working together?” The drive to stay in public office and common business interests can encourage elites to cooperate, but politics is still messy. Different constituent groups and social movements demand that representatives support their interests, and the U.S. political system was originally designed to use this big, diverse set of factions to keep any single person or party from becoming too powerful.

Sociologists know that shared culture, or what Mills calls a “style of life,” is really important among elites. One of my favorite profiles of a style of life is Jeff Sharlet’s The Family, a look at how one religious fellowship has a big influence on the networks behind political power in the modern world. The book is a gripping case of embedded reporting that shows how this elite culture works. It also has a new documentary series:

When we talk about the religious right in politics, it is easy to jump to images of loud, pro-life protests and controversial speakers. What interests me about the Family is how the group has worked so hard to avoid this contentious approach. Instead, everything is geared toward simply getting newcomers to think of themselves as elites, bringing leaders together, and keeping them connected. A major theme in the first episode of the series is just how simple the theology is (“Jesus plus nothing”) and how quiet the group is, even drawing comparisons to the mafia.

Vipassana Meditation in Chiang Mai, Thailand. Source: Matteo, Flickr CC.

Sociologists see similar trends in other elite networks. In research on how mindfulness and meditation caught on in the corporate world, Jaime Kucinskas calls this “unobtrusive organizing.” Both the Family and the mindfulness movement show how leaders draw on core theological ideas in Christianity and Buddhism, but also modify those ideas to support their relationships in business and government. Rather than challenging those institutions, adapting and modifying these traditions creates new opportunities for elites to meet, mingle, and coordinate their work.

When we study politics and culture, it is easy to assume that core beliefs make people do things by giving them an agenda to follow. These cases are important because they show how that’s not always the point; sometimes core beliefs just shape how people do things in the halls of power.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramLicense Plate "NULL"

There was a DefCon talk by someone with the vanity plate "NULL." The California system assigned him every ticket recorded without a license plate: $12,000 in fines.
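
The root cause appears to be a classic sentinel-value collision: if a missing plate is stored as the literal string "NULL" rather than a true null, a vanity plate reading NULL matches every plateless citation. A hypothetical TypeScript illustration of the bug class:

function plateKey(plate: string | null): string {
    // a missing plate is recorded as the sentinel string "NULL"
    return plate === null ? "NULL" : plate;
}

// The sentinel collides with the legitimate vanity plate:
console.log(plateKey(null) === plateKey("NULL")); // true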

Although the initial $12,000-worth of fines were removed, the private company that administers the database didn't fix the issue and new NULL tickets are still showing up.

The unanswered question is: now that he has a way to get parking fines removed, can he park anywhere for free?

And this isn't the first time this sort of thing has happened. Wired has a roundup of people whose license plates read things like "NOPLATE," "NO TAG," and "XXXXXXX."

Worse Than FailureError'd: One Size Fits All

"Multi-platform AND multi-gender! Who knew SSDs could be so accomodating?" Felipe C. wrote.

 

"This is a progress indicator from a certain Australian "Enterprise" ERP vendor. I suspect their sales guys use it to claim that their software updates over 1000% faster than their competitors," Erin D. writes.

 

Bruce W. writes, "I guess LinkedIn wants me to know that I'm not as popular as I think."

 

"According to Icinga's Round Trip Average calculation, one of our servers must have been teleported about a quarter of the way to the center of the Milky Way. The good news is that I have negative packet loss on that route. Guess the packets got bored on the way," Mike T. writes.

 

"From undefined to invalid, this bankruptcy site has it all...or is it nothing?" Pascal writes.

 

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityBreach at Hy-Vee Supermarket Chain Tied to Sale of 5M+ Stolen Credit, Debit Cards

On Tuesday of this week, one of the more popular underground stores peddling credit and debit card data stolen from hacked merchants announced a blockbuster new sale: More than 5.3 million new accounts belonging to cardholders from 35 U.S. states. Multiple sources now tell KrebsOnSecurity that the card data came from compromised gas pumps, coffee shops and restaurants operated by Hy-Vee, an Iowa-based company that operates a chain of more than 245 supermarkets throughout the Midwestern United States.

Hy-Vee, based in Des Moines, announced on Aug. 14 it was investigating a data breach involving payment processing systems that handle transactions at some Hy-Vee fuel pumps, drive-thru coffee shops and restaurants.

The restaurants affected include Hy-Vee Market Grilles, Market Grille Expresses and Wahlburgers locations that the company owns and operates. Hy-Vee said it was too early to tell when the breach initially began or for how long intruders were inside their payment systems.

But typically, such breaches occur when cybercriminals manage to remotely install malicious software on a retailer’s card-processing systems. This type of point-of-sale malware is capable of copying data stored on a credit or debit card’s magnetic stripe when those cards are swiped at compromised payment terminals. This data can then be used to create counterfeit copies of the cards.

Hy-Vee said it believes the breach does not affect payment card terminals used at its grocery store checkout lanes, pharmacies or convenience stores, as these systems rely on a security technology designed to defeat card-skimming malware.

“These locations have different point-of-sale systems than those located at our grocery stores, drugstores and inside our convenience stores, which utilize point-to-point encryption technology for processing payment card transactions,” Hy-Vee said. “This encryption technology protects card data by making it unreadable. Based on our preliminary investigation, we believe payment card transactions that were swiped or inserted on these systems, which are utilized at our front-end checkout lanes, pharmacies, customer service counters, wine & spirits locations, floral departments, clinics and all other food service areas, as well as transactions processed through Aisles Online, are not involved.”

According to two sources who asked not to be identified for this story — including one at a major U.S. financial institution — the card data stolen from Hy-Vee is now being sold under the code name “Solar Energy,” at the infamous Joker’s Stash carding bazaar.

An ad at the Joker’s Stash carding site for “Solar Energy,” a batch of more than 5 million credit and debit cards sources say was stolen from customers of supermarket chain Hy-Vee.

Hy-Vee said the company’s investigation is continuing.

“We are aware of reports from payment processors and the card networks of payment data being offered for sale and are working with the payment card networks so that they can identify the cards and work with issuing banks to initiate heightened monitoring on accounts,” Hy-Vee spokesperson Tina Pothoff said.

The card account records apparently stolen from Hy-Vee, known as “dumps,” are being sold by Joker’s Stash for prices ranging from $17 to $35 apiece. Buyers typically receive a text file that includes all of their dumps. Those individual dump records — when encoded onto a new magnetic stripe on virtually anything the size of a credit card — can be used to purchase merchandise in big box stores.

As noted in previous stories here, the organized cyberthieves involved in stealing card data from main street merchants have gradually moved down the food chain from big box retailers like Target and Home Depot to smaller but far more plentiful and probably less secure merchants (either by choice or because the larger stores became a harder target).

It’s really not worth spending time worrying about where your card number may have been breached, since it’s almost always impossible to say for sure and because it’s common for the same card to be breached at multiple establishments during the same time period.

Just remember that while consumers are not liable for fraudulent charges, it may still fall to you the consumer to spot and report any suspicious charges. So keep a close eye on your statements, and consider signing up for text message notifications of new charges if your card issuer offers this service. Most of these services also can be set to alert you if you’re about to miss an upcoming payment, so they can also be handy for avoiding late fees and other costly charges.

Rondam RamblingsFedex: three months and counting

It has now been three months since we shipped a package via Fedex that turned out to be undeliverable (we sent it signature-required, and the recipient, unbeknownst to us, had moved).  We expected that in a situation like that, the package would simply be returned to us, but it wasn't because we paid cash for the original shipment and (again, unbeknownst to us) the shipping cost doesn't include

CryptogramModifying a Tesla to Become a Surveillance Platform

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras­ -- the same dash and rearview cameras providing a 360-degree view used for Tesla's Autopilot and Sentry features­ -- into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla's display and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.

Worse Than FailureKeeping Busy

Djungarian Hamster Pearl White run wheel

In 1979, Argle was 18, happy to be working at a large firm specializing in aerospace equipment. There was plenty of opportunity to work with interesting technology and learn from dozens of more senior programmers—well, usually. But then came the day when Argle's boss summoned him to his cube for something rather different.

"This is a listing of the code we had prior to the last review," the boss said, pointing to a stack of printed Fortran code that was at least 6 inches thick. "This is what we have now." He gestured to a second printout that was slightly thicker. "I need you to read through this code and, in the old code, mark lines with 'WAS' where there was a change and 'IS' in the new listing to indicate what it was changed to."

Argle frowned at the daunting paper mountains. "I'm sorry, but, why do you need this exactly?"

"It's for FAA compliance," the boss said, waving his hand toward his cubicle's threshold. "Thanks!"

Weighed down with piles of code, Argle returned to his cube with a similarly sinking heart. At this place and time, he'd never even heard of UNIX, and his coworkers weren't likely to know anything about it, either. Their development computer had a TMS9900 CPU, the same one in the TI-99 home computer, and it ran its own proprietary OS from Texas Instruments. There was no diff command or anything like it. The closest analog was a file comparison program, but it only reported whether two files were identical or not.

Back at his cube, Argle stared at the printouts for a while, dreading the weeks of manual, mind-numbing dullness that loomed ahead of him. There was no way he'd avoid errors, no matter how careful he was. There was no way he'd complete this to every stakeholder's satisfaction. He was staring imminent failure in the face.

Was there a better way? If there weren't already a program for this kind of thing, could he write his own?

Argle had never heard of the Hunt–McIlroy algorithm, but he thought he might be able to do line comparisons between files, then hunt ahead in one file or the other until he re-synched again. He asked one of the senior programmers for the files' source code. Within one afternoon of tinkering, he'd written his very own diff program.
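
The idea, comparing line by line and hunting ahead in one file or the other until the streams line up again, looks something like this minimal sketch (TypeScript, purely illustrative; Argle's version was written decades earlier for a TMS9900):

function markChanges(oldLines: string[], newLines: string[], lookahead = 100) {
    const was: number[] = []; // line numbers to mark "WAS" in the old listing
    const is_: number[] = []; // line numbers to mark "IS" in the new listing
    let i = 0, j = 0;
    while (i < oldLines.length && j < newLines.length) {
        if (oldLines[i] === newLines[j]) { i++; j++; continue; }
        let resynced = false;
        for (let k = 1; k < lookahead && !resynced; k++) {
            if (i + k < oldLines.length && oldLines[i + k] === newLines[j]) {
                for (let m = i; m < i + k; m++) was.push(m); // lines removed from the old file
                i += k;
                resynced = true;
            } else if (j + k < newLines.length && oldLines[i] === newLines[j + k]) {
                for (let m = j; m < j + k; m++) is_.push(m); // lines added in the new file
                j += k;
                resynced = true;
            }
        }
        if (!resynced) { was.push(i); is_.push(j); i++; j++; } // no resync found: treat as a changed line
    }
    return { was, is: is_ };
}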

The next morning, Argle handed his boss 2 newly printed stacks of code, with "WAS -->" and "IS -->" printed neatly on all the relevant lines. As the boss began flipping through the pages, Argle smiled proudly, anticipating the pleasant surprise and glowing praise to come.

Quite to Argle's surprise, his boss fixed him with a red-faced, accusing glare. "Who said you could write a program?!"

Argle was speechless at first. "I was hired to program!" he finally blurted. "Besides, that's totally error-free! I know I couldn't have gotten everything correct by hand!"

The boss sighed. "I suppose not."

It wasn't until Argle was much older that his boss' reaction made any sense to him. The boss' goal hadn't been "compliance." He simply hadn't had anything constructive for Argle to do, and had thought he'd come up with a brilliant way to keep the new young hire busy and out of his hair for a few weeks.

Writer's note: Through the ages and across time, absolutely nothing has changed. In 2001, I worked at a (paid, thankfully) corporate internship where I was asked to manually browse through a huge network share and write down what every folder contained, all the way through thousands of files and sub-folders. Fortunately, I had heard of the dir command in DOS. Within 30 minutes, I proudly handed my boss the printout of the output—to his bemusement and dismay. —Ellis

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Cory DoctorowMy MMT Podcast appearance, part 2: monopoly, money, and the power of narrative


Last week, the Modern Monetary Theory Podcast ran part 1 of my interview with co-host Christian Reilly; they’ve just published the second and final half of our chat (MP3), where we talk about the link between corruption and monopoly, how to pitch monetary theory to people who want to abolish money altogether, and how stories shape the future.

If you’re new to MMT, here’s my brief summary of its underlying premises: “Governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.”

Cory DoctorowWhere to catch me at Burning Man!

This is my last day at my desk until Labor Day: tomorrow, we’re driving to Burning Man to get our annual dirtrave fix! If you’re heading to the playa, here are three places and times you can find me:

Seating is always limited at these things (our living room is big, but it’s not that big!) so come by early!

I hope you have an amazing burn — we always do! This year I’m taking a break from working in the cafe pulling shots in favor of my first-ever Greeter shift, which I’m really looking forward to.

While we’re on the subject, there’s still time to sign up for the Liminal Labs Assassination Game!

Google AdsenseAdditional safeguards to protect the quality of our ad network

Supporting a healthy ads ecosystem that works for publishers, advertisers, and users continues to be a top priority in our effort to sustain a free and open web. As the ecosystem evolves, our ad systems and defenses must adapt as well. Today, we’d like to highlight some of our efforts to protect the quality of our ad network, and the benefits to our publishers and the advertising ecosystem. 


Last year, we introduced a site verification process in AdSense to provide additional safeguards before a publisher can serve ads. This feature allows us to provide more direct feedback to our publishers on the eligibility of their site, while allowing us to communicate issues sooner and lessen the likelihood of future violations. As an added benefit, confirming which websites a publisher intends to monetize allows us to reduce potential misuse of a publisher's ad code, such as when a bad actor tries to claim a website as their own, or when they use a legitimate publisher's ad code to serve ads on bad content in an attempt to demonetize the good website — each day, we now block more than 120 million ad requests with this feature. 


This year, we’re enhancing our defenses even more by improving the systems that identify potentially invalid traffic or high risk activities before ads are served. These defenses allow us to limit ad serving as needed to further protect our advertisers and users, while maximizing revenue opportunities for legitimate publishers. While most publishers will not notice any changes to their ad traffic, we are working on improving the experience for those that may be impacted, by providing more transparency around these actions. Publishers on AdSense and AdMob that are affected will soon be notified of these ad traffic restrictions directly in their Policy Center. This will allow them to understand why they may be experiencing reduced ad serving, and what steps they can take to resolve any issues and continue partnering with us.


We’re excited for what’s to come, and will continue to roll out improvements to these systems with all of our users in mind. Look out for future updates on our ongoing efforts to promote and sustain a healthy ads ecosystem.


Posted by: Andres Ferrate - Chief Advocate for Ad Traffic Quality

Krebs on SecurityForced Password Reset? Check Your Assumptions

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is Facebook, Netflix and a number of big-name companies are regularly combing through huge data leak troves for credentials that match those of their customers, and then forcing a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
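
Put together, the check Moisant describes amounts to something like the following sketch (TypeScript on Node; the record shapes and scrypt parameters are assumptions for illustration, not Glassdoor's actual code):

import { scryptSync, timingSafeEqual } from "crypto";

interface StoredCredential {
    salt: Buffer; // per-user random salt, stored alongside the hash
    hash: Buffer; // scrypt output stored at signup or last password change
}

// users: our own account table; leaked: plaintext credentials from a breach dump
function findReusedPasswords(
    users: Map<string, StoredCredential>,
    leaked: Map<string, string>
): string[] {
    const needReset: string[] = [];
    for (const [email, plaintext] of leaked) {
        const record = users.get(email);
        if (!record) continue; // the leaked account isn't one of ours
        // run the leaked plaintext through the same salt + scrypt used at login time
        const candidate = scryptSync(plaintext, record.salt, record.hash.length);
        if (timingSafeEqual(candidate, record.hash)) {
            needReset.push(email); // same password: force a reset
        }
    }
    return needReset;
}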

CHECK YOUR ASSUMPTIONS

You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determiner of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
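
As a rough illustration: a truly random eight-character password over the 95 printable ASCII characters gives 95^8, or roughly 6.6 × 10^15, possibilities, while a five-word passphrase drawn from the standard 7,776-word Diceware list gives 7,776^5, or roughly 2.8 × 10^19. That is more than four thousand times as many, and every additional word multiplies the search space by another 7,776.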

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. Twofactorauth.org has a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security related it’s important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can’t be bothered to use password managers  — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method they will simply revert to reusing or recycling passwords.

CryptogramGoogle Finds 20-Year-Old Microsoft Windows Vulnerability

There's no indication that this vulnerability was ever used in the wild, but the code it was discovered in -- Microsoft's Text Services Framework -- has been around since Windows XP.

,

CryptogramSurveillance as a Condition for Humanitarian Aid

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.

Worse Than FailureCodeSOD: I'm Sooooooo Random, LOL

There are some blocks of code that require a preamble, and an explanation of the code and its flow. Often you need to provide some broader context.

Sometimes, you get some code like Wolf found, which needs no explanation:

export function generateRandomId(): string {
    counter++;
    return 'id' + counter;
}

I mean, I guess that's slightly better than this solution. Wolf found this because some code downstream was expecting random, unique IDs, and wasn't getting them.
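
For the record, producing IDs that actually are random and unique takes no more code than the original. One possible fix, assuming a runtime where Node's crypto.randomUUID is available:

import { randomUUID } from "crypto";

export function generateRandomId(): string {
    // a v4 UUID is random and, for practical purposes, collision-free
    return "id-" + randomUUID();
}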

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Cory DoctorowPodcast: A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition

In my latest podcast (MP3), I read my essay “A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition”, published today on EFF’s Deeplinks; it’s the latest in my ongoing series of case-studies of “adversarial interoperability,” where new services unseated the dominant companies by finding ways to plug into existing products against those products’ manufacturers. This week’s installment recounts the history of cable TV, and explains how the legal system in place when cable was born was subsequently extinguished (with the help of the cable companies who benefitted from it!) meaning that no one can do to cable what cable once did to broadcasters.

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own “community antenna television” networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get cable TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony’s Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were “capable of substantial noninfringing uses.”

MP3

Krebs on SecurityThe Rise of “Bulletproof” Residential Networks

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

The WHOIS records for one of several networks associated with Residential Networking Solutions LLC.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label  “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet  — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.

WHAT IS RESNET?

The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

The Hackforums user “Profitvolt” advertising residential proxies.

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker‘s recent $100 million investment in goat.com, the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.

A BULLETPROOF PLAN?

In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

Hackforums user Profitvolt says he plans to build his own “bulletproof” hosting network catering to fellow forum users who might want to rent his services for a variety of dodgy activities.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

Resnet also peers with or partners with several other interesting organizations, including:

- residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), which advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

- Laksh Cybersecurity and Defense LLC, which maintains Hexproxy[.]com, another residential proxy service that largely courts customers involved in shoe botting.

- Several chunks of IP space from a Russian provider variously known by the names “SERVERSGET” and “Men Danil Valentinovich,” which has been associated with numerous instances of hijacking vast swaths of IP addresses from other organizations quite recently.

Some of Profitvolt’s discussion threads on Hackforums.

WHO IS RESNET?

Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup via Domaintools.com on “Joshua Powder” turns up almost 60 domain names — most of them tied to the email address joshua.powder@gmail.com. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The email address josh@tthack.com was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those to mafiacloud@gmail.com and to the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.

‘A GLORIFIED SOLUTIONS PROVIDER’

Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.

EPILOGUE

It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whomever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.

CryptogramInfluence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there have been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too­ -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and cooperation should be highlighted and encouraged when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate and gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. The releases of stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, and of documents from Emmanuel Macron's campaign in France, are both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media need to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious. That said, since one major goal is to convince people that nothing can be trusted, rumors of involvement can be beneficial to the attacker. Flat denial was Russia's tactic during the 2016 US presidential election; it encouraged rumors of its involvement during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Deterring influence operations will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Worse Than FailureLowest Bidder Squared


Initech was in dire straits. The website was dog slow, and the budget had been exceeded by a factor of five already trying to fix it. Korbin, today's submitter, was brought in to help in exchange for decent pay and an office in their facility.

He showed up only to find a boxed-up computer and a brand new flat-packed desk, also still in the box. The majority of the space was a video-recording studio that saw maybe 4-6 hours of use a week. After setting up his office, Korbin spent the next day and a half finding his way around the completely undocumented C# code. The third day, there was a carpenter in the studio area. Inexplicably, said carpenter decided he needed to contact-glue carpet to a set of huge risers ... indoors. At least a gallon of contact cement was involved. In minutes, Korbin got a raging headache, and he was essentially gassed out of the building for the rest of the day. Things were not off to a good start.

Upon asking around, Korbin quickly determined that the contractors originally responsible for coding the website had underbid the project by half, then subcontracted the whole thing out to a team in India to do the work on the cheap. The India team had then done the very same thing, subcontracting it out to the most cut-rate individuals they could find. Everything had been written in triplicate for some reason, making it impossible to determine what was actually powering the website and what was dead code. Furthermore, while this was a database-oriented site, there were no stored procedures, and none of the (sub)subcontractors seemed to understand how to use a JOIN command.

In an effort to tease apart what code was actually needed, Korbin turned on profiling. Only ... it was already on in the test version of the site. With a sudden ominous hunch, he checked the live site—and sure enough, profiling was running in production as well. He shut it off, and instantly, the whole site became more responsive.

The next fix was also pretty simple. The site had a bad habit of asking for information it already had, over and over, without any JOINs. Reducing the frequency of database hits improved performance again, bringing it to within an order of magnitude of what one might expect from a website.
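
To make that concrete, here is a minimal sketch of the kind of rewrite involved. The story doesn't show the actual schema, so the table and column names below are hypothetical; the point is collapsing several round trips into a single query with JOINs.

    -- Hypothetical "before": separate queries, each its own round trip,
    -- re-fetching data the database could return all at once.
    --   SELECT Name  FROM Members  WHERE MemberId = @MemberId;
    --   SELECT Title FROM Orders   WHERE MemberId = @MemberId;
    --   SELECT Total FROM Payments WHERE OrderId  = @OrderId;

    -- "After": one query, one round trip.
    SELECT m.Name, o.Title, p.Total
    FROM Members AS m
    JOIN Orders AS o ON o.MemberId = m.MemberId
    JOIN Payments AS p ON p.OrderId = o.OrderId
    WHERE m.MemberId = @MemberId;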

While all this was going on, the leaderboard page had begun timing out. Sure enough, it was an N-squared solution: open the database, fetch a record, close the database, repeat, then compare the two records, put them in order, and begin again. With 500 members, it was doing 250,000 passes each time someone hit the page. Korbin scrapped the whole thing in favor of the site's first stored procedure, then cached the results so the query ran only once a day.
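
The story doesn't reproduce Korbin's procedure, but the idea looks something like the sketch below (again with hypothetical names): let the database sort the whole set once, an O(n log n) operation at worst, instead of fetching and comparing records pairwise from application code.

    -- A minimal sketch, not the site's actual code: rank every member
    -- in one set-based query instead of 250,000 fetch-and-compare passes.
    CREATE PROCEDURE GetLeaderboard
    AS
    BEGIN
        SELECT MemberName,
               Score,
               RANK() OVER (ORDER BY Score DESC) AS Position
        FROM Members
        ORDER BY Score DESC;
    END;

With the result cached and refreshed once a day, as the story describes, even this single query runs only rarely.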

The weeks went on, and the site began to take shape, finally getting more or less back on track. Thanks to the botched rollout, however, many of the company's endorsements had vanished, and backers were pulling out. The president got on the phone with some VIP about Facebook—because as we all know, the solution to any company's problem is the solution to every company's problems.

"Facebook was written in PHP. He told me it was the best thing out there. So we're going to completely redo the website in PHP," the president confidently announced at the next all-hands meeting. "I want to hear how long everyone thinks this will take to get done."

The only developers left at that point were Korbin and a junior kid just out of college, with one contractor with some experience on the project.

"Two weeks. Maybe three," the kid replied.

They went around the table, and all the non-programmers chimed in with the 2-3 week assessment. Next to last came the experienced contractor. Korbin's jaw nearly dropped when he weighed in at 3-4 weeks.

"None of that is realistic!" Korbin proclaimed. "Even with the existing code as a road map, it's going to take 4-6 months to rewrite. And with the inevitable feature-creep and fixes for things found in testing, it is likely to take even longer."

Korbin was told the next day he could pick up his final check. Seven months later, he ran into the junior kid again, and asked how the rewrite went.

"It's still ongoing," he admitted.


,

CryptogramFriday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

,

Valerie AuroraGoth fashion tips for Ehlers-Danlos Syndrome

A woman wearing a dramatic black hooded jacket typing on a laptop
Skingraft hoodie, INC shirt, Fisherman’s Wharf fingerless gloves

My ideal style could perhaps be best described as “goth chic”—a lot of structured black somewhere on the border between couture and business casual—but because I have Ehlers-Danlos Syndrome, I more often end up wearing “sport goth”: a lot of stretchy black layers in washable fabrics with flat shoes. With great effort, I’ve nudged my style back towards “goth chic,” at least on good days. Enough people have asked me about my gear that I figured I’d share what I’ve learned with other EDS goths (or people who just like being comfortable and also wearing a lot of black).

Here are the constraints I’m operating under:

  • Flat shoes with thin soles to prevent ankle sprains and foot and back pain
  • Stretchy/soft shoes without pressure points to prevent blisters on soft skin
  • Can’t show sweat because POTS causes excessive sweating, also I walk a lot
  • Layers because POTS, walking, and San Francisco weather means I need to adjust my temperature a lot
  • Little or no weight on shoulders due to hypermobile shoulders
  • No tight clothes on abdomen due to pain (many EDS folks don’t have this problem but I do)
  • Soft fabric only touching skin due to sensitive easily irritated skin
  • Warm wrists to prevent hands from losing circulation due to Raynaud’s or POTS

On the other hand, I have a few things that make fashion easier for me. For starters, I can afford a four-figure annual clothing budget. I still shop a lot at thrift stores, discount stores like Ross, or discount versions of more expensive stores like Nordstrom Rack but I can afford a few expensive pieces at full price. Many of the items on this page can be found used on Poshmark, eBay, and other online used clothing marketplaces. I also recommend doing the math for “cost per wear” to figure out if you would save money if you wore a more expensive but more durable piece for a longer period of time. I usually keep clothing and shoes for several years and repair as necessary.

I currently fit within the “standard” size ranges of most clothing and shoe brands, but many of the brands I recommend here have a wider range of sizes. I’ve included the size range where relevant.

Finally, as a cis woman with an extremely femme body type, I can wear a wide range of masculine and feminine styles without being hassled in public for being gender-nonconforming (I still get hassled in public for being a woman, yay). Most of the links here are to women’s styles, but many brands also have men’s styles. (None of these brands have unisex styles that I know of.)

Shoes and socks

Shoes are my favorite part of fashion! I spend much more money on shoes than I used to because more expensive shoes are less likely to give me blisters. If I resole/reheel/polish them regularly, they can last for several years instead of a few months, so they cost the same per wear. Functional shoes are notoriously hard for EDS people to find, so the less often I have to search for new shoes, the better. I nearly always wear my shoes until they can no longer be repaired. If this post does nothing other than convince you that it is economical and wise to spend more money on shoes, I have succeeded.

Woman wearing two coats and holding two rolling bags
Via Spiga trench, Mossimo hoodie, VANELi flats, Aimee Kestenberg rolling laptop bag, Travelpro rolling bag

Smartwool black socks – My poor tender feet need cushiony socks that don’t sag or rub. Smartwool socks are expensive but last forever, and you can get them in 100% black so that you can wash them with your black clothes without covering them in little white balls. I wear mostly the men’s Walk Light Crew and City Slicker, with occasional women’s Hide and Seek No Show.

Skechers Cleo flats – These are a line of flats in a stretchy sweater-like material. The heel can be a little scratchy, but I sewed ribbon over the seam and it was fine. The BOBS line of Skechers is also extremely comfortable. Sizes 5 – 11.

VANELi flats – The sportier versions of these shoes are obscenely comfortable and also on the higher end of fashion. I wore my first pair until they had holes in the soles, and then I kept wearing them another year. I’m currently wearing out this pair. You can get them majorly discounted at DSW and similar places. Sizes 5 – 12.

Stuart Weitzman 5050 boots – These over-the-knee boots are the crown jewel of any EDS goth wardrobe. First, they are almost totally flat and roomy in the toe. Second, the elastic in the boot shaft acts like compression socks, helping with POTS. Third, they look amazing. Charlize Theron wore them in “Atomic Blonde” while performing martial arts. Angelina Jolie wears these in real life. The downside is the price, but there is absolutely no reason to pay full price. I regularly find them in Saks Off 5th for 30% off. Also, they last forever: with reheeling, my first pair lasted around three years of heavy use. Stuart Weitzman makes several other flat boots with elastic shafts which are also worth checking out, but they have been making the 5050 for around 25 years so this style should always be available. Sizes 4 – 12, runs about a half size large.

Pants/leggings/skirts

A woman wearing black leggings and a black sweater
Patty Boutik sweater, Demobaza leggings, VANELi flats

Satina high-waisted leggings – I wear these extremely cheap leggings probably five days a week under skirts or dresses. Available in two sizes, S – L and XL – XXXL. If you can wear tight clothing, you might want to check out the Spanx line of leggings (e.g. the Moto Legging) which I would totally wear if I could.

Toad & Co. Women’s Chaka skirt – I wear this skirt probably three days a week. Ridiculously comfortable and only middling expensive. Sizes XS – L.

NYDJ jeans/leggings – These are pushing it for me in terms of tightness, but I can wear them if I’m standing or walking most of the day. Expensive, but they look professional and last forever. Sizes 00 – 28, including petites, and they run at least a size large.

Demobaza leggings – The leggings made mostly of stretch material are amazingly comfortable, but also obscenely expensive. They also last forever. Sizes XS – L.

Tops

Patty Boutik – This strange little label makes comfortable tops with long long sleeves and long long bodies, and it keeps making the same styles for years. Unfortunately, they tend to sell out of the solid black versions of my favorite tops on a regular basis. I order two or three of my favorite styles whenever they are in stock as they are reasonably cheap. I’ve been wearing the 3/4 sleeve boat neck shirt at least once a week for about 5 years now. Sizes XS – XL, tend to run a size small.

14th and Union – This label makes very simple pieces out of the most comfortable fabrics I’ve ever worn for not very much money. I wear this turtleneck long sleeve tee about once a week. I also like their skirts. Sizes XS to XL, standard and petite.

Macy’s INC – This label is a reliable source of stretchy black clothing at Macy’s prices. It often edges towards club wear but keeps the simplicity I prefer.

Coats

Mossimo hoodie – Ugh, I love this thing. It’s the perfect cheap fashion staple. I often wear it underneath other coats. Not sure about sizes since it is only available on resale sites.

Skingraft Royal Hoodie – A vastly more expensive version of the black hoodie, but still comfortable, stretchy, and washable. And oh so dramatic. Sizes XS – L.

3/4 length hooded black trench coat – Really any brand will do, but I’ve mostly recently worn out a Calvin Klein and am currently wearing a Via Spiga.

Accessories

A woman wearing all black with a fanny pack
Mossimo hoodie, Toad & Co. skirt, T Tahari fanny pack, Satina leggings, VANELi flats

Fingerless gloves – The cheaper, the better! I buy these from the tourist shops at Fisherman’s Wharf in San Francisco for under $10. I am considering these gloves from Demobaza.

Medline folding cane – Another cheap fashion staple for the EDS goth! Sturdy, adjustable, folding black cane with clean sleek lines.

T Tahari Logo Fanny Pack – I stopped being able to carry a purse right about the time fanny packs came back into style! Ross currently has an entire fanny pack section, most of which are under $13. If I’m using a backpack or the rolling laptop bag, I usually keep my wallet, phone, keys, and lipstick in the fanny pack for easy access.

Duluth Child’s Pack, Envelope style – A bit expensive, but another simple fashion staple. I used to carry the larger roll-top canvas backpack until I realized I was packing it full of stuff and aggravating my shoulders. The child’s pack barely fits a small laptop and a few accessories.

Aimee Kestenberg rolling laptop bag – For the days when I need more than I can fit in my tiny backpack and fanny pack. It has a strap to fit on to the handle of a rolling luggage bag, which is great for air travel.

Apple Watch – The easiest way to diagnose POTS! (Look up “poor man’s tilt table test.”) A great way to track your heart rate and your exercise, two things I am very focused on as someone with EDS. When your first watch band wears out, go ahead and buy a random cheap one off the Internet.

Those are my EDS goth fashion tips! If you have more, please share them in the comments.

,

TEDStages of Life: Notes from Session 5 of TEDSummit 2019

Yilian Cañizares rocks the TED stage with a jubilant performance of her signature blend of classic jazz and Cuban rhythms. She performs at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The penultimate session of TEDSummit 2019 had a bit of everything — new thoughts on aging, loneliness and happiness as well as breakthrough science, music and even a bit of comedy.

The event: TEDSummit 2019, Session 5: Stages of Life, hosted by Kelly Stoetzel and Alex Moura

When and where: Wednesday, July 24, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Nicola Sturgeon, Sonia Livingstone, Howard Taylor, Sara-Jane Dunn, Fay Bound Alberti, Carl Honoré

Opening: Raconteur Mackenzie Dalrymple telling the story of the Goodman of Ballengeich

Music: Yilian Cañizares and her band, rocking the TED stage with a jubilant performance that blends classic jazz and Cuban rhythms

Comedy: Amidst a head-spinning program of big (and often heavy) ideas, a welcome break from comedian Omid Djalili, who lightens the session with a little self-deprecation and a few barbed cultural observations

The talks in brief:

“In the world we live in today, with growing divides and inequalities, with disaffection and alienation, it is more important than ever that we … promote a vision of society that has well-being, not just wealth, at its very heart,” says Nicola Sturgeon, First Minister of Scotland. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Nicola Sturgeon, First Minister of Scotland

Big idea: It’s time to challenge the monolithic importance of GDP as a quality-of-life metric — and paint a broader picture that also encompasses well-being.

How? In 2018, Scotland, Iceland and New Zealand established the Wellbeing Economy Governments group to challenge the supremacy of GDP. The leaders of these countries — who are, incidentally, all women — believe policies that promote happiness (including equal pay, childcare and paternity rights) could help decrease alienation in their citizens and, in turn, build resolve to confront global challenges like inequality and climate change.

Quote of the talk: “Growth in GDP should not be pursued at any and all cost … The goal of economic policy should be collective well-being: how happy and healthy a population is, not just how wealthy a population is.”


Sonia Livingstone, social psychologist

Big idea: Parents often view technology as either a beacon of hope or a developmental poison, but the biggest influence on their children’s life choices is how they help them navigate this unavoidable digital landscape. Society as a whole can positively impact these efforts.

How? Sonia Livingstone’s own childhood was relatively analog, but her research has been focused on how families embrace new technology today. Much has changed in the past few decades — whether it’s intensified educational pressures, migration, or rising inequality — yet it’s the digital revolution that remains the focus of our collective apprehension. Livingstone’s research suggests that policing screen time isn’t the answer to raising a well-rounded child, especially at a time when parents are trying to live more democratically with their children by sharing decision-making around activities like gaming and exploring the internet. Leaders and institutions alike can support a positive digital future for children by partnering with parents to guide activities within and outside of the home. Instead of criticizing families for their digital activities, Livingstone thinks we should identify what real-world challenges they’re facing, what options are available to them and how we can support them better.

Quote of the talk: “Screen time advice is causing conflict in the family, and there’s no solid evidence that more screen time increases childhood problems — especially compared with socio-economic or psychological factors. Restricting children breeds resistance, while guiding them builds judgment.”


Howard Taylor, child safety advocate

Big idea: Violence against children is an endemic issue worldwide, with rates of reported incidence increasing in some countries. We are at a historical moment that presents us with a unique opportunity to end the epidemic, and some countries are already leading the way.

How? Howard Taylor draws attention to Sweden and Uganda, two very different countries that share an explicit commitment to ending violence against children. Through high-level political buy-in, data-driven strategy and tactical legislative initiatives, the two countries have already made progress. These solutions and others are all part of INSPIRE, a set of strategies created by an alliance of global organizations as a roadmap to eliminating the problem. If we put in the work, Taylor says, a new normal will emerge: generations whose paths in life will be shaped by what they do — not what was done to them.

Quote of the talk: “What would it really mean if we actually end violence against children? Multiply the social, cultural and economic benefits of this change by every family, every community, village, town, city and country, and suddenly you have a new normal emerging. A generation would grow up without experiencing violence.”


“The first half of this century is going to be transformed by a new software revolution: the living software revolution. Its impact will be so enormous that it will make the first software revolution pale in comparison,” says computational biologist Sara-Jane Dunn. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sara-Jane Dunn, computational biologist

Big idea: In the 20th century, computer scientists inscribed machine-readable instructions on tiny silicon chips, completely revolutionizing our lives and workplaces. Today, a “living software” revolution centered around organisms built from programmable cells is poised to transform medicine, agriculture and energy in ways we can scarcely predict.

How? By studying how embryonic stem cells “decide” to become neurons, lung cells, bone cells or anything else in the body, Sara-Jane Dunn seeks to uncover the biological code that dictates cellular behavior. Using mathematical models, Dunn and her team analyze the expected function of a cellular system to determine the “genetic program” that leads to that result. While they’re still a long way from compiling living software, they’ve taken a crucial early step.

Quote of the talk: “We are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter into the era of an operating system that runs living software.”


Fay Bound Alberti, cultural historian

Big idea: We need to recognize the complexity of loneliness and its ever-transforming history. It’s not just an individual and psychological problem — it’s a social and physical one.

Why? Loneliness is a modern-day epidemic, with a history that’s often recognized solely as a product of the mind. Fay Bound Alberti believes that interpretation is limiting. “We’ve neglected [loneliness’s] physical effects — and loneliness is physical,” she says. She points to how crucial touch, smell, sound, human interaction and even nostalgic memories of sensory experiences are to coping with loneliness, making people feel important and seen, and helping to produce endorphins. By reframing our perspective on this feeling of isolation, we can better understand how to heal it.

Quote of the talk: “I am suggesting we need to turn to the physical body, we need to understand the physical and emotional experiences of loneliness to be able to tackle a modern epidemic. After all, it’s through our bodies, our sensory bodies, that we engage with the world.”

Fun fact: “Before 1800 there was no word for loneliness in the English language. There was something called ‘oneliness’ and there were ‘lonely places,’ but both simply meant the state of being alone. There was no corresponding emotional lack and no modern state of loneliness.”


“Whatever age you are: own it — and then go out there and show the world what you can do!” says Carl Honoré. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carl Honoré, writer, thinker and activist

Big idea: Stop the lazy thinking around age and the “cult of youth” — it’s not all downhill from 40.

How? We need to debunk the myths and stereotypes surrounding age — beliefs like “older people can’t learn new things” and “creativity belongs to the young.” There are plenty of trailblazers and changemakers who came into their own later in life, from artists and musicians to physicists and business leaders. Studies show that people who fear and feel bad about aging are more likely to suffer physical effects as if age is an actual affliction rather than just a number. The first step to getting past that is by creating new, more positive societal narratives. Honoré offers a set of simple solutions — the two most important being: check your language and own your age. Embrace aging as an adventure, a process of opening rather than closing doors. We need to feel better about aging in order to age better.

Quote of the talk: “Whatever age you are: own it — and then go out there and show the world what you can do!”

TEDWhat Brexit means for Scotland: A Q&A with First Minister Nicola Sturgeon

First Minister of Scotland Nicola Sturgeon spoke at TEDSummit on Wednesday in Edinburgh about her vision for making collective well-being the main aim of public policy and the economy. (Watch her full talk on TED.com.) That same morning, Boris Johnson assumed office as Prime Minister of the United Kingdom, the latest episode of the Brexit drama that has engulfed UK politics. During the 2016 referendum, Scotland voted against Brexit.

After her talk, Chris Anderson, the Head of TED, joined Sturgeon, who’s been vocally critical of Johnson, to ask a few questions about the current political landscape. Watch their exchange below.

,

Rondam RamblingsFedex: when it absolutely, positively has to get stuck in the system for over two months

I have seen some pretty serious corporate bureaucratic dysfunction over the years, but I think this one takes the cake: on May 23, we shipped a package via Fedex from California to Colorado.  The package required a signature.  It turned out that the person we sent it to had moved, and so was not able to sign for the package, and so it was not delivered. Now, the package has our return address on

,

TEDIt’s not about privacy — it’s about power: Carole Cadwalladr speaks at TEDSummit 2019

Three months after her landmark talk, Carole Cadwalladr is back at TED. In conversation with curator Bruno Giussani, Cadwalladr discusses the latest on her reporting on the Facebook-Cambridge Analytica scandal and what we still don’t know about the transatlantic links between Brexit and the 2016 US presidential election.

“Who has the information, who has the data about you, that is where power now lies,” Cadwalladr says.

Cadwalladr appears in The Great Hack, a documentary by Karim Amer and TED Prize winner Jehane Noujaim that explores how Cambridge Analytica has come to symbolize the dark side of social media. The documentary was screened for TEDSummit participants today. Watch it in select theaters and on Netflix starting July 24.

Learn more about how you can support Cadwalladr’s investigation into data, disinformation and democracy.

TEDBusiness Unusual: Notes from Session 4 of TEDSummit 2019

ELEW and Marcus Miller blend jazz improvisation with rock in a musical cocktail of “rock-jazz.” They perform at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

To keep pace with our ever-changing world, we need out-of-the-box ideas that are bigger and more imaginative than ever. The speakers and performers from this session explore these possibilities, challenging us to think harder about the notions we’ve come to accept.

The event: TEDSummit 2019, Session 4: Business Unusual, hosted by Whitney Pennington Rodgers and Cloe Shasha

When and where: Wednesday, July 24, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Margaret Heffernan, Bob Langert, Rose Mutiso, Mariana Mazzucato, Diego Prilusky

Music: A virtuosic violin performance by Min Kym, and a closing performance by ELEW featuring Marcus Miller, blending jazz improvisation with rock in a musical cocktail of “rock-jazz.”

The talks in brief:

“The more we let machines think for us, the less we can think for ourselves,” says Margaret Heffernan. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Margaret Heffernan, entrepreneur, former CEO and writer 

Big idea: The more we rely on technology to make us efficient, the fewer skills we have to confront the unexpected. That’s why we must start practicing “just-in-case” management — anticipating the events (climate catastrophes, epidemics, financial crises) that will almost certainly happen but are ambiguous in timing, scale and specifics. 

Why? In our complex, unpredictable world, changes can occur out of the blue and have outsize impacts. When governments, businesses and individuals prioritize efficiency above all else, it keeps them from responding quickly, effectively and creatively. That’s why we all need to focus on cultivating what Heffernan calls our “unpredictable, messy human skills.” These include exercising our social abilities to build strong relationships and coalitions; humility to admit we don’t have all the answers; imagination to dream up never-before-seen solutions; and bravery to keep experimenting.

Quote of the talk: “The harder, deeper truth is that the future is uncharted, that we can’t map it until we get there. But that’s OK because we have so much capacity for imagination — if we use it. We have deep talents for inventiveness and exploration — if we apply them. We are brave enough to invent things we’ve never seen before. Lose these skills and we are adrift. But hone and develop them, and we can make any future we choose.”


Bob Langert, sustainability expert and VP of sustainability at McDonald’s

Big idea: Adversaries can be your best allies.

How? Three simple steps: reach out, listen and learn. As a “corporate suit” (his words), Bob Langert collaborates with his company’s strongest critics to find business-friendly solutions for society. Instead of denying and pushing back, he tries to embrace their perspectives and suggestions. He encourages others in positions of power to do the same, driven by this mindset: assume the best intentions of your critics; focus on the truth, the science and facts; and be open and transparent in order to turn critics into allies. The worst-case scenario? You’ll become better, your organization will become better — and you might make some friends along the way.

Fun fact: After working with NGOs in the 1990s, McDonald’s reduced 300 million pounds of waste over 10 years.


“When we talk about providing energy for growth, it is not just about innovating the technology: it’s the slow and hard work of improving governance, institutions and a broader macro-environment,” says Rose Mutiso. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Rose Mutiso, energy scientist

Big Idea: In order to grow out of poverty, African countries need a steady supply of abundant and affordable electricity.

Why? Energy poverty, or the lack of access to electricity and other basic energy services, affects nearly two-thirds of Sub-Saharan Africa. As the region’s population continues to grow, we have the opportunity to build a new energy system — from scratch — to grow with it, says Rose Mutiso. It starts with naming the systemic holes that current solutions (solar, LED and battery technology) overlook: we don’t have a clear consensus on what energy poverty is; there’s too much reliance on quick fixes; and we’re misdirecting our climate change concerns. What we need, Mutiso says, is nuanced, large-scale solutions with a diverse range of energy sources. For instance, the region has significant hydroelectric potential, yet less than 10 percent of this potential is currently being utilized. If we work hard to find new solutions to our energy deficits now, everybody benefits.

Quote of the talk: “Countries cannot grow out of poverty without access to a steady supply of abundant, affordable and reliable energy to power these productive sectors — what I call energy for growth.”


Mariana Mazzucato, economist and policy influencer

Big idea: We’ve forgotten how to tell the difference between the value extractors in the C-suites and finance sectors and the value producers, the workers and taxpayers who actually fuel innovation and productivity. And recently we’ve neglected the importance of even questioning what the difference between the two is.

How? Economists must redefine and recognize true value creators, envisioning a system that rewards them just as much as CEOs, investors and bankers. We need to rethink how we value education, childcare and other “free” services — which don’t have a price but clearly contribute to sustaining our economies. We need to make sure that our entire society shares not only the risks but also the rewards.

Quote of the talk: “[During the bank bailouts] we didn’t hear the taxpayers bragging that they were value creators. But, obviously, having bailed out the biggest ‘value-creating’ productive companies, perhaps they should have.”


Diego Prilusky demos his immersive storytelling technology, bringing Grease to the TED stage. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Diego Prilusky, video pioneer

Big idea: Get ready for the next revolution in visual storytelling: volumetric video, which aims to do nothing less than recreate reality as a cinematic experience.

How? Movies have been around for more than 100 years, but we’re still making (and watching) them in basically the same way. Can movies exist beyond the flat screen? Yes, says Diego Prilusky, but we’ll first need to completely rethink how they’re made. With his team at Intel Studios, Prilusky is pioneering volumetric video, a data-intensive medium powered by hundreds of sensors that capture light and motion from every possible direction. The result is like being inside a movie, which you could explore from different perspectives (or even through a character’s own eyes). In a live tech demo, Prilusky takes us inside a reshoot of an iconic dance number from the 1978 hit Grease. As actors twirl and sing “You’re the One That I Want,” he positions and repositions his perspective on the scene — moving around, in front of and in between the performers. Film buffs can rest easy, though: the aim isn’t to replace traditional movies, he says, but to empower creators to tell stories in new ways, across multiple vantage points.

Quote of the talk: “We’re opening the gates for new possibilities of immersive storytelling.”

TEDThe Big Rethink: Notes from Session 3 of TEDSummit 2019

Marco Tempest and his quadcopters perform a mind-bending display that feels equal parts science and magic at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In an incredible session, speakers and performers laid out the biggest problems facing the world — from political and economic catastrophe to rising violence and deepfakes — and some new thinking on solutions.

The event: TEDSummit 2019, Session 3: The Big Rethink, hosted by Corey Hajim and Cyndi Stivers

When and where: Tuesday, July 23, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: George Monbiot, Nick Hanauer, Raghuram Rajan, Marco Tempest, Rachel Kleinfeld, Danielle Citron, Patrick Chappatte

Music: KT Tunstall sharing how she found her signature sound and playing her hits “Miniature Disasters,” “Black Horse and the Cherry Tree” and “Suddenly I See.”

The talks in brief:

“We are a society of altruists, but we are governed by psychopaths,” says George Monbiot. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

George Monbiot, investigative journalist and self-described “professional troublemaker”

Big idea: To get out of the political mess we’re in, we need a new story that captures the minds of people across fault lines.

Why? “Welcome to neoliberalism, the zombie doctrine that never seems to die,” says George Monbiot. We have been induced by politicians and economists into accepting an ideology of extreme competition and individualism, weakening the social bonds that make our lives worth living. And despite the 2008 financial crisis, which exposed the blatant shortcomings of neoliberalism, it still dominates our lives. Why? We haven’t yet produced a new story to replace it — a new narrative to help us make sense of the present and guide the future. So, Monbiot proposes his own: the “politics of belonging,” founded on the belief that most people are fundamentally altruistic, empathetic and socially minded. If we can tap into our fundamental urge to cooperate — namely, by building generous, inclusive communities around the shared sphere of the commons — we can build a better world. With a new story to light the way, we just might make it there.

Quote of the talk: “We are a society of altruists, but we are governed by psychopaths.”


Nick Hanauer, entrepreneur and venture capitalist

Big idea: Economics has ceased to be a rational science in the service of the “greater good” of society. It’s time to ditch neoliberal economics and create tools that address inequality and injustice.

How? Today, under the banner of unfettered growth through lower taxes, fewer regulations, and lower wages, economics has become a tool that enforces the growing gap between the rich and poor. Nick Hanauer thinks that we must recognize that our society functions not because it’s a ruthless competition between its economically fittest members but because cooperation between people and institutions produces innovation. Competition shouldn’t be between the powerful at the expense of everyone else but between ideas battling it out in a well-managed marketplace in which everyone can participate.

Quote of the talk: “Successful economies are not jungles, they’re gardens — which is to say that markets, like gardens, must be tended … Unconstrained by social norms or democratic regulation, markets inevitably create more problems than they solve.”


Raghuram Rajan shares his idea for “inclusive localism” — giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption — at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Raghuram Rajan, economist and former Governor of the Reserve Bank of India

Big idea: As markets grow and governments focus on solving economic problems from the top-down, small communities and neighborhoods are losing their voices — and their livelihoods. But if nations lack the tools to address local problems, it’s time to turn to grass-roots communities for solutions.

How? Raghuram Rajan believes that nations must exercise “inclusive localism”: giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption. As local leaders step forward, citizens become active, and communities receive needed resources from philanthropists and through economic incentives, neighborhoods will thrive and rebuild their social fabric.

Quote of the talk: “What we really need [are] bottom-up policies devised by the community itself to repair the links between the local community and the national — as well as thriving international — economies.”


Marco Tempest, cyber illusionist

Big idea: Illusions that set our imaginations soaring are created when magic and science come together.

Why? “Is it possible to create illusions in a world where technology makes anything possible?” asks techno-magician Marco Tempest, as he interacts with his group of small flying machines called quadcopters. The drones dance around him, reacting buoyantly to his gestures and making it easy to anthropomorphize or attribute personality traits. Tempest’s buzzing buddies swerve, hover and pause, moving in formation as he orchestrates them. His mind-bending display will have you asking yourself: Was that science or magic? Maybe it’s both.

Quote to remember: “Magicians are interesting, their illusions accomplish what technology cannot, but what happens when the technology of today seems almost magical?”


Rachel Kleinfeld, democracy advisor and author

Big idea: It’s possible to quell violence — in the wider world and in our own backyards — with democracy and a lot of political TLC.

How? Compassion-concentrated action. We need to dispel the idea that some people deserve violence because of where they live, the communities they’re a part of or their socio-economic background. Kleinfeld calls this particular, inequality-based vein of violence “privilege violence,” explaining how it evolves in stages and the ways we can eradicate it. By deprogramming how we view violence and its origins and victims, we can move forward and build safer, more secure societies.

Quote of the talk: “The most important thing we can do is abandon the notion that some lives are just worth less than others.”


“Not only do we believe fakes, we are starting to doubt the truth,” says Danielle Citron, revealing the threat deepfakes pose to the truth and democracy. She speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Danielle Citron, professor of law and deepfake scholar

Big idea: Deepfakes — machine learning technology used to manipulate or fabricate audio and video content — can cause significant harm to individuals and society. We need a comprehensive legislative and educational approach to the problem.

How? The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence against minorities or to defame politicians and journalists — is becoming ubiquitous. With tools being made more accessible and their products more realistic, what becomes of that key ingredient for democratic processes: the truth? As Danielle Citron points out, “Not only do we believe fakes, we are starting to doubt the truth.” The fix, she suggests, cannot be merely technological. Legislation worldwide must be tailored to fighting digital impersonations that invade privacy and ruin lives. Educational initiatives are needed to teach the media how to identify fakes, persuade law enforcement that the perpetrators are worth prosecuting and convince the public at large that the future of democracy really is at stake.

Quote of the talk: “Technologists expect that advances in AI will soon make it impossible to distinguish a fake video from a real one. How can truths emerge in a deepfake-ridden ‘marketplace of ideas’? Will we take the path of least resistance and just believe what we want to believe, truth be damned?”


“Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance,” says editorial cartoonist Patrick Chappatte. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Patrick Chappatte, editorial cartoonist and graphic journalist

Big idea: We need humor like we need the air we breathe. We shouldn’t risk compromising our freedom of speech by censoring ourselves in the name of political correctness.

How? Our social media-saturated world is both a blessing and a curse for political cartoonists like Patrick Chappatte, whose satirical work can go viral while also making them, and the publications they work for, a target. Be it a prison sentence, firing or the outright dissolution of cartoon features in newspapers, editorial cartoonists worldwide are increasingly penalized for their art. Chappatte emphasizes the importance of the art form in political discourse by guiding us through 20 years of editorial cartoons that are equal parts humorous and caustic. In an age where social media platforms often provide places for fury instead of debate, he suggests that traditional media shouldn’t shy away from these online kingdoms, and neither should we. Now is the time to resist preventative self-censorship; if we don’t, we risk waking up in a sanitized world without freedom of expression.

Quote of the talk: “Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance.”

TEDAnthropo Impact: Notes from Session 2 of TEDSummit 2019

Radio Science Orchestra performs the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Session 2 of TEDSummit 2019 is all about impact: the actions we can take to solve humanity’s toughest challenges. Speakers and performers explore the perils — from melting glaciers to air pollution — along with some potential fixes, like ocean-going seaweed farms and radical proposals for how we can build the future.

The event: TEDSummit 2019, Session 2: Anthropo Impact, hosted by David Biello and Chee Pearlman

When and where: Monday, July 22, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Tshering Tobgay, María Neira, Tim Flannery, Kelly Wanser, Anthony Veneziale, Nicola Jones, Marwa Al-Sabouni, Ma Yansong

Music: Radio Science Orchestra, performing the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing (and the 100th anniversary of the theremin’s invention)

… and something completely different: Improv maestro Anthony Veneziale, delivering a made-up-on-the-spot TED Talk based on a deck of slides he’d never seen and an audience-suggested topic: “the power of potatoes.” The result was … surprisingly profound.

The talks in brief:

Tshering Tobgay, politician, environmentalist and former Prime Minister of Bhutan

Big idea: We must save the Hindu Kush Himalayan glaciers from melting — or else face dire, irreversible consequences for one-fifth of the global population.

Why? The Hindu Kush Himalayan glaciers are the pulse of the planet: their rivers alone supply water to 1.6 billion people, and their melting would massively impact the 240 million people across eight countries within their reach. Think in extremes — more intense rains, flash floods and landslides, along with unimaginable destruction and millions of climate refugees. Tshering Tobgay telegraphs the future we’re headed towards unless we act fast, calling for a new intergovernmental agency: the Third Pole Council. This council would be tasked with monitoring the glaciers’ health, implementing policies to protect them and, by proxy, the billions who depend on them.

Fun fact: The Hindu Kush Himalayan glaciers are the world’s third-largest repository of ice (after the North and South poles). They’re known as the “Third Pole” and the “Water Towers of Asia.”


Air pollution isn’t just bad for the environment — it’s also bad for our brains, says María Neira. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

María Neira, public health leader

Big idea: Air pollution isn’t just bad for our lungs — it’s bad for our brains, too.

Why? Globally, poor air quality causes seven million premature deaths per year. And all this pollution isn’t just affecting our lungs, says María Neira. An emerging field of research is shedding light on the link between air pollution and the central nervous system. The fine particulate matter in air pollution travels through the bloodstream to our major organs, including the brain — which can slow neurological development in kids and speed up cognitive decline in adults. In short: air pollution is making us less intelligent. We all have a role to play in curbing it — and we can start by reducing traffic in cities, investing in clean energy and changing the way we consume.

Quote of the talk: “We need to exercise our rights and put pressure on politicians to make sure they will tackle the causes of air pollution. This is the first thing we need to do to protect our health and our beautiful brains.”


Tim Flannery, environmentalist, explorer and professor

Big idea: Seaweed could help us draw down atmospheric carbon and curb global warming.

How? You know the story: the blanket of CO2 above our heads is driving adverse climate changes and will continue to do so until we get it out of the air (a process known as “drawdown”). Tim Flannery thinks seaweed could help: it grows fast, is made of productive, photosynthetic tissue and, when sunk more than a kilometer deep into the ocean, can lock up carbon long-term. If we covered nine percent of the ocean surface in seaweed farms, for instance, we could sequester the same amount of CO2 we currently put into the atmosphere. There’s still a lot to figure out, Flannery notes — like how growing seaweed at scale on the ocean surface will affect biodiversity down below — but the drawdown potential is too great to allow uncertainty to stymie progress.

Fun fact: Seaweed is the most ancient multicellular life known, with more genetic diversity than all other multicellular life combined.


Could cloud brightening help curb global warming? Kelly Wanser speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Kelly Wanser, geoengineering expert and executive director of SilverLining

Big idea: The practice of cloud brightening — seeding clouds with sea salt or other particulates to reflect sunshine back into space — could partially offset global warming, giving us crucial time while we figure out game-changing, long-term solutions.

How? Starting in 2020, new global regulations will require ships to cut their sulfur emissions by 85 percent. This is a good thing, right? Not entirely, says Kelly Wanser. It turns out that when particulate emissions (like those from ships) mix with clouds, they make the clouds brighter — enabling them to reflect sunshine into space and temporarily cool our climate. (Think of it as ibuprofen for our fevered climate.) Wanser’s team and others are devising experiments to see if cloud brightening proves safe and effective; some scientists believe that increasing the atmosphere’s reflectivity by one or two percent could offset the two degrees Celsius of warming forecast for Earth. As with other climate interventions, there’s much yet to learn, but the potential benefits make the effort worth it.

An encouraging fact: The global community has rallied to pull off this kind of atmospheric intervention in the past, with the 1989 Montreal Protocol.


Nicola Jones, science journalist

Big idea: Noise in our oceans — from boat motors to seismic surveys — is an acute threat to underwater life. Unless we quiet down, we will irreparably damage marine ecosystems and may even drive some species to extinction.

How? We usually think of noise pollution as a problem in big cities on dry land. But ocean noise may be the culprit behind marine disruptions like whale strandings, fish kills and drops in plankton populations. Fortunately, compared to other environmental problems, it’s relatively quick and easy to dial down our noise levels and keep our oceans quiet. Better ship propeller design, speed limits near harbors and quieter methods for oil and gas prospecting will all help us restore peace and quiet to our neighbors in the sea.

Quote of the talk: “Sonar can be as loud as, or nearly as loud as, an underwater volcano. A supertanker can be as loud as the call of a blue whale.”


TED curator Chee Pearlman (left) speaks with architect Marwa Al-Sabouni at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Marwa Al-Sabouni, architect, interviewed by TED curator Chee Pearlman

Big idea: Architecture can exacerbate the social disruptions that lead to armed conflict.

How? Since the time of the French Mandate, officials in Syria have shrunk the communal spaces that traditionally united citizens of varying backgrounds. This contributed to a sense of alienation and rootlessness — a volatile cocktail that created the conditions for unrest and, eventually, war. Marwa Al-Sabouni, a resident of Homs, Syria, saw firsthand how this unraveled social fabric helped reduce the city to rubble during the civil war. Now, she’s taking part in the city’s slow reconstruction — conducted by citizens with little or no government aid. As she explains in her book The Battle for Home, architects have the power (and the responsibility) to connect a city’s residents to a shared urban identity, rather than to opposing sectarian groups.

Quote of the talk: “Syria had a very unfortunate destiny, but it should be a lesson for the rest of the world: to take notice of how our cities are making us very alienated from each other, and from the place we used to call home.”


“Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit,” says Ma Yansong. He speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Ma Yansong, architect and artist

Big idea: By creating architecture that blends with nature, we can break free from the “matchbox” sameness of many city buildings.

How? Ma Yansong paints a vivid image of what happens when nature collides with architecture — from a pair of curvy skyscrapers that “dance” with each other to buildings that burst out of a village’s mountains like contour lines. Ma embraces the shapes of nature — which never repeat themselves, he notes — and the randomness of hand-sketched designs, creating a kind of “emotional scenery.” When we think beyond the boxy geometry of modern cities, he says, the results can be breathtaking.

Quote of the talk: “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit.”