Planet Russell


TED: 10 years of TED Fellows: Notes from the Fellows Session of TEDSummit 2019

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

The event: TEDSummit 2019, Fellows Session, hosted by Shoham Arad and Lily Whitsitt

When and where: Monday, July 22, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Carl Joshua Ncube, Suzanne Lee, Sonaar Luthra, Jon Lowenstein, Alicia Eggert, Lauren Sallan, Laura Boykin

Opening: A quick, witty performance from Carl Joshua Ncube, one of Zimbabwe’s best-known comedians, who uses humor to approach culturally taboo topics from his home country.

Music: An opening from visual artist and cellist Paul Rucker of the hauntingly beautiful “Criminalization of Survival,” a piece he created to explore issues related to mass incarceration, racially-motivated violence, police brutality and the impact of slavery in the US.

And a dynamic closing from hip-hop artist and filmmaker Blitz Bazawule and his band, who tell stories of the polyphonic African diaspora.

The talks in brief:

Laura Boykin, computational biologist at the University of Western Australia

Big idea: If we’re going to solve the world’s toughest challenges — like food scarcity for millions of people living in extreme poverty — science needs to be more diverse and inclusive. 

How? Collaborating with smallholder farmers in sub-Saharan Africa, Laura Boykin uses genomics and supercomputing to help control whiteflies and viruses, which cause devastation to cassava crops. Cassava is a staple food that feeds more than 500 million people in East Africa and 800 million people globally. Boykin’s work transforms farmers’ lives, taking them from being unable to feed their families to having enough crops to sell and enough income to thrive.

Quote of the talk: “I never dreamt the best science I would ever do would be sitting on a blanket under a tree in East Africa, using the highest tech genomics gadgets. Our team imagined a world where farmers could detect crop viruses in three hours instead of six months — and then we did it.”


Lauren Sallan, paleobiologist at the University of Pennsylvania

Big idea: Paleontology is about so much more than dinosaurs.

How? The history of life on earth is rich, varied and … entirely too focused on dinosaurs, according to Lauren Sallan. The fossil record shows that earth has a dramatic past, with four mass extinctions occurring before dinosaurs even came along. From fish with fingers to galloping crocodiles and armored squid, the variety of life that has lived on our changing planet can teach us more about how we got here, and what the future holds, if we take the time to look.

Quote of the talk: “We have learned a lot about dinosaurs, but there’s so much left to learn from the other 99.9 percent of things that have ever lived, and that’s paleontology.”


“If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem,” says Suzanne Lee. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Suzanne Lee, designer, biofabricator

Big idea: What if we could grow bricks, furniture and even ready-made fabric for clothes?

How? Suzanne Lee is a fashion designer turned biofabrication pioneer, part of a global community of innovators figuring out how to grow their own materials. By utilizing living microbial organisms like bacteria and fungi, we can replace plastic, cement and other waste-generating materials with alternatives that help reduce pollution.

Quote of the talk: “If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem.”


Sonaar Luthra, founder and CEO of Water Canary

Big idea: We need to get better at monitoring the world’s water supplies — and we need to do it fast.

How? Building a global weather service for water would help governments, businesses and communities manage 21st-century water risk. Sonaar Luthra’s company Water Canary aims to develop technologies that will more efficiently monitor water quality and availability around the world, avoiding the unforecasted shortages that currently occur. Businesses and governments must also invest more in water, he says, and the largest polluters and misusers of water must be held accountable.

Quote of the talk: “It is in the public interest to measure and to share everything we can discover and learn about the risks we face in water. Reality doesn’t exist until it’s measured. It doesn’t just take technology to measure it — it takes our collective will.”


Jon Lowenstein shares photos from the migrant journey in Latin America at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Jon Lowenstein, Documentary photographer, filmmaker, visual artist

Big idea: We need to care about the humanity of migrants in order to understand the desperate journeys they’re making across borders.

How? For the past two decades, Jon Lowenstein has captured the experiences of undocumented Latin Americans living in the United States to show the real stories of the men and women who make up the largest transnational migration in world history. Lowenstein specializes in long-term, in-depth documentary explorations that confront power, poverty and violence. 

Quote of the talk: “With these photographs, I place you squarely in the middle of these moments and ask you to think about [the people in them] as if you knew them. This body of work is a historical document — a time capsule — that can teach us not only about migration but about society and ourselves.”


Like navigational signs, Eggert’s artwork asks us to recognize where we are now as individuals and as a society, to identify where we want to be in the future. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Alicia Eggert, interdisciplinary artist

Big idea: A brighter, more equitable future depends upon our ability to imagine it.  

How? Alicia Eggert creates art that explores how light travels across space and time and reveals the relationship between reality and possibility. Her works have been installed on building rooftops in Philadelphia, bridges in Amsterdam and uninhabited islands in Maine. Like navigational signs, Eggert’s artwork asks us to recognize where we are now as individuals and as a society, to identify where we want to be in the future — and to imagine the routes we can take to get there.

Quote of the talk: “Signs often help to orient us in the world by telling us where we are now and what’s happening in the present moment. But they can also help us zoom out, shift our perspective and get a sense of the bigger picture.”

TED: Anthropo Impact: Notes from Session 2 of TEDSummit 2019

Radio Science Orchestra performs the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Session 2 of TEDSummit 2019 is all about impact: the actions we can take to solve humanity’s toughest challenges. Speakers and performers explore the perils — from melting glaciers to air pollution — along with some potential fixes — like ocean-going seaweed farms and radical proposals for how we can build the future.

The event: TEDSummit 2019, Session 2: Anthropo Impact, hosted by David Biello and Chee Pearlman

When and where: Monday, July 22, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Tshering Tobgay, María Neira, Tim Flannery, Kelly Wanser, Anthony Veneziale, Nicola Jones, Marwa Al-Sabouni, Ma Yansong

Music: Radio Science Orchestra, performing the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing (and the 100th anniversary of the theremin’s invention)

… and something completely different: Improv maestro Anthony Veneziale, delivering a made-up-on-the-spot TED Talk based on a deck of slides he’d never seen and an audience-suggested topic: “the power of potatoes.” The result was … surprisingly profound.

The talks in brief:

Tshering Tobgay, politician, environmentalist and former Prime Minister of Bhutan

Big idea: We must save the Hindu Kush Himalayan glaciers from melting — or else face dire, irreversible consequences for one-fifth of the global population.

Why? The Hindu Kush Himalayan glaciers are the pulse of the planet: their rivers alone supply water to 1.6 billion people, and their melting would massively impact the 240 million people across eight countries within their reach. Think in extremes — more intense rains, flash floods and landslides along with unimaginable destruction and millions of climate refugees. Tshering Tobgay telegraphs the future we’re headed towards unless we act fast, calling for a new intergovernmental agency: the Third Pole Council. This council would be tasked with monitoring the glaciers’ health, implementing policies to protect them and, by proxy, the billions who depend on them.

Fun fact: The Hindu Kush Himalayan glaciers are the world’s third-largest repository of ice (after the North and South poles). They’re known as the “Third Pole” and the “Water Towers of Asia.”


Air pollution isn’t just bad for the environment — it’s also bad for our brains, says María Neira. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

María Neira, public health leader

Big idea: Air pollution isn’t just bad for our lungs — it’s bad for our brains, too.

Why? Globally, poor air quality causes seven million premature deaths per year. And all this pollution isn’t just affecting our lungs, says María Neira. An emerging field of research is shedding light on the link between air pollution and our central nervous systems. The fine particulate matter in air pollution travels through our bloodstreams to our major organs, including the brain — which can slow down neurological development in kids and speed up cognitive decline in adults. In short: air pollution is making us less intelligent. We all have a role to play in curbing air pollution — and we can start by reducing traffic in cities, investing in clean energy and changing the way we consume.

Quote of the talk: “We need to exercise our rights and put pressure on politicians to make sure they will tackle the causes of air pollution. This is the first thing we need to do to protect our health and our beautiful brains.”


Tim Flannery, environmentalist, explorer and professor

Big idea: Seaweed could help us draw down atmospheric carbon and curb global warming.

How? You know the story: the blanket of CO2 above our heads is driving adverse climate changes and will continue to do so until we get it out of the air (a process known as “drawdown”). Tim Flannery thinks seaweed could help: it grows fast, is made out of productive, photosynthetic tissue and, when sunk more than a kilometer deep into the ocean, can lock up carbon long-term. If we cover nine percent of the ocean surface in seaweed farms, for instance, we could sequester the same amount of CO2 we currently put into the atmosphere. There’s still a lot to figure out, Flannery notes — like how growing seaweed at scale on the ocean surface will affect biodiversity down below — but the drawdown potential is too great to allow uncertainty to stymie progress.

Fun fact: Seaweed is the most ancient multicellular life known, with more genetic diversity than all other multicellular life combined.


Could cloud brightening help curb global warming? Kelly Wanser speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. Photo: Bret Hartman / TED

Kelly Wanser, geoengineering expert and executive director of SilverLining

Big idea: The practice of cloud brightening — seeding clouds with sea salt or other particulates to reflect sunshine back into space — could partially offset global warming, giving us crucial time while we figure out game-changing, long-term solutions.

How: Starting in 2020, new global regulations will require ships to cut emissions by 85 percent. This is a good thing, right? Not entirely, says Kelly Wanser. It turns out that when particulate emissions (like those from ships) mix with clouds, they make the clouds brighter — enabling them to reflect sunshine into space and temporarily cool our climate. (Think of it as the ibuprofen for our fevered climate.) Wanser’s team and others are coming up with experiments to see if “cloud-brightening” proves safe and effective; some scientists believe increasing the atmosphere’s reflectivity by one or two percent could offset the two degrees Celsius of warming that’s been forecast for Earth. As with other climate interventions, there’s much yet to learn, but the potential benefits make those efforts worth it.

An encouraging fact: The global community has rallied to pull off this kind of atmospheric intervention in the past, with the 1989 Montreal Protocol.


Nicola Jones, science journalist

Big idea: Noise in our oceans — from boat motors to seismic surveys — is an acute threat to underwater life. Unless we quiet down, we will irreparably damage marine ecosystems and may even drive some species to extinction.

How? We usually think of noise pollution as a problem in big cities on dry land. But ocean noise may be the culprit behind marine disruptions like whale strandings, fish kills and drops in plankton populations. Fortunately, compared to other climate change solutions, it’s relatively quick and easy to dial down our noise levels and keep our oceans quiet. Better ship propeller design, speed limits near harbors and quieter methods for oil and gas prospecting will all help humans restore peace and quiet to our neighbors in the sea.

Quote of the talk: “Sonar can be as loud as, or nearly as loud as, an underwater volcano. A supertanker can be as loud as the call of a blue whale.”


TED curator Chee Pearlman (left) speaks with architect Marwa Al-Sabouni at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Marwa Al-Sabouni, architect, interviewed by TED curator Chee Pearlman

Big idea: Architecture can exacerbate the social disruptions that lead to armed conflict.

How? Since the time of the French Mandate, officials in Syria have shrunk the communal spaces that traditionally united citizens of varying backgrounds. This contributed to a sense of alienation and rootlessness — a volatile cocktail that built conditions for unrest and, eventually, war. Marwa Al-Sabouni, a resident of Homs, Syria, saw firsthand how this unraveled social fabric helped reduce the city to rubble during the civil war. Now, she’s taking part in the city’s slow reconstruction — conducted by citizens with little or no government aid. As she explains in her book The Battle for Home, architects have the power (and the responsibility) to connect a city’s residents to a shared urban identity, rather than to opposing sectarian groups.

Quote of the talk: “Syria had a very unfortunate destiny, but it should be a lesson for the rest of the world: to take notice of how our cities are making us very alienated from each other, and from the place we used to call home.”


“Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit,” says Ma Yansong. He speaks at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Ma Yansong, architect and artist

Big Idea: By creating architecture that blends with nature, we can break free from the “matchbox” sameness of many city buildings.

How? Ma Yansong paints a vivid image of what happens when nature collides with architecture — from a pair of curvy skyscrapers that “dance” with each other to buildings that burst out of a village’s mountains like contour lines. Ma embraces the shapes of nature — which never repeat themselves, he notes — and the randomness of hand-sketched designs, creating a kind of “emotional scenery.” When we think beyond the boxy geometry of modern cities, he says, the results can be breathtaking.

Quote of talk: “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit.”

TED: A new mission to mobilize 2 million women in US politics … and more TED news

TED2019 may be past, but the TED community is busy as ever. Below, a few highlights.

Amplifying 2 million women across the U.S. Activist Ai-jen Poo, Black Lives Matter co-founder Alicia Garza and Planned Parenthood past president Cecile Richards have joined forces to launch Supermajority, which aims to train 2 million women in the United States to become activists and political leaders. To scale, the political hub plans to partner with local nonprofits across the country; as a first step, the co-founders will embark on a nationwide listening tour this summer. (Watch Poo’s, Garza’s and Richards’ TED Talks.)

Sneaker reseller set to break billion-dollar record. Sneakerheads, rejoice! StockX, the sneaker-reselling digital marketplace led by data expert Josh Luber, will soon become the first company of its kind with a billion-dollar valuation, thanks to a new round of venture funding. StockX — a platform where collectible and limited-edition sneakers are bought and exchanged through real-time bidding — is an evolution of Campless, Luber’s site that collected data on rare sneakers. In an interview with The New York Times, Luber said that StockX pulls in around $2 million in gross sales every day. (Watch Luber’s TED Talk.)

A move to protect iconic African-American photo archives. Investment expert Mellody Hobson and her husband, filmmaker George Lucas, filed a motion to acquire the rich photo archives of iconic African-American lifestyle magazines Ebony and Jet. The archives are owned by the recently bankrupt Johnson Publishing Company; Hobson and Lucas intend to gain control over them through their company, Capital Holdings V. The collections include over 5 million photos of notable events and people in African American history, particularly during the Civil Rights Movement. In a statement, Capital Holdings V said: “The Johnson Publishing archives are an essential part of American history and have been critical in telling the extraordinary stories of African-American culture for decades. We want to be sure the archives are protected for generations to come.” (Watch Hobson’s TED Talk.)

10 TED speakers chosen for the TIME100. TIME’s annual round-up of the 100 most influential people in the world includes climate activist Greta Thunberg, primatologist and environmentalist Jane Goodall, astrophysicist Sheperd Doeleman and educational entrepreneur Fred Swaniker — also Nancy Pelosi, the Pope, Leana Wen, Michelle Obama, Gayle King (who interviewed Serena Williams and now co-hosts CBS This Morning, home to a TED segment) and Jeanne Gang. Thunberg was honored for her work igniting climate change activism among teenagers across the world; Goodall for her extraordinary life work of research into the natural world and her steadfast environmentalism; Doeleman for his contribution to the Harvard team of astronomers who took the first photo of a black hole; and Swaniker for the work he’s done to educate and cultivate the next generation of African leaders. Bonus: TIME100 luminaries are introduced in short, sharp essays, and this year many of them came from TEDsters, including JR, Shonda Rhimes, Bill Gates, Jennifer Doudna, Dolores Huerta, Hans Ulrich Obrist, Tarana Burke, Kai-Fu Lee, Ian Bremmer, Stacey Abrams, Madeleine Albright, Anna Deavere Smith and Margrethe Vestager. (Watch Thunberg’s, Goodall’s, Doeleman’s, Pelosi’s, Pope Francis’, Wen’s, Obama’s, King’s, Gang’s and Swaniker’s TED Talks.)

Meet Sports Illustrated’s first hijab-wearing model. Model and activist Halima Aden will be the first hijab-wearing model featured in Sports Illustrated’s annual swimsuit issue, debuting May 8. Aden will wear two custom burkinis, modestly designed swimsuits. “Being in Sports Illustrated is so much bigger than me,” Aden said in a statement, “It’s sending a message to my community and the world that women of all different backgrounds, looks, upbringings can stand together and be celebrated.” (Watch Aden’s TED Talk.)

Scotland post-surgical deaths drop by a third, and checklists are to thank. A study indicated a 37 percent decrease in post-surgical deaths in Scotland since 2008, which it attributed to the implementation of a safety checklist. The 19-item list created by the World Health Organization is supposed to encourage teamwork and communication during operations. The death rate fell to 0.46 per 100 procedures between 2000 and 2014, analysis of 6.8 million operations showed. Dr. Atul Gawande, who introduced the checklist and co-authored the study, published in the British Journal of Surgery, said to the BBC: “Scotland’s health system is to be congratulated for a multi-year effort that has produced some of the largest population-wide reductions in surgical deaths ever documented.” (Watch Gawande’s TED Talk.) — BG

And finally … After the actor Luke Perry died unexpectedly of a stroke in February, he was buried according to his wishes: on his Tennessee family farm, wearing a suit embedded with spores that will help his body decompose naturally and return to the earth. His Infinity Burial Suit was made by Coeio, led by designer, artist and TED Fellow Jae Rhim Lee. Back in 2011, Lee demo’ed the mushroom burial suit onstage at TEDGlobal; now she’s focused on testing and creating suits for more people. On April 13, Lee spoke at Perry’s memorial service, held at Warner Bros. Studios in Burbank; Perry’s daughter revealed his story in a thoughtful Instagram post this past weekend. (Watch Lee’s TED Talk.) — EM

Cryptogram: Science Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn't new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it "embarrassing." I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats -- which contributes to overall fear and overestimation of the risks. And that doesn't help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Worse Than Failure: CodeSOD: Break my Validation

Linda inherited an inner-platform front-end framework. It was the kind of UI framework with an average file size of 1,000 lines of code, and an average test coverage of 0%.

Like most UI frameworks, it had a system for doing client side validation. Like most inner-platform UI frameworks, the validation system was fragile, confusing, and impossible to understand.

This code illustrates some of the problems:

/**
 * Modify a validator key, e.g change minValue or disable required
 *
 * @param fieldName
 * @param validatorKey - of the specific validator
 * @param key - the key to change
 * @param value - the value to set
 */
modifyValidatorValue: function (fieldName, validatorKey, key, value) {

	if (!helper.isNullOrUndefined(fieldName)) {
		// Iterate over fields
		for (var i in this.fields) {
			if (this.fields.hasOwnProperty(i)) {
				var field = this.fields[i];
				if (field.name === fieldName) {
					if (!helper.isNullOrUndefined(validatorKey)) {
						if (field.hasOwnProperty('validators')) {
							// Iterate over validators
							for (var j in field.validators) {
								if (field.validators.hasOwnProperty(j)) {
									var validator = field.validators[j];
									if (validator.key === validatorKey) {
										if (!helper.isNullOrUndefined(key) && !helper.isNullOrUndefined(value)) {
											if (validator.hasOwnProperty(key)) {
												validator[key] = value;
											}
										}
										break;
									}
								}
							}
						}
					}
					break;
				}
			}
		}
	}

}

What this code needs to do is find the field for a given name, check the list of validators for that field, and update a value on that validator.

Normally, in JavaScript, you’d do this by using an object/dictionary and accessing things directly by their keys. This, instead, iterates across all the fields on the object and all the validators on that field.
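For contrast, here is a minimal sketch of that direct-lookup approach. It assumes, purely for illustration, that the fields and validators are stored in plain objects keyed by name; the shapes and names below are invented, not the framework's actual data model:

```javascript
// Hypothetical reshaping: fields keyed by field name,
// validators keyed by validator key.
const form = {
  fields: {
    email: {
      name: 'email',
      validators: {
        required: { key: 'required', enabled: true },
        length:   { key: 'length', minValue: 5 }
      }
    }
  }
};

// Same job as modifyValidatorValue, without the nested loops:
// look the field and validator up directly, then set the key.
function modifyValidatorValue(form, fieldName, validatorKey, key, value) {
  const validator = form.fields[fieldName]?.validators?.[validatorKey];
  if (validator && key in validator) {
    validator[key] = value;
  }
}

modifyValidatorValue(form, 'email', 'length', 'minValue', 8);
console.log(form.fields.email.validators.length.minValue); // 8
```

With keyed objects, each lookup is a single property access, so there is nothing to iterate over and no `break` placement to second-guess.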

It’s smart code, though, as the developer knew that once they found the fields in question, they could exit the loops, so they added a few breaks to exit. I think those breaks are in the right place. To be sure, I’d need to count curly braces, and I’m worried that I don’t have enough fingers and toes to count them all in this case.

According to the git log, this code was added in exactly this form, and hasn’t been touched since.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Cory Doctorow: Podcast: Adversarial Interoperability is Judo for Network Effects

In my latest podcast (MP3), I read my essay SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects, published last week on EFF’s Deeplinks; it’s a further exploration of the idea of “adversarial interoperability” and the role it has played in fighting monopolies and preserving competition, and how we could use it to restore competition today.

In tech, “network effects” can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn’t just have to be better than Facebook, it has to be so much better than Facebook that it’s worth using, even though all the people you want to talk to are still on Facebook. That’s a tall order.

Adversarial interoperability is judo for network effects, using incumbents’ dominance against them. To see how that works, let’s look at a historical example of adversarial interoperability’s role in helping to unseat a monopolist’s dominance.

The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn’t open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn’t play nice with Macs.

MP3

TED: Weaving Community: Notes from Session 1 of TEDSummit 2019

Hosts Bruno Giussani and Helen Walters open Session 1: Weaving Community on July 21, 2019, Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The stage is set for TEDSummit 2019: A Community Beyond Borders! During the opening session, speakers and performers explored themes of competition, political engagement and longing — and celebrated the TED communities (representing 84 countries) gathered in Edinburgh, Scotland to forge TED’s next chapter.

The event: TEDSummit 2019, Session 1: Weaving Community, hosted by Bruno Giussani and Helen Walters

When and where: Sunday, July 21, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Pico Iyer, Jochen Wegner, Hajer Sharief, Mariana Lin, Carole Cadwalladr, Susan Cain with Min Kym

Opening: A warm Scottish welcome from raconteur Mackenzie Dalrymple

Music: Findlay Napier and Gillian Frame performing selections from The Ledger, a series of Scottish folk songs

The talks in brief:

“Seeming happiness can stand in the way of true joy even more than misery does,” says writer Pico Iyer. (Photo: Ryan Lash / TED)

Pico Iyer, novelist and nonfiction author

Big idea: The opposite of winning isn’t losing; it’s failing to see the larger picture.

Why? As a child in England, Iyer believed the point of competition was to win, to vanquish one’s opponent. Now, some 50 years later and a resident of Japan, he’s realized that competition can be “more like an act of love.” A few times a week, he plays ping-pong at his local health club. Games are played as doubles, and partners are changed every five minutes. As a result, nobody ends up winning — or losing — for long. Iyer has found liberation and wisdom in this approach. Just as in a choir, he says, “Your only job is to play your small part perfectly, to hit your notes with feeling and by so doing help to create a beautiful harmony that’s much greater than the sum of its parts.”

Quote of the talk: “Seeming happiness can stand in the way of true joy even more than misery does.”


Jochen Wegner, journalist and editor of Zeit Online

Big idea: The spectrum of belief is as multifaceted as humanity itself. As social media segments us according to our interests, and as algorithms deliver us increasingly homogenous content that reinforces our beliefs, we become resistant to any ideas — or even facts — that contradict our worldview. The more we sequester ourselves, the more divided we become. How can we learn to bridge our differences?

How? Inspired by research showing that one-on-one conversations are a powerful tool for helping people learn to trust each other, Zeit Online built Germany Talks, a “Tinder for politics” that facilitates “political arguments” and face-to-face meetings between users in an attempt to bridge their points of view on issues ranging from immigration to same-sex marriage. With Germany Talks (and now My Country Talks and Europe Talks) Zeit has facilitated conversations between thousands of Europeans from 33 countries.

Quote of the talk: “What matters here is not the numbers, obviously. What matters here is whenever two people meet to talk in person for hours, without anyone else listening, they change — and so do our societies. They change, little by little, discussion by discussion.”


“The systems we have nowadays for political decision-making are not from the people for the people — they have been established by the few, for the few,” says activist Hajer Sharief. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Hajer Sharief, activist and cofounder of the Together We Build It Foundation

Big Idea: People of all genders, ages, races, beliefs and socioeconomic statuses should participate in politics.

Why? Hajer Sharief’s native Libya is recovering from 40 years of authoritarian rule and civil war. She sheds light on the way politics are involved in every aspect of life: “By not participating in it, you are literally allowing other people to decide what you can eat, wear, if you can have access to healthcare, free education, how much tax you pay, when can you retire, what is your pension,” she says. “Other people are also deciding whether your race is enough to consider you a criminal, or if your religion or nationality are enough to put you on a terrorist list.” When Sharief was growing up, her family held weekly meetings to discuss family issues, abiding by certain rules that ensured everyone was respectful and felt free to voice their thoughts. She recounts a meeting that went badly for her 10-year-old self, resulting in her boycotting them altogether for many years — until an issue came about which forced her to participate again. Rejoining the meetings was a political assertion, and it helped her realize an important lesson: you are never too young to use your voice — but you need to be present for it to work.

Quote of the talk: “Politics is not only activism — it’s awareness, it’s keeping ourselves informed, it’s caring for facts. When it’s possible, it is casting a vote. Politics is the tool through which we structure ourselves as groups and societies.”


Mariana Lin, AI character designer and principal writer for Siri

Big idea: Let’s inject AI personalities with the essence of life: creativity, weirdness, curiosity, fun.

Why? Tech companies are going in two different directions when it comes to creating AI personas: they’re either building systems that are safe, flat, stripped of quirks and humor — or, worse, they’re building ones that are fully customizable, programmed to say just what you want to hear, just how you like to hear it. While this might sound nice at first, we’re losing part of what makes us human in the process: the friction and discomfort of relating with others, the hard work of building trusting relationships. Mariana Lin calls for tech companies to try harder to truly bring AI to life — in all its messy, complicated, uncomfortable glory. For starters, she says, companies can hire a diverse range of writers, creatives, artists and social thinkers to work on AI teams. If the people creating these personalities are as diverse as the people using them — from poets and philosophers to bankers and beekeepers — then the future of AI looks bright.

Quote of the talk: “If we do away with the discomfort of relating with others not exactly like us, with views not exactly like ours — we do away with what makes us human.”


In 2018, Carole Cadwalladr exposed Cambridge Analytica’s attempt to influence the UK Brexit vote and the 2016 US presidential election via personal data on Facebook. She’s still working to sound the alarm. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carole Cadwalladr, investigative journalist, interviewed by TED curator Bruno Giussani

Big idea: Companies that collect and hoard our information, like Facebook, have become unthinkably powerful global players — perhaps more powerful than governments. It’s time for the public to hold them accountable.

How? Tech companies with offices in different countries must obey the laws of those nations. It’s up to leaders to make sure those laws are enforced — and it’s up to citizens to pressure lawmakers to further tighten protections. Despite legal and personal threats from her adversaries, Carole Cadwalladr continues to explore the ways in which corporations and politicians manipulate data to consolidate their power.

Quote to remember: “In Britain, Brexit is this thing which is reported on as this British phenomenon, that’s all about what’s happening in Westminster. The fact that actually we are part of something which is happening globally — this rise of populism and authoritarianism — that’s just completely overlooked. These transatlantic links between what is going on in Trump’s America are very, very closely linked to what is going on in Britain.”


Susan Cain meditates on how the feeling of longing can guide us to a deeper understanding of ourselves, accompanied by Min Kym on violin, at TEDSummit: A Community Beyond Borders. July 21, 2019, Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Susan Cain, quiet revolutionary, with violinist Min Kym

Big idea: Life is steeped in sublime magic that you can tap into, opening a whole world filled with passion and delight.

How? By forgoing constant positivity for a state of mind more exquisite and fleeting — a place where light (joy) and darkness (sorrow) meet, known to us all as longing. Susan Cain weaves her journey in search of the sublime with the splendid sounds of Min Kym on violin, sharing how the feeling of yearning connects us to each other and helps us to better understand what moves us deep down.

Quote of the talk: “Follow your longing where it’s telling you to go, and may it carry you straight to the beating heart of the perfect and beautiful world.”

Krebs on Security: What You Should Know About the Equifax Data Breach Settlement

Big-three credit bureau Equifax has reportedly agreed to pay at least $650 million to settle lawsuits stemming from a 2017 breach that let intruders steal personal and financial data on roughly 148 million Americans. Here’s a brief primer that attempts to break down what this settlement means for you, and what it says about the value of your identity.

Q: What happened?

A: If the terms of the settlement are approved by a court, the Federal Trade Commission says Equifax will be required to spend up to $425 million helping consumers who can demonstrate they were financially harmed by the breach. The company also will provide up to 10 years of free credit monitoring to those who had their data exposed.

Q: What about the rest of the money in the settlement?

A: An as-yet undisclosed amount will go to pay lawyers’ fees for the plaintiffs.

Q: $650 million seems like a lot. Is that some kind of record?

A: If not, it’s pretty close. The New York Times reported earlier today that it was thought to be the largest settlement ever paid by a company over a data breach, but that statement doesn’t appear anywhere in their current story.

Q: Hang on…148 million affected consumers…out of that $425 million pot that comes to just $2.87 per victim, right?

A: That’s one way of looking at it. But as always, the devil is in the details. You won’t see a penny or any other benefit unless you do something about it, and how much you end up costing the company (within certain limits) is up to you.

The Times reports that the proposed settlement assumes that only around seven million people will sign up for their credit monitoring offers. “If more do, Equifax’s costs for providing it could rise meaningfully,” the story observes.

Q: Okay. What can I do?

A: You can visit www.equifaxbreachsettlement.com, although none of this will be official or on offer until a court approves the settlement.

Q: Uh, that doesn’t look like Equifax’s site…

A: Good eyes! It’s not. It’s run by a third party. But we should probably just be grateful for that; given Equifax’s total dumpster fire of a public response to the breach, the company has shown itself incapable of operating (let alone securing) a properly functioning Web site.

Q: What can I get out of this?

A: In a nutshell, affected consumers are eligible to apply for one or more remedies, including:

Free credit monitoring: At least three years of credit monitoring via all three major bureaus simultaneously, including Equifax, Experian and Trans Union. The settlement also envisions up to six more years of single bureau monitoring through Experian. Or, if you don’t want to take advantage of the credit monitoring offers, you can opt instead for a $125 cash payment. You can’t get both.

Reimbursement: …For the time you spent remedying identity theft or misuse of your personal information caused by the breach, or purchasing credit monitoring or credit reports. This is capped at 20 total hours at $25 per hour ($500). Total cash reimbursement payment will not exceed $20,000 per consumer.

Help with ongoing identity theft issues: Up to seven years of “free assisted identity restoration services.” Again, the existing breach settlement page is light on specifics there.

Q: Does this cover my kids/dependents, too?

A: The FTC says if you were a minor in May 2017 (when Equifax first learned of the breach), you are eligible for a total of 18 years of free credit monitoring.

Q: How do I take advantage of any of these?

A: You can’t yet. The settlement has to be approved first. The settlement Web site says to check back again later. In addition to checking the breach settlement site periodically, consumers can sign up with the FTC to receive email updates about this settlement.

The settlement site said consumers also can call 1-833-759-2982 for more information. Press #2 on your phone’s keypad if you want to skip the 1-minute preamble and get straight into the queue to speak with a real person.

KrebsOnSecurity dialed in to ask for more details on the “free assisted identity restoration services,” and the person who took my call said they’d need some basic information in order to proceed: my name, address and phone number. I gave him a number and a name, and after checking with someone he came back and said the restoration services would be offered by Equifax, but confirmed that affected consumers would still have to apply for it.

He added that the Equifaxbreachsettlement.com site will soon include a feature that lets visitors check to see if they’re eligible, but also confirmed that just checking eligibility won’t entitle one to any of the above benefits: Consumers will still need to file a claim through the site (when it’s available to do so).

ANALYSIS

We’ll see how this unfolds, but I’ll be amazed if anything related to taking advantage of this settlement is painless. I still can’t even get the free copy of my credit report from Equifax that I’m entitled to under the law each year. I’ve even requested a copy by mail, according to their instructions. So far nothing.

But let’s say for the sake of argument that our questioner is basically right — that this settlement breaks down to about $3 worth of flesh extracted from Equifax for each affected person. The thing is, this figure probably is less than what Equifax makes selling your credit history to potential creditors each year.

In a 2017 story about the Equifax breach, I quoted financial fraud expert Avivah Litan saying the credit bureaus make about $1 every time they sell your credit file to a potential creditor (or identity thief posing as you). According to recent stats from the New York Federal Reserve, there were around 145 million hard credit pulls in the fourth quarter of 2018 (it’s not known how many of those were legitimate or desired).

But there is something you can do to stop Equifax and the other bureaus from profiting this way: Freeze your credit files with them.

A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you. And it’s now free for all Americans.

This post explains in detail what’s involved in freezing your files; how to place, thaw or remove a freeze; the limitations of a freeze and potential side effects; and alternatives to freezes.

What’s wrong with just using credit monitoring, you might ask? These services do not prevent thieves from using your identity to open new lines of credit, and from damaging your good name for years to come in the process. The most you can hope for is that credit monitoring services will alert you soon after an ID thief does steal your identity.

If past experience is any teacher, anyone with a freeze on their credit file will need to briefly thaw their file at Equifax before successfully signing up for the service when it’s offered. Since a law mandating free freezes across the land went into effect, all three bureaus have made it significantly easier to place and lift security freezes.

Probably too easy, in fact. Especially for people who had freezes in place before Equifax revamped its freeze portal. Those folks were issued a numeric PIN to lift, thaw or remove a freeze, but Equifax no longer lets those users do any of those things online with just the PIN.

These days, that PIN doesn’t play a role in any freeze or thaw process. To create an account at the MyEquifax portal, one need only supply name, address, Social Security number, date of birth, any phone number (all data points exposed in the Equifax breach, and in any case widely available for sale in the cybercrime underground) and answer four multiple-guess questions whose answers are often available in public records or on social media.

And so this is yet another reason why you should freeze your credit: If you don’t sign up as you at MyEquifax, someone else might do it for you.

What else can you do in the meantime? Be wary of any phone calls or emails you didn’t sign up for that invoke this data breach settlement and ask you to provide personal and/or financial information.

And if you haven’t done so lately, go get a free copy of your credit report from annualcreditreport.com; by law all Americans are entitled to a free report from each of the major bureaus annually. You can opt for one report, or all three at once. Either way, make sure to read the report(s) closely and dispute anything that looks amiss.

It has long been my opinion that the big three bureaus are massively stifling innovation and offering consumers so little choice or say in the bargain that’s being made on the backs of their hard work, integrity and honesty. The real question is, if someone or something eventually serves to dis-intermediate the big three and throw the doors wide open to competition, what would the net effect be for consumers?

Obviously, there is no way to know for sure, but a company that truly offered to pay consumers anywhere near what their data is actually worth would probably wipe these digital dinosaurs from the face of the earth.

That is, if the banks could get on board. After all, the banks and their various fingers are what drive the credit industry. And these giants don’t move very nimbly. They’re massively hard to turn, even for the simplest changes. And they’re not known for quickly warming to an entirely new model of doing business (i.e. huge cost investments).

My hometown Sen. Mark Warner (D-Va.) seems to suggest the $650 million settlement was about half what it should be.

“Americans don’t choose to have companies like Equifax collecting their data – by the nature of their business models, credit bureaus collect your personal information whether you want them to or not. In light of that, the penalties for failing to secure that data should be appropriately steep. While I’m happy to see that customers who have been harmed as a result of Equifax’s shoddy cybersecurity practices will see some compensation, we need structural reforms and increased oversight of credit reporting agencies in order to make sure that this never happens again.”

Sen. Warner sponsored a bill along with Sen. Elizabeth Warren (D-Ma.) called “The Data Breach Prevention and Compensation Act,” which calls for “robust compensation to consumers for stolen data; mandatory penalties on credit reporting agencies (CRAs) for data breaches; and giving the FTC more direct supervisory authority over data security at CRAs.”

“Had the bill been in effect prior to the 2017 Equifax breach, the company would have had to pay at least $1.5 billion for their failure to protect Americans’ personal information,” Warner’s statement concludes.

Update, 4:44 pm: Added statement from Sen. Warner.

Cryptogram: Hackers Expose Russian FSB Cyberattack Projects

More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus -- a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).

  • Nautilus-S -- a project for deanonymizing Tor traffic with the help of rogue Tor servers.

  • Reward -- a project to covertly penetrate P2P networks, like the one used for torrents.

  • Mentor -- a project to monitor and search email communications on the servers of Russian companies.

  • Hope -- a project to investigate the topology of the Russian internet and how it connects to other countries' networks.

  • Tax-3 -- a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state's IT networks.

BBC Russia, who received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.

Planet Debian: Candy Tsai: Outreachy Week 6 – Week 7: Getting Code Merged

Already halfway through the internship! I have implemented some features and opened a merge request. So… what now? Let’s get those changes merged once and for all! Since I’m already at the midpoint, there’s also a video on what I’ve done so far in this project.

  • Breaking large merge request into smaller pieces
  • Thoughts on remote pair programming
  • Video sharing for the current progress with the project

Making that video was probably the most time-consuming part. Paying great respects to all YouTubers out there!

Breaking The Merge Request

When I looked back at my merge request, it actually started out quite small and precise. After discussions in the merge request, I kept fixing things in that same merge request, and it just got bigger and bigger, so we had to separate out the “mergeable parts” to make actual progress in this project.
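A purely illustrative sketch of that splitting process (the repository, branch names and commits below are invented, not the project's actual history): the already-agreed commits get their own branch, which can back a small merge request of its own.

```shell
# Illustrative only: carve a mergeable slice out of an oversized
# feature branch so it can become its own, smaller merge request.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email dev@example.com
git config user.name Dev

git commit -q --allow-empty -m "initial"
git checkout -q -b big-feature
echo a > part1.txt && git add part1.txt && git commit -q -m "part 1: agreed in review"
echo b > part2.txt && git add part2.txt && git commit -q -m "part 2: still under discussion"

# First small MR: a branch pointing at the last uncontroversial commit.
git branch part-1 HEAD~1
# The remaining work becomes a follow-up branch that targets part-1.
git checkout -q -b part-2 big-feature
git log --oneline part-1
```

Opening part-1 against main and part-2 against part-1 mirrors the "mergeable parts first" idea; once part-1 merges, part-2 can be retargeted at main.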

Remote Pair Programming

You can’t overhear what others are doing or learn something about your colleagues through gossip over lunch break when working remotely. So after being stuck for quite a bit, terceiro suggested that we try pair programming.

After our first remote pair programming session, I’d say it’s hardly any different from pair programming in person. We shared the same terminal, looked at the same code and discussed things just like people standing side by side.

Through our pair programming session, I found out that I had a bad habit. I didn’t run tests on my code that often, so when I had failing tests that hadn’t failed before, I spent more time debugging than I should have. Pair programming gave me insight into how others work, and I think little improvements go a long way.

Video Report of the Internship

This was initially made for DebConf 2019.

Week 6

And then I took almost a week off, so my week 7 was delayed.

Week 7

I found out that I can make small merge requests and list the merge requests they depend on. GitLab will automatically handle the rest for me once a request is merged.

  • finally finished breaking down my large merge request
  • added the history section

Worse Than Failure: An Indispensable Guru


Business Intelligence is the oxymoron that makes modern capitalism possible. In order for a company the size of a Fortune 500 to operate, key people have to know key numbers: how the finances are doing, what sales looks like, whether they're trending on target to meet their business goals or above or below that mystical number.

Once upon a time, Initech had a single person in charge of their Business Intelligence reports. When that person left for greener pastures, the company had a problem. They had no idea how he'd done what he did, just that he'd gotten numbers to people who'd needed them on time every month. There was no documentation about how he'd generated the numbers, nothing to pass on to his successor. They were entirely in the dark.

Recognizing the weakness of having a single point of failure, they set up a small team to create and manage the BI reporting duties and to provide continuity in the event that somebody else were to leave. This new team consisted of four people: Charles, a senior engineer; Liam, his junior sidekick; and two business folks who could provide context around what numbers were needed where and when.

Charles knew Excel. Intimately. Charles could make Excel do frankly astonishing things. Our submitter has worked in IT for three decades, and yet has never seen the like: spreadsheets so chock-full with array formulae, vlookups, hlookups, database functions, macros, and all manner of cascading sheets that they were virtually unreadable. Granted, Charles also had Microsoft Access. However, to Charles, the only thing Access was useful for was the initial downloading of all the raw data from the IBM AS/400 mainframe. Everything else was done in Excel.

Nobody doubted the accuracy of Charles' reports. However, actually running a report involved getting Excel primed and ready to go, hitting the "manual recalculate" button, then sitting back and waiting 45 minutes for the various formulae and macros to do all the things they did. On a good day, Charles could run five, maybe six reports. On a bad day? Three, at best.

By contrast, Liam was very much the "junior" role. He was younger, and did not have the experience that Charles did. That said, Liam was a smart cookie. He took one look at the spreadsheet monstrosity and knew it was a sledgehammer to crack a walnut. Unfortunately, he was the junior member of the engineering half of the team. His objections were taken as evidence of his inexperience, not his intelligence, and his suggestions were generally overlooked.

Eventually, Charles also left for bigger and brighter things, and Liam inherited all of his reports. Almost before the door had stopped swinging, Liam solicited our submitter's assistance in recreating just one of Charles' reports in Access. This took a combined four days; it mostly consisted of the submitter asking "So, Sheet 1, cell A1 ... where does that number come from?", and Liam pointing out the six other sheets they needed to pivot, fold, spindle, and mutilate in order to calculate the number. "Right, so, Sheet 1, cell A2 ... where does that one come from?" ...

Finally, it was done, and the replacement was ready to test. They agreed to run the existing report alongside the new one, so they could determine that the new reports were producing the same output as the old ones. Liam pressed "manual recalculate" while our submitter did the honors of running the new Access report. Thirty seconds later, the Access report gave up and spat out numbers.

"Damn," our submitter muttered. "Something's wrong, it must have died or aborted or something."

"I dunno," replied Liam, "those numbers do look kinda right."

Forty minutes later, when Excel finally finished running its version, lo and behold the outputs were identical. The new report was simply three orders of magnitude faster than the old one.

Enthused by this success, they not only converted all the other reports to run in Access, but also expanded them to run Region- and Area- level variants, essentially running the report about 54 times in the same time it took the original report to run once. They also set up an automatic distribution process so that the reports were emailed out to the appropriate department heads and sales managers. Management was happy; business was happy; developers were happy.

"Why didn't we do this sooner?" was the constant refrain from all involved.

Liam was able to give our submitter the real skinny: "Charles used the long run times to prove how complex the reports were, and therefore, how irreplaceable he was. 'Job security,' he used to call it."

To this day, Charles' LinkedIn profile shows that he was basically running Initech. Liam's has a little more humility about the whole matter. Which just goes to show you shouldn't undersell your achievements in your resume. On paper, Charles still looks like a genius who single-handedly solved all the BI issues in the whole company.


Planet Debian: Daniel Lange: Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

User-ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 Signaturen entfernt
User-ID "Robert J. Hansen <rob@enigmail.net>": 49704 Signaturen entfernt
User-ID "Robert J. Hansen <rob@hansen.engineering>": 49701 Signaturen entfernt

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely useable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).
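For instance (a sketch, not from the original post), the keyring-preserving variant amounts to dropping the --no-default-keyring / --keyring options; the throwaway GNUPGHOME and freshly generated key below merely stand in for a real keyring that already contains the affected key:

```shell
# Sketch: run the same clean/save against a normal keyring so that
# non-self signatures from keys you already hold are kept. The
# throwaway GNUPGHOME and test key substitute for your real keyring.
set -e
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never
# Same edit-key command sequence as above, minus the keyring options:
gpg --batch --quiet --edit-key test@example.com clean save quit
```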

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystem and the message board over pgp key servers keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.
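That acceptance rule can be modeled in a few lines (a toy sketch with invented fingerprints, not actual keyserver code):

```shell
# Toy sketch of the proposed rule: accept a new key only if at least
# one of its signers is already in the strong set. Fingerprints and
# the strong set are invented placeholder data.
strong_set="FPR_ALICE FPR_BOB"

accept_key() {  # args: signer fingerprints found on the candidate key
  for fpr in "$@"; do
    case " $strong_set " in *" $fpr "*) return 0;; esac
  done
  return 1
}

# Signed by Alice (in the strong set) plus an unknown key: accepted.
accept_key FPR_ALICE FPR_MALLORY && echo "accepted"
# Signed only by unknown keys: rejected, so flooding the network with
# throwaway keys cannot extend the strong set.
accept_key FPR_MALLORY || echo "rejected"
```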

Update

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

  * gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
  * gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
  * gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
  * gpg: New import option "self-sigs-only".
  * gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
  * dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
  * dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
  * dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
  * dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
  * gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are all but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591) which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them, bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.

Planet DebianDaniel Lange: Security is hard, open source security unnecessarily harder

Now it is a commonplace that security is hard. It involves advanced mathematics, and a single, tiny mistake or omission in the implementation can spoil everything.

And the only sane approach to IT security is open source security. You need to assess the algorithms and their implementation, and you need to be able to completely verify that implementation. You simply can't unless you have the code and can compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

IRC wallop on hackint

Now, securely encrypting a mass-communication medium like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless[1].
Still, it makes wiretapping harder, and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. A cleaner code for the chosen few. Duh.

I predict the re-implementation and gradual-improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. They are most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. They are products, not reference implementations with excellent test servers and well-documented test suites. "Use my App.", "No, use MY App!!!".
  3. They only secure chat or email. So they address the VC-powered ("next WhatsApp") mass-adoption markets but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption, the heavy lifting is still done on servers. We need sane libraries and APIs. There is no App for that.

So we need a new development, a new codebase, a new open source product. Sadly, the Core Infrastructure Initiative so far only funds existing open source projects in dire need and bug hunting.

That basically makes the bad solutions of today a bit more secure and ensures maintenance of decades-old, crufty code bases. In doing so it extends the suffering of everybody using today's inadequate solutions.

That's inevitable until we have a better stack, but we need to look into getting rid of gpg and OpenSSL and replacing them with something new. Something designed well from the ground up, technically and from a user-experience perspective.

Now who's in for a five-year funding plan? $3m[2] annually. ROCE 0. But a very good chance to get the OBE awarded.

Keep calm and enjoy the silence

Updates:

21.07.19: A current essay on "The PGP problem" is making the rounds and lists some valid issues with the file format, the RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition right. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another attempt to make GPG usable for more people than a small technical elite), he specifically created different identities for himself as a person and for his two main email addresses. Two days later he had to invalidate his new gpg key and back off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature-management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably"[3] still good enough to use. To me, a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. It doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two-hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.

  1. Quakenet removed that article, citing "near constant misrepresentation of the presented argument", sometime in 2018. The contents (not misrepresented) are still valid, so I have added an archive.org Wayback Machine link instead.

  2. The estimate was $2m until the end of 2018. The longer we wait, the more expensive it'll get. And, obviously, ever harder. E.g. nobody needed to care about side-channel attacks on big.LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices.

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Planet DebianGiovanni Mascellani: Bootstrappable Debian BoF

Greetings from DebConf 19 in Curitiba! Just a quick reminder that I will run a Bootstrappable Debian BoF on Tuesday 23rd, at 13.30 Brasilia time (which is 16.30 UTC, if I am not mistaken). If you are curious about bootstrappability in Debian, why we want it and where we are right now, you are welcome to come in person if you are at DebConf, or to follow the streaming.

,

Planet DebianVincent Bernat: A Makefile for your Go project (2019)

My most loathed feature of Go was the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. I was not alone and people devised tools or crafted their own Makefile to avoid organizing their code around GOPATH.

Fortunately, since Go 1.11, it is possible to use Go’s modules to manage dependencies without relying on GOPATH. First, you need to convert your project to a module:1

$ go mod init hellogopher
go: creating new go.mod: module hellogopher
$ cat go.mod
module hellogopher

Then, you can invoke the usual commands, like go build or go test. The go command resolves imports by using versions listed in go.mod. When it runs into an import of a package not present in go.mod, it automatically looks up the module containing that package using the latest version and adds it.

$ go test ./...
go: finding github.com/spf13/cobra v0.0.5
go: downloading github.com/spf13/cobra v0.0.5
?       hellogopher     [no test files]
?       hellogopher/cmd [no test files]
ok      hellogopher/hello       0.001s
$ cat go.mod
module hellogopher

require github.com/spf13/cobra v0.0.5

If you want a specific version, you can either edit go.mod or invoke go get:

$ go get github.com/spf13/cobra@v0.0.4
go: finding github.com/spf13/cobra v0.0.4
go: downloading github.com/spf13/cobra v0.0.4
$ cat go.mod
module hellogopher

require github.com/spf13/cobra v0.0.4

Add go.mod to your version control system. Optionally, you can also add go.sum as a safety net against overridden tags. If you really want to vendor the dependencies, you can invoke go mod vendor and add the vendor/ directory to your version control system.

Thanks to modules, Go’s dependency management is now, in my opinion, on a par with other languages, like Ruby. While it is possible to run day-to-day operations—building and testing—with only the go command, a Makefile can still be useful to organize common tasks, a bit like Python’s setup.py or Ruby’s Rakefile. Let me describe mine.

Using third-party tools

Most projects need some third-party tools for testing or building. We can either expect them to be already installed or compile them on the fly. For example, here is how code linting is done with Golint:

BIN = $(CURDIR)/bin
$(BIN):
    @mkdir -p $@
$(BIN)/%: | $(BIN)
    @tmp=$$(mktemp -d); \
       env GO111MODULE=off GOPATH=$$tmp GOBIN=$(BIN) go get $(PACKAGE) \
        || ret=$$?; \
       rm -rf $$tmp ; exit $$ret

$(BIN)/golint: PACKAGE=golang.org/x/lint/golint

GOLINT = $(BIN)/golint
lint: | $(GOLINT)
    $(GOLINT) -set_exit_status ./...

The first block defines how a third-party tool is built: go get is invoked with the package name matching the tool we want to install. We do not want to pollute our dependency management and therefore, we are working in an empty GOPATH. The generated binaries are put in bin/.

The second block extends the pattern rule defined in the first block by providing the package name for golint. Additional tools can be added by just adding another line like this.

The last block defines the recipe to lint the code. The default linting tool is the golint built by the first block, but it can be overridden with make GOLINT=/usr/bin/golint.
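To make that extension mechanism concrete, here is a sketch of how the coverage tools used later in this post could hook into the same pattern rule; the import paths shown are the customary upstream locations at the time of writing, so double-check them before use:

```make
# One line per extra tool, reusing the $(BIN)/% pattern rule defined above
$(BIN)/gocovmerge: PACKAGE=github.com/wadey/gocovmerge
$(BIN)/gocov:      PACKAGE=github.com/axw/gocov/gocov
$(BIN)/gocov-xml:  PACKAGE=github.com/AlekSi/gocov-xml

GOCOVMERGE = $(BIN)/gocovmerge
GOCOV      = $(BIN)/gocov
GOCOVXML   = $(BIN)/gocov-xml
```

Each tool then gets built on demand, into bin/, the first time a rule lists it as a prerequisite.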

Tests

Here are some rules to help running tests:

TIMEOUT  = 20
PKGS     = $(or $(PKG),$(shell env GO111MODULE=on $(GO) list ./...))
TESTPKGS = $(shell env GO111MODULE=on $(GO) list -f \
            '{{ if or .TestGoFiles .XTestGoFiles }}{{ .ImportPath }}{{ end }}' \
            $(PKGS))

TEST_TARGETS := test-default test-bench test-short test-verbose test-race
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race
$(TEST_TARGETS): test
check test tests: fmt lint
    go test -timeout $(TIMEOUT)s $(ARGS) $(TESTPKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented; otherwise, the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you need some additional tools.

COVERAGE_MODE    = atomic
COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
COVERAGE_HTML    = $(COVERAGE_DIR)/index.html
test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❶
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
test-coverage: fmt lint test-coverage-tools
    @mkdir -p $(COVERAGE_DIR)/coverage
    @# ❷
    @for pkg in $(TESTPKGS); do \
        go test \
            -coverpkg=$$(go list -f '{{ join .Deps "\n" }}' $$pkg | \
                    grep '^$(MODULE)/' | \
                    tr '\n' ',')$$pkg \
            -covermode=$(COVERAGE_MODE) \
            -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
     done
    @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
    @go tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
    @$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)

First, we define some variables to let the user override them. In ❶, we require the following tools—built like golint previously:

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format, for Jenkins;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

In ❷, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg, by using go list to get the list of dependencies for the tested package and keeping only our own.

Build

Another useful recipe is to build the program. While this could be done with just go build, it is not uncommon to have to specify build tags, additional flags, or to execute supplementary build steps. In the following example, the version is extracted from Git tags. It will replace the value of the Version variable in the hellogopher/cmd package.

VERSION ?= $(shell git describe --tags --always --dirty --match=v* 2> /dev/null || \
            echo v0)
all: fmt lint | $(BIN)
    go build \
        -tags release \
        -ldflags '-X hellogopher/cmd.Version=$(VERSION)' \
        -o $(BIN)/hellogopher main.go

The recipe also runs code formatting and linting.


The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks, including fancy output and integrated help!


  1. For an application not meant to be used as a library, I prefer to use a short name instead of a name derived from a URL, like github.com/vincentbernat/hellogopher. It makes it easier to read import sections:

    import (
            "fmt"
            "os"
    
            "hellogopher/cmd"
    
            "github.com/pkg/errors"
            "github.com/spf13/cobra"
    )
    

    ↩︎

Planet DebianBits from Debian: DebConf19 starts today in Curitiba

DebConf19 logo

DebConf19, the 20th annual Debian Conference, is taking place in Curitiba, Brazil from July 21 to 28, 2019.

Debian contributors from all over the world have come together at Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, to participate and work in a conference exclusively run by volunteers.

Today the main conference starts, with over 350 attendees expected and 121 activities scheduled, including 45- and 20-minute talks, team meetings ("BoFs"), workshops, a job fair and a variety of other events.

The full schedule at https://debconf19.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming, available from the DebConf19 website, of the events happening in the three talk rooms: Auditório (the main auditorium), Miniauditório and Sala de Videoconferencia. Or you can join the conversation about what is happening in the talk rooms: #debconf-auditorio, #debconf-miniauditorio and #debconf-videoconferencia (all on the OFTC IRC network).

You can also follow the live coverage of news about DebConf19 on https://micronews.debian.org or the @debian profile in your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, the Welcome team and the Anti-Harassment team) are available to help both on-site and remote participants get the best experience at the conference and to find solutions to any issue that may arise. See the page about the Code of Conduct on the DebConf19 website for more details.

Debian thanks the commitment of numerous sponsors to support DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

TEDGetting ready for TEDSummit 2019: Photo gallery

TEDSummit banners are hung at the entrance of the Edinburgh Convention Centre, our home for the week. (Photo: Bret Hartman / TED)

TEDSummit 2019 officially kicks off today! Members of the TED community from 84 countries — TEDx’ers, TED Translators, TED Fellows, TED-Ed Educators, past speakers and more — have gathered in Edinburgh, Scotland to dream up what’s next for TED. Over the next week, the community will share adventures around the city, more than 100 Discovery Sessions and, of course, seven sessions of TED Talks.

Below, check out some photo highlights from the lead-up to TEDSummit and pre-conference activities. (And view our full photostream here.)

It takes a small (and mighty) army to get the theater ready for TED Talks.

(Photo: Bret Hartman / TED)

(Photo: Ryan Lash / TED)

(Photo: Bret Hartman / TED)

TED Translators get the week started with a trip to Edinburgh Castle, complete with high tea in the Queen Anne Tea Room, and a welcome reception.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

A bit of Scottish rain couldn’t stop the TED Fellows from enjoying a hike up Arthur’s Seat. Weather wasn’t a problem at a welcome dinner.

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

TEDx’ers kick off the week with workshops, panel discussions and a welcome reception.

(Photo: Dian Lofton / TED)

(Photo: Dian Lofton / TED)

(Photo: Ryan Lash / TED)

It’s all sun and blue skies for the speaker community’s trip to Edinburgh Castle and reception at the Playfair Library.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

Cheers to an amazing week ahead!

(Photo: Ryan Lash / TED)

Planet DebianHolger Levsen: 20190721-piuparts-was-not-down

piuparts.debian.org wasn't down for maintenance

I hadn't shut down piuparts.debian.org for maintenance; I just said so to make you attend my talk, as my last call for help at DebConf17 was attended by only 3 people...

So please join the session about piuparts(d.o.) today at 14:30 local time.

Please help help help!

Planet DebianSylvain Beucler: Planet clean-up

planet.gnu.org logo

I did some clean-up / resync on the planet.gnu.org setup :)

  • Fix issue with newer https websites (SNI)
  • Re-sync Debian base config, scripts and packaging, update documentation; the planet-venus package is still in bad shape though, it's not officially orphaned but the maintainer is unreachable AFAICS
  • Fetch all Savannah feeds using https
  • Update feeds with redirections, which seem to mess up caching

Planet DebianDirk Eddelbuettel: RPushbullet 0.3.2

RPushbullet demo

A new release 0.3.2 of the RPushbullet package is now on CRAN. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts like the one to the left to your browser, phone, tablet, … – or all at once.

This is the first new release in almost 2 1/2 years, and it once again benefits greatly from contributed pull requests by Colin (twice!) and Chan-Yub – see below for details.

Changes in version 0.3.2 (2019-07-21)

  • The Travis setup was robustified with respect to the token needed to run tests (Dirk in #48)

  • The configuration file is now readable only by the user (Colin Gillespie in #50)

  • At startup initialization is now more consistent (Colin Gillespie in #53 fixing #52)

  • A new function to fetch prior posts was added (Chanyub Park in #54).

Courtesy of CRANberries, there is also a diffstat report for this release. More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TEDA first glimpse at the TEDSummit 2019 speaker lineup

At TEDSummit 2019, more than 1,000 members of the TED community will gather for five days of performances, workshops, brainstorming, outdoor activities, future-focused discussions and, of course, an eclectic program of TED Talks — curated by TED Global curator Bruno Giussani, pictured above. (Photo: Marla Aufmuth / TED)

With TEDSummit 2019 just two months away, it’s time to unveil the first group of speakers that will take to the stage in Edinburgh, Scotland, from July 21-25.

Three years ago, more than 1,000 members of the TED global community convened in Banff, Canada, for the first-ever TEDSummit. We talked about the fracturing state of the world, the impact of technology and the accelerating urgency of climate change. And we drew wisdom and inspiration from the speakers — and from each other.

These themes are equally pressing today, and we’ll bring them to the stage in novel, more developed ways in Edinburgh. We’ll also address a wide range of additional topics that demand attention — looking not only for analysis but also antidotes and solutions. To catalyze this process, half of the TEDSummit conference program will take place outside the theatre, as experts host an array of Discovery Sessions in the form of hands-on workshops, activities, debates and conversations.

Check out a glimpse of the lineup of speakers who will share their future-focused ideas below. Some are past TED speakers returning to give new talks; others will step onto the red circle for the first time. All will help us understand the world we currently live in.

Here we go! (More will be added in the coming weeks):

Anna Piperal, digital country expert

Bob Langert, corporate changemaker

Carl Honoré, author

Carole Cadwalladr, investigative journalist

Diego Prilusky, immersive media technologist

Eli Pariser, organizer and author

Fay Bound Alberti, historian

George Monbiot, thinker and author

Hajer Sharief, youth inclusion activist

Howard Taylor, children's safety advocate

Jochen Wegner, editor and dialogue creator

Kelly Wanser, geoengineering expert

Ma Yansong, architect

Marco Tempest, technology magician

Margaret Heffernan, business thinker

María Neira, global public health official

Mariana Lin, AI personalities writer

Mariana Mazzucato, economist

Marwa Al-Sabouni, architect

Nick Hanauer, capitalism redesigner

Nicola Jones, science writer

Nicola Sturgeon, First Minister of Scotland

Omid Djalili, comedian

Patrick Chappatte, editorial cartoonist

Pico Iyer, global author

Poet Ali, philosopher and poet

Rachel Kleinfeld, violence scholar

Raghuram Rajan, former central banker

Rose Mutiso, energy for Africa activist

Sandeep Jauhar, cardiologist

Sara-Jane Dunn, computational biologist

Sheperd Doeleman, black hole scientist

Sonia Livingstone, social psychologist

Susan Cain, quiet revolutionary

Tim Flannery, carbon-negative tech scholar

Tshering Tobgay, former Prime Minister of Bhutan

 

With them, a number of artists will also join us at TEDSummit, including:

Djazia Satour, singer

ELEW, pianist and DJ

KT Tunstall, singer and songwriter

Min Kym, virtuoso violinist

Radio Science Orchestra, space-music orchestra

Yilian Cañizares, singer and songwriter

 

Registration for TEDSummit is open for active members of our various communities: TED conference members, Fellows, past TED speakers, TEDx organizers, Educators, Partners, Translators and more. If you’re part of one of these communities and would like to attend, please visit the TEDSummit website.

TED7 things you can do in Edinburgh and nowhere else

Edinburgh, Scotland will host TEDSummit this summer, from July 21-25. The city was selected because of its special blend of history, culture and beauty, and for its significance to the TED community (TEDGlobal 2011, 2012 and 2013 were all held there). We asked longtime TEDster Ellen Maloney to share some of her favorite activities that showcase Edinburgh’s unique flavor.

 

Edinburgh ranges from the Castle that dominates the skyline to Arthur’s Seat, an extinct volcano with hiking trails offering panoramic views of the city. Having lived here for most of my adult life, I am still discovering captivating and quirky places to explore. You probably won’t find the sites listed below on the typical “top things to do in Edinburgh” rundowns, but I recommend them to people coming for the upcoming TEDSummit 2019 who love the idea of experiencing this lovely city through a different lens.

St. Cecilia’s Hall and Music Museum

Originally built in 1762 by the University of Edinburgh’s Music Society, this was Scotland’s first venue intentionally built to be a concert hall. Its Music Museum has an impressive collection of musical instruments from around the globe, and it’s claimed to be the only place in the world where you can listen to 18th-century instruments played in an 18th-century setting — some of its ancient harpsichords are indeed playable. Learn how keyboards were once status symbols, and how technology has changed the devices that humans use to make sounds. The museum is open to the public, and the hall regularly hosts concerts and other events.

Innocent Railway Tunnel

This 19th-century former railway tunnel runs beneath the city for 1,696 feet (about 520 meters). One of the first railway tunnels in the United Kingdom and part of the first public railway tunnel in Scotland, it was in use from 1831 until 1968. Today it’s open to walkers and cyclists and connects to a lovely outdoor cycleway. The origin of its name is a mystery, but one theory is that it alludes to the fact that no fatal accidents occurred during its construction. Visitors, however, will find that walking through the tunnel doesn’t feel quite so benign — it’s cold and the wind whistles through.

The Library of Mistakes

This free library is dedicated to one subject and one subject only: the human behavior and historical patterns that led to world-shaking financial mistakes. It contains research materials, photos and relics that tell the stories of the bad decisions that shaped our world. Yes, you can read about well-known wrongdoers such as Charles Ponzi, but there are plenty of lesser-known schemes and people to discover. For instance, you can learn the story behind the line “bought and sold for English gold” from the poem by Scotsman Robert Burns. While the library is free and open to the public, viewing is strictly by appointment, so you’ll need to book ahead.

Blair Street Vaults

Just off the Royal Mile is Blair Street, which leads to an underground world of 19 cavernous vaults. These lie beneath the bridge that was built in 1788 to connect the Southside of the city with the university area. The archways were once home to a bustling marketplace of cobblers, milliners and other vendors, but they were eventually taken over by less salubrious forces. Their darkness made them an attractive place for anyone who didn’t want to be seen, including thieves and the 19th-century murderers William Burke and William Hare, who hid corpses there; a convenient opening led directly to the medical school where they sold the bodies for dissection. Sometime in the 19th century, the vaults were declared too dangerous for use and the entryway was bricked up. Today they can be visited by tour. Be warned: paranormal activity has been reported there.

Sanctuary Stones and Holyrood Abbey

At the foot of the Royal Mile lies Abbey Strand, which leads down to the gates of Holyrood Palace (the Queen’s primary royal residence in Scotland). Look carefully on the road at Abbey Strand, and you will see three stones marked with a golden “S” on them. These stones mark part of what used to be a five-mile radius known as Abbey Sanctuary, where criminals could seek refuge from civil law under the auspices of Holyrood Abbey. In the 16th century, when land came under royal control, sanctuary was reserved for financial debtors. In 1880, a change in law meant debtors could no longer be jailed, so the sanctuary was no longer needed. As you walk the Royal Mile, be sure to appreciate these remnants of Scotland’s history. The Abbey, now a scenic ruin, can be accessed through Holyrood Palace.

White Stuff fitting rooms

This may look like an ordinary store — and yes, you can purchase clothes, home goods and gifts here —  until you head upstairs to the 10 fitting rooms. Open the door to your cubicle and instead of the usual unflattering mirror and bad lighting, you’ll find individually themed rooms. From a 1940s kitchen pantry stocked with cans of gravy and marrowfat peas to a room filled with cuddly toys, these are fitting rooms that you’ll actually want to spend time in (there is room for you to try on clothes). Most of the rooms were designed by AMD Interior Architects, but a few were winning designs from a school competition. The crafty should take a break in the “meet and make” area where they can enjoy arts and crafts while sipping tea from vintage teacups.

Jupiter Artland

Just 10 miles outside of Edinburgh, Jupiter Artland is a sculpture park set among hundreds of acres of gardens and woodlands. It’s located on the grounds of Bonnington House, a 17th-century Jacobean Manor house. While visitors are provided with a map of different artworks, there is no set route to follow. Turn left, turn right, go backwards, go forwards. Look out for the peacocks and geese. Be amazed, be delighted, be stunned. A visit to Jupiter Artland is a mini-adventure in itself.

TEDSummit is a celebration of the different communities and people that make up TED and help spread its world-changing ideas. Learn more about TEDSummit 2019. And to find even more to do in Edinburgh and Scotland, visit Scotland.org.

 

,

Planet DebianJose M. Calhariz: New release of switchconf 0.0.16

I had not touched switchconf for a long time. Being at DebCamp19 was a good opportunity to work on it.

I have moved the development of switchconf from a private svn repo to a git repo on salsa: https://salsa.debian.org/debian/switchconf I also created a virtual host, http://software.calhariz.com, where I will publish the sources of the software that I take care of. I updated the Makefile for the git repo and released version 0.0.16.

You can download the latest version of switchconf from here: http://software.calhariz.com/switchconf

Planet DebianJohn Goerzen: Alas, Poor PGP

Over in The PGP Problem, there’s an extended critique of PGP (and also specifics of the GnuPG implementation) in a modern context. Robert J. Hansen, one of the core GnuPG developers, has an interesting response:

First, RFC4880bis06 (the latest version) does a pretty good job of bringing the crypto angle to a more modern level. There’s a massive installed base of clients that aren’t aware of bis06, and if you have to interoperate with them you’re kind of screwed: but there’s also absolutely nothing prohibiting you from saying “I’m going to only implement a subset of bis06, the good modern subset, and if you need older stuff then I’m just not going to comply.” Sequoia is more or less taking this route — more power to them.

Second, the author makes a couple of mistakes about the default ciphers. GnuPG has defaulted to AES for many years now: CAST5 is supported for legacy reasons (and I’d like to see it dropped entirely: see above, etc.).

Third, a couple of times the author conflates what the OpenPGP spec requires with what it permits, and with how GnuPG implements it. Cleaner delineation would’ve made the criticisms better, I think.

But all in all? It’s a good criticism.

The problem is, where does that leave us? I found the suggestions in the original author’s article (mainly around using IM apps such as Signal) to be unworkable in a number of situations.

The Problems With PGP

Before moving on, let’s tackle some of the problems identified.

The first is an assertion that email is inherently insecure and can’t be made secure. There are some fairly convincing arguments to be made on that score; as it currently stands, there is little ability to hide metadata from prying eyes. And any format that is capable of talking on the network — as HTML is — is just begging for vulnerabilities like EFAIL.

But PGP isn’t used just for this. In fact, one could argue that sending a binary PGP message as an attachment gets around a lot of that email clunkiness — and would be right, at the expense of potentially more clunkiness (and forgetfulness).

What about the web-of-trust issues? I’m in agreement. I have never really used the WoT to authenticate a key; only in rare instances have I trusted an introducer whom I know personally and, from personal experience, know to be stringent in signing keys. But this is hardly a problem for PGP alone: every encryption tool mentioned has the problem of validating keys. The author suggests Signal. Signal has some very strong encryption, but you have to have a phone number and a smartphone to use it. And when setting up a remote contact, Signal’s authentication is only as strong as SMS. Let that disheartening reality sink in for a bit. (A little social engineering could probably get many contacts to accept a hijacked SIM in Signal as well.)

How about forward secrecy? This is protection in case a private key is compromised in the future, because an ephemeral session key (or more than one) is negotiated on each communication, and the secret key is never stored. This is a great plan, but it really requires synchronous communication (or something approaching it) between the sender and the recipient. It can’t be used if I want to, for instance, burn a backup onto a Bluray and give it to a friend for offsite storage without giving the friend access to its contents. There are many, many situations where synchronous key negotiation is impossible, so although forward secrecy is great and a nice enhancement, we should not assume it to be always applicable.
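To make the "ephemeral session key" idea concrete, here is a toy Diffie-Hellman exchange in Python. This is a sketch only: the prime and generator below are illustrative and far too weak for real use; real protocols use vetted groups or elliptic curves, and a proper key-derivation step.

```python
# Toy Diffie-Hellman illustrating ephemeral session keys (forward secrecy).
# Parameters are illustrative only -- never use anything like this in practice.
import secrets

P = 2**127 - 1   # a small Mersenne prime, insecure for real cryptography
G = 3

def ephemeral_keypair():
    """Generate a fresh secret/public pair for one conversation."""
    secret = secrets.randbelow(P - 2) + 2
    return secret, pow(G, secret, P)

a_secret, a_public = ephemeral_keypair()
b_secret, b_public = ephemeral_keypair()

# Both sides derive the same session key from the exchanged public values.
a_session = pow(b_public, a_secret, P)
b_session = pow(a_public, b_secret, P)
assert a_session == b_session

# Forward secrecy: once the ephemeral secrets are discarded, a later
# compromise of long-term keys cannot recover this session key.
del a_secret, b_secret
```

The catch, as noted above, is that this exchange needs both parties online (or nearly so), which is exactly what a Bluray handed to a friend cannot provide.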

The saltpack folks have a more targeted list of PGP message format problems. Both they and the article I link above complain about the gpg implementation of PGP, and there is no doubt some truth to these complaints. Among them is a complaint that gpg can emit unverified data. Well, sure: it has a streaming mode, and it exits with a proper error code and warnings if verification fails at the end — just as gzcat does. This is a part of the API that the caller needs to be aware of. It sounds like some callers weren’t handling this properly, but that’s just a consequence of being a streaming tool.
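The caller-side discipline this implies can be sketched in a few lines: consume the stream, but only hand the data onward once the tool has exited cleanly. The helper name here is invented; gpg's actual flags are documented in its manual.

```python
# Sketch: only trust a streaming tool's output after a clean exit.
import subprocess

def run_streaming_checked(argv):
    """Run a streaming tool, returning its output only on a clean exit."""
    proc = subprocess.run(argv, capture_output=True)
    if proc.returncode != 0:
        # Anything already streamed must be treated as unverified.
        raise RuntimeError(proc.stderr.decode(errors="replace") or "tool failed")
    return proc.stdout
```

A caller might then use something like `run_streaming_checked(["gpg", "--batch", "--decrypt", "msg.gpg"])`; buffering the whole output like this trades gpg's streaming benefit for safety, which is the trade-off the complaint is really about.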

Suggested Solutions

The Signal suggestion is perfectly reasonable in a lot of cases. But the suggestion to use WhatsApp — a proprietary application from a corporation known to brazenly lie about privacy — is suspect. It may have great crypto, but if it uploads your address book to a suspicious company, is it a great app?

Magic Wormhole is a pretty neat program I hadn’t heard of before. But it should be noted that it’s written in Python, so it’s unlikely to be using locked memory.

How about backup encryption? Backups are a lot more than just filesystem data; maybe somebody has a 100GB MySQL dump or a zfs send stream. How should this be encrypted?

My current estimate is that there’s no magic solution right now. The Sequoia PGP folks seem to have a good thing going, as does Saltpack. Both projects are early in development, so as a privacy-concerned person, should you trust them more than GPG with appropriate options? That’s really hard to say.

Additional Discussions

Planet DebianGunnar Wolf: DebConf19 Key Signing Party: Your personalized map is ready!

When facing a large key signing party, even in a group where you are already well connected socially, you often lose track of whom you have already cross-signed with, and of who is farther away from you (in the interest of better weaving the Web of Trust)...

So, with Samuel having announced the DebConf19 KSP fingerprints list, I hacked a bit to improve the scripts I used in previous years, and... Behold!

The DC19 KSP personalized maps!

This time it's even color-coded! People you have not cross-signed with are in light grey. People whose keys have been signed by you are presented with blue text. People that have signed your key are presented with green background. Of course, people you have cross-signed with have blue text and green background :-]

The graph is up to date as of early today, pulling the data from keys.gnupg.net. Sorry for the huge size, but it's the only way I found to make it useful for seeing both the big picture and the detailed information. Of course, you can zoom in and out at will!

Planet DebianBits from Debian: DebConf19 invites you to Debian Open Day at the Federal University of Technology - Paraná (UTFPR), in Curitiba

DebConf19 logo

DebConf, the annual conference for Debian contributors and users interested in improving the Debian operating system, will be held at the Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28, 2019. The conference is preceded by DebCamp from July 14 to 19, and the DebConf19 Open Day on July 20.

The Open Day, Saturday, 20 July, is targeted at the general public. Events of interest to a wider audience will be offered, ranging from topics specific to Debian to the greater Free Software community and maker movement.

The event is a perfect opportunity for interested users to meet the Debian community, for Debian to broaden its community, and for the DebConf sponsors to increase their visibility.

Less purely technical than the main conference schedule, the events on Open Day will cover a large range of topics from social and cultural issues to workshops and introductions to Debian.

The detailed schedule of the Open Day's events includes events in English and Portuguese. Some of the talks are:

  • "The metaverse, gaming and the metabolism of cities" by Bernelle Verster
  • "O Projeto Debian quer você!" by Paulo Henrique de Lima Santana
  • "Protecting Your Web Privacy with Free Software" by Pedro Barcha
  • "Bastidores Debian - Entenda como a distribuição funciona" by Joao Eriberto Mota Filho
  • "Caninos Loucos: a plataforma nacional de Single Board Computers para IoT" by geonnave
  • "Debian na vida de uma Operadora de Telecom" by Marcelo Gondim
  • "Who's afraid of Spectre and Meltdown?" by Alexandre Oliva
  • "New to DebConf BoF" by Rhonda D'Vine

During the Open Day, there will also be a Job Fair with booths from several of our sponsors, a workshop about the Git version control system, and a Debian installfest for attendees who would like help installing Debian on their machines.

Everyone is welcome to attend. As with the rest of the conference, attendance is free of charge, but registration on the DebConf19 website is highly recommended.

The full schedule for the Open Day's events and the rest of the conference is at https://debconf19.debconf.org/schedule, and the video streaming will be available on the DebConf19 website.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details.

Debian thanks the numerous sponsors for their commitment to DebConf19, particularly its Platinum Sponsors: Infomaniak, Google and Lenovo.

Planet DebianMichael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.
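As a concrete illustration of the mechanism, a package declares its interest in a path in its dpkg triggers control file; whenever another package installs files under that path, dpkg invokes the interested package's postinst with the `triggered` argument. The fragment below is a sketch following dpkg's trigger interface; man-db's real control files may differ in detail.

```
# debian/triggers of the interested package (e.g. man-db)
interest-noawait /usr/share/man
```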

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers can generally make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.
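The lazy approach can be sketched as follows: instead of rebuilding a cache at install time, the consumer rebuilds it on first use, and only when it is out of date. The function and file names here are invented for illustration.

```python
# Sketch of lazy hook evaluation: rebuild a cache only when it is
# missing or older than the content it indexes.
import os

def ensure_cache(source_dir, cache_file, rebuild):
    """Rebuild cache_file from source_dir only if it is missing or stale."""
    src_mtime = max(
        (entry.stat().st_mtime for entry in os.scandir(source_dir)),
        default=0.0,
    )
    try:
        cache_mtime = os.stat(cache_file).st_mtime
    except FileNotFoundError:
        cache_mtime = -1.0
    if cache_mtime < src_mtime:
        # Only pay the cost when the content is actually consulted.
        rebuild(source_dir, cache_file)
```

A consumer such as a hypothetical apropos implementation would call `ensure_cache(...)` before each query; updates that install new manpages then cost nothing until someone actually searches.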

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
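The /etc/passwd example can be sketched as follows. The package metadata format here is invented for illustration; the point is that declarative data lets the package manager batch the changes.

```python
# Sketch of the declarative alternative: each package declares the system
# users it needs, and the package manager collapses them into a single
# user-database update instead of one hook run per package.
def collect_system_users(packages):
    """Gather all declared users, deduplicated, preserving order."""
    wanted = []
    for meta in packages.values():
        for user in meta.get("system_users", []):
            if user not in wanted:
                wanted.append(user)
    return wanted

packages = {
    "postgresql": {"system_users": ["postgres"]},
    "nginx": {"system_users": ["www-data"]},
    "openssh-server": {"system_users": ["sshd"]},
}

# One pass over /etc/passwd covers all five (here: three) packages at once.
new_users = collect_system_users(packages)
```

Because the input is data rather than arbitrary code, the package manager can also reorder, parallelize, or dry-run these changes, none of which is possible with opaque hooks.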

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Planet DebianSteinar H. Gunderson: Nageru 1.9.0 released

I've just released version 1.9.0 of Nageru, my live video mixer. This contains some fairly significant changes to the way themes work, and I'd like to elaborate a bit about why:

Themes in Nageru govern what's put on screen at any given time (this includes the actual output, of course, but also the preview channels shown in the UI). They were always a compromise between flexibility and implementation cost; with limited resources, I just could not create a full-fledged animation studio like VizRT has.

Themes work by defining chains (now called scenes) at startup, which get optimized and compiled down to a set of OpenGL shaders. In the beginning, most chains were fairly pedestrian; take an input and put it on screen:

local chain = EffectChain.new(16, 9)
input = chain:add_live_input(false, false)
chain:finalize(true)

You'd actually have to create each chain twice, since the live output and the previews need different output formats (Y'CbCr vs. RGB), but that wasn't worse than a little for loop and calling finalize() with true or false, respectively.

After a while, one would want to support e.g. 1080p inputs in a 720p stream. By default, those would be scaled directly by the GPU, which is acceptable but not the best one could do, so one would add a high-quality Lanczos3 resampler:

local chain = EffectChain.new(16, 9)
input = chain:add_live_input(false, false)

local resample_effect = chain:add_effect(ResampleEffect.new())
resample_effect:set_int("width", 1280)
resample_effect:set_int("height", 720)

chain:finalize(true)

But you wouldn't want to waste GPU resources on resampling if the input signal were already the right resolution, so you'd build chains with and without resampling and choose the right one ahead-of-time.

At some point, Nageru started supporting interlaced inputs, by means of deinterlacing them. This requires adding a deinterlacer to the chain and also keeping some history of previous frames; most of this is transparent, but it needs to be specified when building the chain. So now we're up to eight possibilities: all combinations of deinterlacing on/off, scaling on/off, and preview or live output.

And as I started doing sports, I wanted fades. This means you have two different inputs to deal with, and you're up to 32 different kinds. And as Nageru started to support multiple input types, such as images or HTML inputs (rendered via an embedded Chromium), there would be even more. The for loops grew and were replaced by some fairly elaborate multidimensional Lua tables, and when I one day needed to add crop support for some inputs to alleviate letterboxing, I decided that working with themes did not spark joy and it was time to do something.

So Nageru 1.9.0 moves a lot of this complexity to where you no longer need to think about it. You just call add_input(), and that can display any kind of input (be it progressive, deinterlaced, image, or HTML). And you can add optional effects, such as the resampling mentioned above, and turn them on and off as needed. You're still writing themes in Lua instead of drawing boxes in a neat GUI, and there's still a combinatorial explosion behind the scenes (no pun intended), but it's much, much more manageable. Here's an example from the included theme:

local scene = Scene.new(16, 9)
local simple_scene = {
        scene = scene,
        input = scene:add_input(),
        resample_effect = scene:add_effect({ResampleEffect.new(), ResizeEffect.new(), IdentityEffect.new()}),
        wb_effect = scene:add_effect(WhiteBalanceEffect.new())
}
scene:finalize()

So that's an input (of any kind), a high-quality resize, low-quality resize or no resize, a white balance adjustment, and then finalization. This becomes 24 different sets of shaders internally, and you don't really need to know anything about it. You just do

simple_scene.resample_effect:choose(ResampleEffect)
simple_scene.resample_effect:set_int("width", width)
simple_scene.resample_effect:set_int("height", height)

or

simple_scene.resample_effect:disable()  -- No scaling.

There are also many other small tweaks to how themes work, I believe all of them strongly for the better. However, all old themes continue to work as before; I don't like breaking people's hard work for no reason. I do recommend you move to the newer interfaces as soon as possible, though!

As usual, Nageru 1.9.0 can be downloaded from https://nageru.sesse.net/. It is also uploaded to Debian experimental, but not to unstable yet—it depends on a newer version of bmusb clearing the NEW queue for a soname bump. The documentation is updated with the new theme interfaces, too.

Planet DebianHolger Levsen: 20190719-piuparts-down

piuparts.debian.org down for maintenance

So I've just shut down piuparts.debian.org for maintenance, the website is still up but the slaves won't be running for the next week. I think this will block testing migration for a few packages, but probably that's how it is. (Edit 2019-7-21: this was a joke to make you attend the talk.)

If you want to know more, please join my session about piuparts(d.o.) tomorrow on the first day of DebConf19 at 14:30 localtime.

With a little help from some friends the service should soon be running nicely again for many more years! :)

Please help help help!

Planet DebianVincent Bernat: Writing sustainable Python scripts

Python is a great language to write a standalone script. Getting to the result can be a matter of a dozen to a few hundred lines of code and, moments later, you can forget about it and focus on your next task.

Six months later, a co-worker asks you why the script fails and you don’t have a clue: no documentation, hard-coded parameters, nothing logged during the execution and no sensible tests to figure out what may go wrong.

Turning a “quick-and-dirty” Python script into a sustainable version, which will be easy to use, understand and support by your co-workers and your future self, only takes some moderate effort. As an illustration, let’s start from the following script solving the classic Fizz-Buzz test:

import sys
for n in range(int(sys.argv[1]), int(sys.argv[2])):
    if n % 3 == 0 and n % 5 == 0:
        print("fizzbuzz")
    elif n % 3 == 0:
        print("fizz")
    elif n % 5 == 0:
        print("buzz")
    else:
        print(n)

Documentation

I find it useful to write documentation before coding: it makes the design easier and ensures I will not postpone this task indefinitely. The documentation can be embedded at the top of the script:

#!/usr/bin/env python3

"""Simple fizzbuzz generator.

This script prints out a sequence of numbers from a provided range
with the following restrictions:

 - if the number is divisible by 3, then print out "fizz",
 - if the number is divisible by 5, then print out "buzz",
 - if the number is divisible by 3 and 5, then print out "fizzbuzz".
"""

The first line is a short summary of the script purpose. The remaining paragraphs contain additional details on its action.

Command-line arguments

The second task is to turn hard-coded parameters into documented and configurable values through command-line arguments, using the argparse module. In our example, we ask the user to specify a range and allow them to modify the modulo values for “fizz” and “buzz”.

import argparse
import sys


class CustomFormatter(argparse.RawDescriptionHelpFormatter,
                      argparse.ArgumentDefaultsHelpFormatter):
    pass


def parse_args(args=sys.argv[1:]):
    """Parse arguments."""
    parser = argparse.ArgumentParser(
        description=sys.modules[__name__].__doc__,
        formatter_class=CustomFormatter)

    g = parser.add_argument_group("fizzbuzz settings")
    g.add_argument("--fizz", metavar="N",
                   default=3,
                   type=int,
                   help="Modulo value for fizz")
    g.add_argument("--buzz", metavar="N",
                   default=5,
                   type=int,
                   help="Modulo value for buzz")

    parser.add_argument("start", type=int, help="Start value")
    parser.add_argument("end", type=int, help="End value")

    return parser.parse_args(args)


options = parse_args()
for n in range(options.start, options.end + 1):
    # ...

The added value of this modification is tremendous: parameters are now properly documented and are discoverable through the --help flag. Moreover, the documentation we wrote in the previous section is also displayed:

$ ./fizzbuzz.py --help
usage: fizzbuzz.py [-h] [--fizz N] [--buzz N] start end

Simple fizzbuzz generator.

This script prints out a sequence of numbers from a provided range
with the following restrictions:

 - if the number is divisible by 3, then print out "fizz",
 - if the number is divisible by 5, then print out "buzz",
 - if the number is divisible by 3 and 5, then print out "fizzbuzz".

positional arguments:
  start         Start value
  end           End value

optional arguments:
  -h, --help    show this help message and exit

fizzbuzz settings:
  --fizz N      Modulo value for fizz (default: 3)
  --buzz N      Modulo value for buzz (default: 5)

The argparse module is quite powerful. If you are not familiar with it, skimming through the documentation is helpful. I like to use the ability to define sub-commands and argument groups.
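For reference, sub-commands combine naturally with the pattern above. The "generate" and "check" commands in this sketch are invented for illustration; only the argparse calls themselves are real.

```python
# Sketch of argparse sub-commands (commands invented for illustration).
import argparse

def parse_args(args=None):
    parser = argparse.ArgumentParser(description="fizzbuzz toolbox")
    subparsers = parser.add_subparsers(dest="command", required=True)

    # "generate" prints a whole range, like the main script does.
    generate = subparsers.add_parser("generate", help="print a fizzbuzz range")
    generate.add_argument("start", type=int)
    generate.add_argument("end", type=int)

    # "check" classifies a single value.
    check = subparsers.add_parser("check", help="classify one value")
    check.add_argument("value", type=int)

    return parser.parse_args(args)
```

Each sub-command gets its own `--help` output for free, and `options.command` tells the main body which branch to take.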

Logging

A nice addition to a script is to display information during its execution. The logging module is a good fit for this purpose. First, we define the logger:

import logging
import logging.handlers
import os
import sys

logger = logging.getLogger(os.path.splitext(os.path.basename(sys.argv[0]))[0])

Then, we make its verbosity configurable: logger.debug() should output something only when a user runs our script with --debug and --silent should mute the logs unless an exceptional condition occurs. For this purpose, we add the following code in parse_args():

# In parse_args()
g = parser.add_mutually_exclusive_group()
g.add_argument("--debug", "-d", action="store_true",
               default=False,
               help="enable debugging")
g.add_argument("--silent", "-s", action="store_true",
               default=False,
               help="don't log to console")

We add this function to configure logging:

def setup_logging(options):
    """Configure logging."""
    root = logging.getLogger("")
    root.setLevel(logging.WARNING)
    logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
    if not options.silent:
        ch = logging.StreamHandler()
        ch.setFormatter(logging.Formatter(
            "%(levelname)s[%(name)s] %(message)s"))
        root.addHandler(ch)

The main body of our script becomes this:

if __name__ == "__main__":
    options = parse_args()
    setup_logging(options)

    try:
        logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                             options.end))
        for n in range(options.start, options.end + 1):
            # ...
    except Exception as e:
        logger.exception("%s", e)
        sys.exit(1)
    sys.exit(0)

If the script may run unattended (e.g., from a crontab), we can make it log to syslog:

def setup_logging(options):
    """Configure logging."""
    root = logging.getLogger("")
    root.setLevel(logging.WARNING)
    logger.setLevel(options.debug and logging.DEBUG or logging.INFO)
    if not options.silent:
        if not sys.stderr.isatty():
            facility = logging.handlers.SysLogHandler.LOG_DAEMON
            sh = logging.handlers.SysLogHandler(address='/dev/log',
                                                facility=facility)
            sh.setFormatter(logging.Formatter(
                "{0}[{1}]: %(message)s".format(
                    logger.name,
                    os.getpid())))
            root.addHandler(sh)
        else:
            ch = logging.StreamHandler()
            ch.setFormatter(logging.Formatter(
                "%(levelname)s[%(name)s] %(message)s"))
            root.addHandler(ch)

For this example, this is a lot of code just to use logger.debug() once, but in a real script, it will come in handy to help users understand how the task is being carried out.

$ ./fizzbuzz.py --debug 1 3
DEBUG[fizzbuzz] compute fizzbuzz from 1 to 3
1
2
fizz

Tests

Unit tests are very useful to ensure an application behaves as intended. It is not common to use them in scripts, but writing a few greatly improves reliability. Let’s turn the code in the inner “for” loop into a function and add some interactive examples of use to its documentation:

def fizzbuzz(n, fizz, buzz):
    """Compute fizzbuzz nth item given modulo values for fizz and buzz.

    >>> fizzbuzz(5, fizz=3, buzz=5)
    'buzz'
    >>> fizzbuzz(3, fizz=3, buzz=5)
    'fizz'
    >>> fizzbuzz(15, fizz=3, buzz=5)
    'fizzbuzz'
    >>> fizzbuzz(4, fizz=3, buzz=5)
    4
    >>> fizzbuzz(4, fizz=4, buzz=6)
    'fizz'

    """
    if n % fizz == 0 and n % buzz == 0:
        return "fizzbuzz"
    if n % fizz == 0:
        return "fizz"
    if n % buzz == 0:
        return "buzz"
    return n

pytest can ensure the results are correct:1

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 1 item

fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [100%]

========================== 1 passed in 0.05 seconds ==========================

In case of an error, pytest displays a message describing the location and the nature of the failure:

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py -k fizzbuzz.fizzbuzz
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 1 item

fizzbuzz.py::fizzbuzz.fizzbuzz FAILED                                  [100%]

================================== FAILURES ==================================
________________________ [doctest] fizzbuzz.fizzbuzz _________________________
100
101     >>> fizzbuzz(5, fizz=3, buzz=5)
102     'buzz'
103     >>> fizzbuzz(3, fizz=3, buzz=5)
104     'fizz'
105     >>> fizzbuzz(15, fizz=3, buzz=5)
106     'fizzbuzz'
107     >>> fizzbuzz(4, fizz=3, buzz=5)
108     4
109     >>> fizzbuzz(4, fizz=4, buzz=6)
Expected:
    fizz
Got:
    4

/home/bernat/code/perso/python-script/fizzbuzz.py:109: DocTestFailure
========================== 1 failed in 0.02 seconds ==========================

We can also write unit tests as code. Let’s suppose we want to test the following function:

def main(options):
    """Compute a fizzbuzz set of strings and return them as an array."""
    logger.debug("compute fizzbuzz from {} to {}".format(options.start,
                                                         options.end))
    return [str(fizzbuzz(i, options.fizz, options.buzz))
            for i in range(options.start, options.end+1)]

At the end of the script,2 we add the following unit tests, leveraging pytest’s parametrized test functions:

# Unit tests
import pytest                   # noqa: E402
import shlex                    # noqa: E402


@pytest.mark.parametrize("args, expected", [
    ("0 0", ["fizzbuzz"]),
    ("3 5", ["fizz", "4", "buzz"]),
    ("9 12", ["fizz", "buzz", "11", "fizz"]),
    ("14 17", ["14", "fizzbuzz", "16", "17"]),
    ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
    ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
])
def test_main(args, expected):
    options = parse_args(shlex.split(args))
    options.debug = True
    options.silent = True
    setup_logging(options)
    assert main(options) == expected

The test function runs once for each of the provided parameters. The args part is used as input for the parse_args() function to get the appropriate options we need to pass to the main() function. The expected part is compared to the result of the main() function. When everything works as expected, pytest says:

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
============================ test session starts =============================
platform linux -- Python 3.7.4, pytest-3.10.1, py-1.8.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/bernat/code/perso/python-script, inifile:
plugins: xdist-1.26.1, timeout-1.3.3, forked-1.0.2, cov-2.6.0
collected 7 items

fizzbuzz.py::fizzbuzz.fizzbuzz PASSED                                  [ 14%]
fizzbuzz.py::test_main[0 0-expected0] PASSED                           [ 28%]
fizzbuzz.py::test_main[3 5-expected1] PASSED                           [ 42%]
fizzbuzz.py::test_main[9 12-expected2] PASSED                          [ 57%]
fizzbuzz.py::test_main[14 17-expected3] PASSED                         [ 71%]
fizzbuzz.py::test_main[14 17 --fizz=2-expected4] PASSED                [ 85%]
fizzbuzz.py::test_main[17 20 --buzz=10-expected5] PASSED               [100%]

========================== 7 passed in 0.03 seconds ==========================

When an error occurs, pytest provides a useful assessment of the situation:

$ python3 -m pytest -v --doctest-modules ./fizzbuzz.py
[...]
================================== FAILURES ==================================
__________________________ test_main[0 0-expected0] __________________________

args = '0 0', expected = ['0']

    @pytest.mark.parametrize("args, expected", [
        ("0 0", ["0"]),
        ("3 5", ["fizz", "4", "buzz"]),
        ("9 12", ["fizz", "buzz", "11", "fizz"]),
        ("14 17", ["14", "fizzbuzz", "16", "17"]),
        ("14 17 --fizz=2", ["fizz", "buzz", "fizz", "17"]),
        ("17 20 --buzz=10", ["17", "fizz", "19", "buzz"]),
    ])
    def test_main(args, expected):
        options = parse_args(shlex.split(args))
        options.debug = True
        options.silent = True
        setup_logging(options)
>       assert main(options) == expected
E       AssertionError: assert ['fizzbuzz'] == ['0']
E         At index 0 diff: 'fizzbuzz' != '0'
E         Full diff:
E         - ['fizzbuzz']
E         + ['0']

fizzbuzz.py:160: AssertionError
----------------------------- Captured log call ------------------------------
fizzbuzz.py                125 DEBUG    compute fizzbuzz from 0 to 0
===================== 1 failed, 6 passed in 0.05 seconds =====================

The call to logger.debug() is included in the output. This is another good reason to use the logging feature! If you want to know more about the wonderful features of pytest, have a look at “Testing network software with pytest and Linux namespaces.”


To sum up, enhancing a Python script to make it more sustainable can be done in four steps:

  1. add documentation at the top,
  2. use the argparse module to document the different parameters,
  3. use the logging module to log details about progress, and
  4. add some unit tests.

You can find the complete example on GitHub and use it as a template!

Update (2019.06)

There are some interesting threads about this article on Lobsters and Reddit. While the addition of documentation and command-line arguments seems to be well-received, logs and tests are sometimes reported as too verbose.


  1. This requires the script name to end with .py. I dislike appending an extension to a script name: the language is a technical detail that shouldn’t be exposed to the user. However, it seems to be the easiest way to let test runners, like pytest, discover the enclosed tests. ↩︎

  2. Because the script ends with a call to sys.exit(), when invoked normally, the additional code for tests will not be executed. This ensures pytest is not needed to run the script. ↩︎

Planet DebianHideki Yamane: Debian 10 "buster" release party @Tokyo (7/7)


We ate a delicious cake to celebrate the Debian 10 "buster" release at a party in Tokyo (my employer provided the venue, cake and wine. Thanks to SIOS Technology, Inc.! :)

Hope we'll do the same in two years for "bullseye"!


Planet DebianSean Whitton: Debian Policy call for participation -- July 2019

Debian Policy started off the Debian 11 “bullseye” release cycle with the release of Debian Policy 4.4.0.0. Please consider helping us fix more bugs and prepare more releases (whether or not you’re at DebCamp19!).

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?


CryptogramFriday Squid Blogging: Squid Mural

Large squid mural in the Bushwick neighborhood of Brooklyn.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityQuickBooks Cloud Hosting Firm iNSYNQ Hit in Ransomware Attack

Cloud hosting provider iNSYNQ says it is trying to recover from a ransomware attack that shut down its network and has left customers unable to access their accounting data for the past three days. Unfortunately for iNSYNQ, the company appears to be turning a deaf ear to the increasingly anxious cries from its users for more information about the incident.

A message from iNSYNQ to customers.

Gig Harbor, Wash.-based iNSYNQ specializes in providing cloud-based QuickBooks accounting software and services. In a statement posted to its status page, iNSYNQ said it experienced a ransomware attack on July 16, and took its network offline in a bid to contain the spread of the malware.

“The attack impacted data belonging to certain iNSYNQ clients, rendering such data inaccessible,” the company said. “As soon as iNSYNQ discovered the attack, iNSYNQ took steps to contain it. This included turning off some servers in the iNSYNQ environment.”

iNSYNQ said it has engaged outside cybersecurity assistance to determine whether any customer data was accessed without authorization, but that so far it has no estimate for when those files might be available again to customers.

Meanwhile, iNSYNQ’s customers — many of them accountants who manage financial data for a number of their own clients — have taken to Twitter to vent their frustration over a lack of updates since that initial message to users.

In response, the company appears to have simply deleted or deactivated its Twitter account (a cached copy from June 2019 is available here). Several customers venting about the outage on Twitter also accused the company of unpublishing negative comments about the incident from its Facebook page.

Some of those customers also said iNSYNQ initially blamed the outage on an alleged problem with U.S.-based nationwide cable ISP giant Comcast. Meanwhile, competing cloud hosting providers have been piling on to the tweetstorms about the iNSYNQ outage by marketing their own services, claiming they would never subject their customers to a three-day outage.

iNSYNQ has not yet responded to requests for comment.

Update, 4:35 p.m. ET: I just heard from iNSYNQ’s CEO Elliot Luchansky, who shared the following:

While we have continually updated our website and have emailed customers once if not twice daily during this malware attack, I acknowledge we’ve had to keep the detail fairly minimal.

Unfortunately, and as I’m sure you’re familiar with, the lack of detailed information we’ve shared has been purposeful and in an effort to protect our customers and their data- we’re in a behind the scenes trench warfare doing everything we possibly can to secure and restore our system and customer data and backups. I understand why our customers are frustrated, and we want more than anything to share every piece of information that we have.

Our customers and their businesses are our number one priority right now. Our team is working around the clock to secure and restore access to all impacted data, and we believe we have an end in sight in the near future.

You know as well as we that no one is 100% impervious to this – businesses large and small, governments and individuals are susceptible. iNSYNQ and our customers were the victims of a malware attack that’s a totally new variant that hadn’t been detected before, confirmed by the experienced and knowledgeable cybersecurity team we’ve employed.

Original story: There is no question that a ransomware infestation at any business — let alone a cloud data provider — can quickly turn into an all-hands-on-deck, hair-on-fire emergency that diverts all attention to fixing the problem as soon as possible.

But that is no excuse for leaving customers in the dark, and for not providing frequent and transparent updates about what the victim organization is doing to remediate the matter. Particularly when the cloud provider in question posts constantly to its blog about how companies can minimize their risk from such incidents by trusting it with their data.

Ransomware victims perhaps in the toughest spot include those providing cloud data hosting and software-as-service offerings, as these businesses are completely unable to serve their customers while a ransomware infestation is active.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files.

In practice, however, many cybersecurity consulting firms are quietly advising their customers that paying up is the fastest route back to business-as-usual. It’s not hard to see why: having customer data ransomed or stolen can send many customers scrambling to find new providers. As a result, the temptation to simply pay up may become stronger with each passing day.

That’s exactly what happened in February, when cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

KrebsOnSecurity will endeavor to update this story as more details become available. Any iNSYNQ customers affected by the outage are welcome to contact this author via Twitter (my direct messages are open to all) or at krebsonsecurity @ gmail.com.

Planet DebianRitesh Raj Sarraf: Cross Architecture Linux Containers

Linux and ARM

With more ARM-based devices on the market, and with them getting more powerful every day, it is increasingly common to see ARM images for your favorite Linux distribution. Among them, Debian has become the default choice for individuals and companies to base their work on. That probably has to do with Debian’s long history of trying to support many more architectures than the rest of the distributions. Debian also tends to have a much wider user/developer mindshare, even though it does not have direct backing from any of the big Linux distribution companies.

Some of my work involves packaging and integration that touches all architectures and image types, ARM included. So having the respective environments readily available is really important to get work done quickly.

I still recollect 2004, when I was much newer to enterprise Linux while working at a big computer hardware company and first heard about the Itanium architecture. Back then, trying out anything other than x86 meant you needed access to physical hardware, or had to be a DD with shell access to the Debian machines.

With Linux Virtualization, a lot seems to have improved over time.

Linux Virtualization

With a decently powered processor with virtualization support, you can emulate a lot of architectures that Linux supports.

Linux has many virtualization options but the main dominant ones are KVM/Qemu, Xen and VirtualBox. Qemu is the most feature-rich virtualization choice on Linux with a wide range of architectures that it can emulate.

In the case of ARM, things are still a bit tricky, as the hardware definition is tightly coupled; emulating a device type under Qemu is not as straightforward as on the x86 architecture. Thankfully, libvirt provides a generic machine type called virt. For basic cross-architecture tasks, this should be enough.
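For illustration, a trimmed (and hypothetical) libvirt domain definition selecting that generic machine type for an emulated ARM64 guest might look like this; disk, network and console devices are omitted:

```xml
<domain type='qemu'>
  <name>arm64-test</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <!-- the generic "virt" machine type avoids modelling a specific board -->
    <type arch='aarch64' machine='virt'>hvm</type>
  </os>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>cortex-a57</model>
  </cpu>
  <!-- disk, network and console devices would go in a <devices> section -->
</domain>
```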

But while virtualization is a nice progression, it is not always optimal. For one, it needs good device virtualization support, which can be tricky in the ARM context. Second, it can be (very) slow at times.

And unless you are doing low-level, hardware-specific work, you can look for an alternative in Linux Containers.

Linux Containers

So this is nothing new now. There is lots and lots of buzz around containers already. There are many different implementations across platforms supporting a similar concept: from the good old chroot (with limited functionality) and jails (on BSD) to well-marketed products like Docker, LXC and systemd-nspawn. There are also some, like firejail, targeting specific use cases.

As long as you do not have a tight dependency on the hardware or on specific parts of the Linux kernel (as when I once explored running open-iscsi in a containerized environment), containers are a quick way to get an equivalent environment. In particular, things like process, namespace and network separation are awesome, helping me concentrate on the work rather than on Host <=> Guest issues.

Given how fast work can be accomplished with containers, I have been wanting to explore the possibility of building container images for ARM and other architectures that I care about.

The good thing is that architecture virtualization is offered through Qemu, and the same tool also provides architecture emulation, so features and fixes have a higher chance of parity as they are served from the same tool.

systemd-nspawn

I haven’t explored all the container implementations that Linux has to offer. Initially, I used LXC for a while. But these days, for my work it is docker, while my personal preference lies with systemd-nspawn.

Part of the reason is simply that I have grown more familiar with systemd, given it is the housekeeper for my operating system now. And also, so far, I like most of what systemd offers.

Getting Cross Architecture Containers under Debian GNU/Linux

  • Use qemu-user-static for emulation
  • Generate cross architecture chroot images with qemu-debootstrap
  • Import those as sub-volumes under systemd-nspawn
  • Set a template for your containers and other misc stuff
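Concretely, the steps above look roughly like this on a Debian host (an illustrative sketch; suite, mirror, paths and the machine name are from my setup, adjust to taste, and note machinectl import-fs needs a reasonably recent systemd — a plain copy into /var/lib/machines works too):

```sh
# Install user-mode emulators (this also registers binfmt_misc handlers),
# debootstrap and the systemd container tools
sudo apt install qemu-user-static debootstrap systemd-container

# Generate a cross-architecture chroot; qemu-debootstrap copies the
# matching qemu-*-static binary into it so foreign binaries can run
sudo qemu-debootstrap --arch=arm64 sid /var/tmp/DebSidArm64 \
    http://deb.debian.org/debian

# Import the tree as a machine image (a btrfs sub-volume where available)
sudo machinectl import-fs /var/tmp/DebSidArm64 DebSidArm64

# Boot it
sudo systemd-nspawn --boot --machine=DebSidArm64
```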

Glitches

Not everything works perfectly, but most of it does.

Here’s my container list. Subvolumed containers do help de-duplicate and save some space. They are also very quick when creating a clone of a container:

rrs@priyasi:~$ machinectl list-images
NAME                 TYPE      RO USAGE CREATED                     MODIFIED
2019                 subvolume no   n/a Mon 2019-06-10 09:18:26 IST n/a     
SDK1812              subvolume no   n/a Mon 2018-10-15 12:45:39 IST n/a     
BusterArm64          subvolume no   n/a Mon 2019-06-03 14:49:40 IST n/a     
DebSidArm64          subvolume no   n/a Mon 2019-06-03 14:56:42 IST n/a     
DebSidArmhf          subvolume no   n/a Mon 2018-07-23 21:18:42 IST n/a     
DebSidMips           subvolume no   n/a Sat 2019-06-01 08:31:34 IST n/a     
DebianJessieTemplate subvolume no   n/a Mon 2018-07-23 21:18:54 IST n/a     
DebianSidTemplate    subvolume no   n/a Mon 2018-07-23 21:18:05 IST n/a     
aptVerifySigsDebSid  subvolume no   n/a Mon 2018-07-23 21:19:04 IST n/a     
jenkins-builder      subvolume no   n/a Tue 2018-11-27 20:11:42 IST n/a     
jenkins-builder-new  subvolume no   n/a Tue 2019-04-16 10:13:43 IST n/a     
opensuse             subvolume no   n/a Mon 2018-07-23 21:18:34 IST n/a     

12 images listed.
21:08 ♒♒♒   ☺ 😄    

Problems with networking on foreign architectures

This mostly has to do with Qemu’s emulation. While getting networking to work, I saw many upstream reports and fixes about networking and other subsystems having issues under emulation. Luckily for the architectures listed above, I have been able to make use of them with some workarounds.

Here’s a running Debian Sid ARM64 container image under systemd-nspawn with qemu emulation:

DebSidArm64(fba63648a4c4451ebf56eb758463b37d)
           Since: Fri 2019-07-19 21:13:51 IST; 1min 16s ago
          Leader: 12061 (systemd)
         Service: systemd-nspawn; class container
            Root: /var/lib/machines/DebSidArm64
           Iface: sysbr0
              OS: Debian GNU/Linux 10 (buster)
       UID Shift: 445841408
            Unit: systemd-nspawn@DebSidArm64.service
                  ├─payload
                  │ ├─init.scope
                  │ │ └─12061 /usr/bin/qemu-aarch64-static /usr/lib/systemd/systemd
                  │ └─system.slice
                  │   ├─console-getty.service
                  │   │ └─12245 /usr/bin/qemu-aarch64-static /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 vt220
                  │   ├─cron.service
                  │   │ └─12200 /usr/bin/qemu-aarch64-static /usr/sbin/cron -f
                  │   ├─dbus.service
                  │   │ └─12197 /usr/bin/qemu-aarch64-static /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
                  │   ├─rsyslog.service
                  │   │ └─12202 /usr/bin/qemu-aarch64-static /usr/sbin/rsyslogd -n -iNONE
                  │   ├─systemd-journald.service
                  │   │ └─12123 /usr/bin/qemu-aarch64-static /lib/systemd/systemd-journald
                  │   └─systemd-logind.service
                  │     └─12203 /usr/bin/qemu-aarch64-static /lib/systemd/systemd-logind
                  └─supervisor
                    └─12059 /usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-bridge=sysbr0 --bind /var/tmp/Debian-Build/containers/ --bind-ro /var/tmp/:/var/tmp/vartmp -U --settings=override --machine=DebSidArm64

Jul 19 21:13:53 priyasi systemd-nspawn[12059]: [  OK  ] Reached target Multi-User System.
Jul 19 21:13:53 priyasi systemd-nspawn[12059]: [  OK  ] Reached target Graphical Interface.
Jul 19 21:13:53 priyasi systemd-nspawn[12059]:          Starting Update UTMP about System Runlevel Changes...
Jul 19 21:13:53 priyasi systemd-nspawn[12059]: [  OK  ] Started Update UTMP about System Runlevel Changes.
Jul 19 21:13:53 priyasi systemd-nspawn[12059]: [  OK  ] Started Rotate log files.
Jul 19 21:13:57 priyasi systemd-nspawn[12059]: [  OK  ] Started Daily apt download activities.
Jul 19 21:13:57 priyasi systemd-nspawn[12059]:          Starting Daily apt upgrade and clean activities...
Jul 19 21:13:59 priyasi systemd-nspawn[12059]: [2B blob data]
Jul 19 21:13:59 priyasi systemd-nspawn[12059]: Debian GNU/Linux 10 SidArm64 console
Jul 19 21:13:59 priyasi systemd-nspawn[12059]: [1B blob data]
21:15 ♒♒♒   ☺ 😄    

And the login

rrs@priyasi:~$ machinectl login DebSidArm64 
Connected to machine DebSidArm64. Press ^] three times within 1s to exit session.

Debian GNU/Linux 10 SidArm64 pts/0

SidArm64 login: root
Password: 
Last login: Thu Jun 20 10:57:30 IST 2019 on pts/0
Linux SidArm64 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@SidArm64:~# 
root@SidArm64:~# 
root@SidArm64:~# uname -a
Linux SidArm64 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) aarch64 GNU/Linux
root@SidArm64:~# 

The problematic networking part. Notice the qemu emulation error messages. Basically, at this stage networking is dead.

root@SidArm64:~# ip a
Unsupported setsockopt level=270 optname=11
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: host0@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
request send failed: Operation not supported
    link/ether f6:9d:cc:d6:ad:32 brd ff:ff:ff:ff:ff:ffroot@SidArm64:~# 
root@SidArm64:~# 


root@SidArm64:~# ping www.google.com
ping: www.google.com: Temporary failure in name resolution
root@SidArm64:~# 

root@SidArm64:~# cat /etc/resolv.conf 
nameserver 172.16.20.1

Because I prefer systemd for containers, I chose to make use of systemd’s network management tools too. Maybe that is causing the problem, but in my opinion it is highly unlikely to be the cause.

Anyway, simply invoking dhclient at this stage does assign the container an IP address. More than that, the core networking stack gets back into action, even though the reporting tools still report errors.

root@SidArm64:~# dhclient host0
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unsupported setsockopt level=270 optname=11
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unsupported setsockopt level=263 optname=8
Unsupported setsockopt level=270 optname=11
Unknown target IFA type: 4
Unknown target IFA type: 3
Unknown target IFA type: 6
root@SidArm64:~# 


root@SidArm64:~# ip a
Unsupported setsockopt level=270 optname=11
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
Unknown host QEMU_IFLA type: 50
Unknown host QEMU_IFLA type: 51
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: host0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
request send failed: Operation not supported
    link/ether f6:9d:cc:d6:ad:32 brd ff:ff:ff:ff:ff:ffroot@SidArm64:~# 
root@SidArm64:~# 


root@SidArm64:~# ping 172.16.20.1
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.091 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.138 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.136 ms
64 bytes from 172.16.20.1: icmp_seq=5 ttl=64 time=0.152 ms
64 bytes from 172.16.20.1: icmp_seq=6 ttl=64 time=0.143 ms
64 bytes from 172.16.20.1: icmp_seq=7 ttl=64 time=0.137 ms
64 bytes from 172.16.20.1: icmp_seq=8 ttl=64 time=0.144 ms
^C
--- 172.16.20.1 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 158ms
rtt min/avg/max/mdev = 0.061/0.125/0.152/0.030 ms


root@SidArm64:~# ping www.google.com
PING www.google.com (216.58.197.68) 56(84) bytes of data.
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=1 ttl=50 time=45.2 ms
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=2 ttl=50 time=63.7 ms
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=3 ttl=50 time=63.2 ms
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=4 ttl=50 time=80.7 ms
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=5 ttl=50 time=65.4 ms
64 bytes from maa03s21-in-f4.1e100.net (216.58.197.68): icmp_seq=6 ttl=50 time=70.4 ms
^C
--- www.google.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 12ms
rtt min/avg/max/mdev = 45.190/64.758/80.717/10.597 ms

Conclusion

Despite some minor annoyances, this has been my fastest way to get work done, especially for cross-architecture stuff. When I compare it to how I would have approached such a use case in the early days, I can hardly believe the speed and simplicity I have at hand today. Free and Open Source Software has been a great choice, even though it started as just a young boy’s curiosity.

Cory DoctorowAppearance on the Jim Rutt Podcast

Jim Rutt — former chairman of the Santa Fe Institute and ex-Network Solutions CEO — just launched his new podcast, and included me in the first season! (MP3) It was a characteristically wide-ranging, interdisciplinary kind of interview, covering competition and adversarial interoperability, technological self-determination and human rights, conspiracy theories and corruption. There’s a full transcript here.

CryptogramJohn Paul Stevens Was a Cryptographer

I didn't know that Supreme Court Justice John Paul Stevens "was also a cryptographer for the Navy during World War II." He was a proponent of individual privacy.

Planet DebianJonathan McDowell: Upgrading my home server

Vertically mounted 2U server

At the end of last year I decided it was time to upgrade my home server. I built it back in 2013 as an all-in-one device to be my only always-on machine, with some attempt towards low power consumption. It was starting to creak a bit - the motherboard is limited to 16G RAM and the i3-3220T is somewhat ancient (though it has served me well). So it was time to think about something more up to date. Additionally since then my needs have changed; my internet connection is VDSL2 (BT Fibre-to-the-Cabinet) so I have a BT HomeHub 5 running OpenWRT to drive that and provide core routing/firewalling. My wifi is provided by a pair of UniFi APs at opposite ends of the house. I also decided I could use something low power to run Kodi and access my ripped DVD collection, rather than having the main machine in the living room. That meant what I wanted was much closer to just a standard server rather than having any special needs.

The first thing to consider was a case. My ADSL terminates in what I call the “comms room” - it has the electricity meter / distribution board and gas boiler, as well as being where one of the UniFi’s lives and where the downstairs ethernet terminates. In short it’s the right room for a server to live in. I don’t want a full rack, however, and ideally wanted something that could sit alongside the meter cabinet without protruding from the wall any further. A tower case would have worked, but only if turned sideways, which would have made it a bit awkward to access. I tried in vain to find a wall mount case with side access that was shallow enough, but failed. However in the process I discovered a 4U vertical wall mount. This was about the same depth as the meter cabinet, so an ideal choice. I paired it with a basic 2U case from X-Case, giving me a couple of spare U should I decide I want another rack-mount machine or two.

My old machine has 2 3.5” hotswap drive bays; this has been useful in the past when a drive failed even just to avoid having to take the machine apart. I still wanted to aim for low power consumption, so 2 drives is enough. I started with a pair of cheap 5.25” drive bay to dual 2.5” + 3.5” hotswap bay devices, but the rear SATA connectors ended up being very fragile and breaking off, so I bit the bullet and bought a SilverStone FS303. This takes up 2 5.25” bays and provides 3 x 3.5” hotswap bays. It’s well constructed and the extra bay has already turned out useful when a drive started to fail and I was able to put the replacement in and resync the RAID set before having to remove the old drive.

Now I had the externals sorted I needed to think about what to put inside. The only thing coming from the old machine were the hard disks (a 4T Seagate and a 6T WD RED, 4T of software RAID1 and 2T of unRAIDed backup space), so everything else was up for discussion. I toyed with an Intel i7-8700T - 6 cores in 35W. AMD have a stronger offering these days though and the AMD Ryzen 2700E with 8 cores in 45W seemed like a good option for an extra 10W. Plus on top there are several of the recent speculative execution exploits that don’t seem to affect AMD chips (or more recent Intel CPUs, but they weren’t out at the time in a low power format). Sadly the 2700E proved to be made of unobtanium; I sat with it on backorder for nearly 3 months before giving up and ordering a AMD Ryzen 2700 that was on offer. This is rated at up to 65W, but I considered trying to underclock if necessary or tweak the cpufreq settings at least.

Next up was a motherboard. The 2U case is short, but allows for MicroATX, an improvement over the MiniITX my last case needed. One of the things constraining me with the old machine was that it maxed out at 16G RAM, so I wanted something that would take more. It turns out there are a number of Socket AM4 MicroATX boards that will take 64G over 4 DIMMs. I chose an ASRock B450M Pro4, which had a couple of good reviews and seemed to have all the bits I wanted. It’s been decent so far - including having some interactions with ASRock support when I initially put an AMD 240GE (while waiting for the 2700E that was never coming) in it. I like to think of BIOS 3.10 as mine ;).

For RAM I went with a Corsair CMK32GX4M2A2400C14 Vengeance LPX 32GB (2 x 16GB) set. I’m sure I should care more about RAM but it was decently priced from a vendor I trust. At some point I’ll buy another set to bring the board up to the full 64GB, but for now this is twice what the old machine had.

Finally I decided to splash out on some SSD. The spinning rust is primarily for media (music + video shared out to Kodi etc) and backups, but I wanted to move my containers (home automation, UniFi controller, various others) over to SSD. I talked myself into a pair of Corsair MP510 960GB NVMe M.2 drives. One went on the motherboard slot and I had to buy a low profile PCIe adaptor for the other (of course they’re RAID1ed). They fly; initially I clocked them in at about 1.5GB/s until I realised the one in the add-in card was only using 2 PCIe lanes. Once I rejigged things so it had all 4 it can use I was up to 2.3GB/s. Impressive.

You’ll note I haven’t mentioned a graphics card here. I ended up with a cheap NVidia off eBay to get things going, but this is a server in a comms room and removing the graphics card saves me at least 10W of power (it was also the reason the NVMe drive only had 2 lanes). I couldn’t find an AM4 motherboard that did serial console, but the B450M Pro4 is happy to boot without a graphics card present, and I have GRUB onward configured to do serial console just in case.

And the power consumption? The previous machine idled at around 50W, getting to maybe 60-65W under load. I’ve cheated with the new machine; because the spinning rust is not generally in use it’s configured to spin down after 20 minutes idle. As a result the machine idles at around 36W. It hits 50W when the drives spin up, so for 8 cores compared to 2 we’re still sitting in the same ballpark. That’s good, because that’s the general case - idle here means Home Assistant operational, the UniFi controller going, the syslog container logging and so on. However the new server peaks considerably higher; if the drives are spun up and I compile a kernel I can hit 120W. However the compilation takes less than a quarter of the time - the machine is significantly faster than the old one, and even without taking advantage of the SSDs idles at roughly the same power level. I’d call that an overall win.

Worse Than FailureError'd: The Parameter was NOT Set

"Spotted this in front of a retro-looking record player in an Italian tech shop. I don't think anybody had any idea how to categorize it so they just left it up to the system," Marco D. writes.

 

George C. wrote, "Never thought it would come to this, but it looks like LinkedIn can't keep up with all of my connections!"

 

"Apparently, opening a particular SharePoint link using anything other than Internet Explorer made Excel absolutely lose its mind," wrote Arno P.

 

Dima R. writes, "OH! My bad, Edge, I only tried to access a file:// URL while I was offline."

 

"This display at the Vancouver airport doesn't have a lot of fans," Bruce R. wrote.

 

"Woo hoo! This is what I'm talking about! A septillion percentage gain!!" John G. writes.

 


Planet DebianHolger Levsen: 20190718-social-media

joining social media at DebConf19

Two days ago I joined Telegram (installed via F-Droid). It was an interesting experience: immediately I was contacted by people who had shared their address books with "the cloud" and thus were notified by the "heavily encrypted" Telegram servers.

To quote a friend: "If you upload your address book to 'the cloud', I don't want to be in it." (And while I think so, I'm not angry about past actions. But I would like you to be considerate in the future.)

As an SMS user from 1997 until today, it's very interesting to taste some of the same surveillance as the rest of the whole planet. And I have to admit, it's tasty, but consciously I know it's tasty in a bitter-sweet way. What also puzzled me is that Telegram chats are unencrypted by default. In 2019.

And now let's do something about it. Or sing this karaoke version of "Yellow Submarine": we all live in global world surveillance, global world surveillance, global world surveillance! Cheers!


Planet DebianJohn Goerzen: The Desktop Security Nightmare

Back in 1995 or so, pretty much everyone with a PC did all their work as root. We ran graphics editors, word processors, everything as root. Well, not literally an account named “root”, but the most common DOS, Windows, and Mac operating systems of the day had no effective reduced privilege account.

It was that year that I tried my first Unix. “Wow!” A virus can’t take over my system. My programs are safe!

That turned out to be a little short-sighted.

The fundamental problem we have is that we’d like to give users of a computer more access than we would like to give the computer itself.

Many of us have extremely sensitive data on our systems. Emails to family, medical or bank records, Bitcoin wallets, browsing history, the list goes on. Although we have isolation between our user account and root, we have no isolation between applications that run as our user account. We still, in effect, have to be careful about what attachments we open in email.

Only now it’s worse. You might “npm install hello-world”, and audit hello-world itself, but pull in some totally malicious code as well through its dependencies. How many times do we see instructions to gem install this, pip install that, go get the other, and even curl | sh? Nowadays our risky click isn’t an email attachment. It’s hosted on GitHub with a README.md.

Not only that, but my /usr/bin has over 4000 binaries. Has every one been carefully audited? Certainly not, and this is from a distro with some of the highest quality control around. What about the PPAs that people add? The debs or rpms that are installed from the Internet? Are you sure that the postinst scripts — which run as root — aren’t doing anything malicious when you install Oracle Virtualbox?

Wouldn’t it be nice if we could, say, deny access to everything in ~/.ssh or ~/bankstatements except for trusted programs when we want it? On mobile, this happens, to an extent. But we have both a legacy of a different API on desktop, and a much more demanding set of requirements.

It feels like our ecosystem is on the cusp of being able to do this, but none of the options I’ve looked at quite get us there. Let’s take a look at some.

AppArmor

AppArmor falls into the “first line of defense — better than nothing” category. It works by imposing mandatory access controls on a per-executable basis. This starts out as a pretty good idea: we can go after some high-risk targets (Firefox, Chromium, etc) and lock them down. Great! Although it’s not exactly intuitive, with a little configuration, you can prevent them from accessing sensitive areas on disk.
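As a sketch of what such a lockdown looks like (the binary path, profile contents, and directory names here are illustrative, not taken from any shipped profile):

```
# Hypothetical AppArmor profile fragment for a PDF viewer
/usr/bin/evince {
  #include <abstractions/base>

  # allow reading the user's own documents
  owner @{HOME}/Documents/** r,

  # explicitly deny the sensitive areas discussed above
  deny @{HOME}/.ssh/** rw,
  deny @{HOME}/bankstatements/** rw,
}
```

The deny rules win over any allow rules, which is what makes this useful as a blunt first line of defense.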

But there’s a problem. Actually, several. To start with, AppArmor does nothing by default. On my system, aa-unconfined --paranoid lists 171 processes that have no policies on them. Among them are Firefox, Apache, ssh, a ton of Pythons, and some stuff I don’t even recognize (/usr/lib/geoclue-2.0/demos/agent? What’s this craziness?)

Worse, since AppArmor matches on executable, all shell scripts would match the /bin/bash profile, all Python programs the Python profile, etc. It’s not so useful for them. While AppArmor does technically have a way to set a default profile, it’s not as useful as you might think.

Then you’re still left with problems like: a PDF viewer should not ordinarily have access to my sensitive files — except when I want to see an old bank statement. This can’t really be expressed in AppArmor.

SELinux

From its documentation, it sounds like SELinux might fit the bill well. It allows transitions into different roles after logging in, which is nice. The problem is complexity. The “notebook” for SELinux is 395 pages. The SELinux homepage has a wiki, which says it’s outdated and replaced by a github link with substantially less information. The Debian wiki page on it is enough to be scary in itself: you need to have various filesystem support, even backups are complicated. Ted Ts'o had a famous comment about never getting some of his life back, and the Debian wiki also warns that it’s not really tested on desktop systems.

We have certainly learned that complexity is an enemy of good security, leading users to just bypass it. I’m not sure we can rely on it.

Mount Tricks

One thing a person could do would be to keep the sensitive data on a separate, ideally encrypted, filesystem. (Maybe even a fuse one such as gocryptfs.) Then, at least, it could be unavailable for most of the time the system is on.

Of course, the downside here is that it’s still going to be available to everything when it is mounted, and there’s the hassle of mounting, remembering to unmount, password typing, etc. Not exactly transparent.

I wondered if mount namespaces might be an answer here. A filesystem could be mounted but left pretty much unavailable to processes unless a proper mount namespace is joined. Indeed that might be a solution. It is somewhat complicated, though, since nsenter requires root to work. Enter sudo, and dropping privileges back to a particular user — a not particularly ideal situation, and complex as well.

Still, it might well have some promise for some of these things.

Firejail

Firejail is a great idea, but suffers from a lot of the problems that AppArmor does: things must explicitly be selected to run within it.

AppImage and related tools

Tools like AppImage bundle an application together with the libraries it needs. So now there’s your host distro and your bundled distro, each with libraries that may or may not be secure, both with general access to your home directory. I think this is a recipe for worse security, not better. Add to that the difficulty of making those things work; I know that the Digikam people have been working for months to get sound to work reliably in their AppImage.

Others?

What other ideas are out there? I’ve occasionally created a separate user on the system for running suspicious-ish code, or even a VM or container. That’s a fair bit of work, and provides incomplete protection, but has some benefits. Still, it’s again not going to work for everything.

I hope to play around with many of these tools, especially SELinux, before too long and report back how I’ve found them to be.

Finally, I would like to be really clear that I don’t believe this issue is limited to Debian, or even to Linux. It impacts every desktop platform in wide use today. Actually, I think we’re in a better position to address it than some, but it won’t be easy for anyone.

Sociological ImagesCrowding Out Crime

Buzzfeed News recently ran a story about reputation management companies using fake online personas to help their clients cover up convictions for fraud. These firms buy up domains and create personal websites for a crowd of fake professionals (stock photo headshots and all) who share the same name as the client. The idea is that search results for the client’s name will return these websites instead, hiding any news about white collar crime.

In a sea of profiles with the same name, how do you vet a new hire? Image source: anon617, Flickr CC

This is a fascinating response to a big trend in criminal justice where private companies are hosting mugshots, criminal histories, and other personal information online. Sociologist Sarah Lageson studies these sites, and her research shows that these databases are often unregulated, inaccurate, and hard to correct. The result is more inequality as people struggle to fix their digital history and often have to pay private firms to clean up these records. This makes it harder to get a job, or even just to move into a new neighborhood.

The Buzzfeed story shows how this pattern flips for wealthy clients, whose money goes toward making information about their past difficult to find and difficult to trust. Beyond the criminal justice world, this is an important point about the sociology of deception and “fake news.” The goal is not necessarily to fool people with outright deception, but to create just enough uncertainty so that it isn’t worth the effort to figure out whether the information you have is correct. The time and money that come with social class make it easier to navigate uncertainty, and we need to talk about how those class inequalities can also create a motive to keep things complicated in public policy, the legal system, and other large bureaucracies.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston (Fall, 2019). You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramIdentity Theft on the Job Market

Identity theft is getting more subtle: "My job application was withdrawn by someone pretending to be me":

When Mr Fearn applied for a job at the company he didn't hear back.

He said the recruitment team said they'd get back to him by Friday, but they never did.

At first, he assumed he was unsuccessful, but after emailing his contact there, it turned out someone had created a Gmail account in his name and asked the company to withdraw his application.

Mr Fearn said the talent assistant told him they were confused because he had apparently emailed them to withdraw his application on Wednesday.

"They forwarded the email, which was sent from an account using my name."

He said he felt "really shocked and violated" to find out that someone had created an email account in his name just to tarnish his chances of getting a role.

This is about as low-tech as it gets. It's trivially simple for me to open a new Gmail account using a random first and last name. But because people innately trust email, it works.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2019

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 201 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 7 hours (out of 14 hours allocated plus 7 extra hours from May, thus carrying over 14h to July).
  • Adrian Bunk did 6 hours (out of 8 hours allocated plus 8 extra hours from May, thus carrying over 10h to July).
  • Ben Hutchings did 17 hours (out of 17 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17 hours allocated plus 0.25 extra hours from May, thus carrying over 0.25h to July).
  • Emilio Pozuelo Monfort did not provide his June report yet. (He got 17 hours allocated and carried over 0.25h from May).
  • Hugo Lefeuvre did 4.25 hours (out of 17 hours allocated and he gave back 12.75 hours to the pool, thus he’s not carrying over any hours to July).
  • Jonas Meurer did 16.75 hours (out of 17 hours allocated plus 1.75h extra hours from May, thus he is carrying over 2h to July).
  • Markus Koschany did 17 hours (out of 17 hours allocated).
  • Mike Gabriel did 9.75 hours (out of 17 hours allocated, thus carrying over 7.25h to July).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated plus 6h from May, then he gave back 1.5h to the pool, thus he is carrying over 8h to July).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 17 hours (out of 17 hours allocated).
  • Thorsten Alteholz did 17 hours (out of 17 hours allocated).

DebConf sponsorship

Thanks to the Extended LTS service, Freexian has been able to invest some money in DebConf sponsorship. This year, DebConf attendees should have Debian LTS stickers and a flyer in their welcome bag. And while we were thinking about marketing, we also opted to create a promotional video explaining LTS and Freexian’s offer. This video will premiere at DebConf19!

Evolution of the situation

We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker (now for oldoldstable, as Buster has been released and thus Jessie became oldoldstable) currently lists 41 packages with a known CVE and the dla-needed.txt file has 43 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Worse Than FailureThe Hardware Virus


Jen was a few weeks into her new helpdesk job. Unlike past jobs, she started getting her own support tickets quickly—but a more veteran employee, Stanley, had been tasked with showing her the ropes. He also got notification of Jen's tickets, and they worked on them together. A new ticket had just come in, asking for someone to replace the DVI cable that'd gone missing from Conference Room 3. Such cables were the means by which coworkers connected their laptops to projectors for presentations.

Easy enough. Jen left her cube to head for the hardware "closet"—really, more of a room crammed full of cables, peripherals, and computer parts. On a dusty shelf in a remote corner, she spotted what she was looking for. The coiled cable was a bit grimy with age, but looked serviceable. She picked it up and headed to Stanley's cube, leaning against the threshold when she got there.

"That ticket that just came in? I found the cable they want. I'll go walk it down." Jen held it up and waggled it.

Stanley was seated, facing away from her at first. He swiveled to face her, eyed the cable, then went pale. "Where did you find that?"

"In the closet. What, is it—?"

"I thought they'd been purged." Stanley beckoned her forward. "Get in here!"

Jen inched deeper into the cube. As soon as he could reach it, Stanley snatched the cable out of her hand, threw it into the trash can sitting on the floor beside him, and dumped out his full mug of coffee on it for good measure.

"What the hell are you doing?" Jen blurted.

Stanley looked up at her desperately. "Have you used it already?"

"Uh, no?"

"Thank the gods!" He collapsed back in his swivel chair with relief, then feebly kicked at the trash can. The contents sloshed around inside, but the bin remained upright.

"What's this about?" Jen demanded. "What's wrong with the cable?"

Under the harsh office lighting, Stanley seemed to have aged thirty years. He motioned for Jen to take the empty chair across from his. Once she'd sat down, he continued nervously and quietly. "I don't know if you'll believe me. The powers-that-be would be angry if word were to spread. But, you've seen it. You very nearly fell victim to it. I must relate the tale, no matter how vile."

Jen frowned. "Of what?"

Stanley hesitated. "I need more coffee."

He picked up his mug and walked out, literally leaving Jen at the edge of her seat. She managed to sit back, but her mind was restless, wondering just what had her mentor so upset.

Eventually, Stanley returned with a fresh mug of coffee. Once he'd returned to his chair, he placed the mug on his desk and seemed to forget all about it. With clear reluctance, he focused on Jen. "I don't know where to start. The beginning, I suppose. It fell upon us from out of nowhere. Some say it's the spawn of a Sales meeting; others blame a code review gone horribly wrong. In the end, it matters little. It came alive and spread like fire, leaving destruction and chaos in its wake."

Jen's heart thumped with apprehension. "What? What came alive?"

Stanley's voice dropped to a whisper. "The hardware virus."

"Hardware virus?" Jen repeated, eyes wide.

Stanley glared. "You're going to tell me there's no such thing, but I tell you, I've seen it! The DVI cables ..."

He trailed off helplessly, reclining in his chair. When he straightened and resumed, his demeanor was calmer, but weary.

"At some godforsaken point in space and time, a single pin on one of our DVI cables was irrevocably bent. This was the source of the contagion," he explained. "Whenever the cable was plugged into a laptop, it cracked the plastic composing the laptop's DVI port, contorting it in a way that resisted all mortal attempt at repair. Any time another DVI cable was plugged into that laptop, its pin was bent in just the same way as with the original cable.

"That was how it spread. Cable infected laptop, laptop infected cable, all with vicious speed. There was no hope for the infected. We ... we were forced to round up and replace every single victim. I was knee-deep in the carnage, Jen. I see it in my nightmares. The waste, the despair, the endless reimaging!"

Stanley buried his head in his hands. It was a while before he raised his haunted gaze again. "I don't know how long it took, but it ran its course; the support tickets stopped coming in. Our superiors consider the matter resolved ... but I've never been able to let my guard down." He glanced warily at the trash can, then made eye contact with Jen. "Take no chances with any DVI cables you find within this building. Buy your own, and keep them with you at all times. If you see any more of those—" he pointed an accusing finger at the bin "—don't go near them, don't try taking a paperclip to them. There's everything to lose, and nothing to gain. Do you understand?"

Unable to manage words, Jen nodded instead.

"Good." The haunted expression vanished in favor of grim determination. Stanley stood, then rummaged through a desk drawer loaded with office supplies. He handed Jen a pair of scissors, and armed himself with a brassy letter opener.

"Our job now is to track down the missing cable that resulted in your support ticket," he continued. "If we're lucky, someone's absent-mindedly walked off with it. If we're not, we may find that this is step one in the virus' plan to re-invade. Off we go!"

Jen's mind reeled, but she sprang to her feet and followed Stanley out of the cubicle, telling herself to be ready for anything.


Planet DebianKees Cook: security things in Linux v5.2

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
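Pulling the settings mentioned above together (the option, boot parameter, and sysfs path are the ones named in the text):

```
# build-time kernel config
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y

# kernel command line (needed unless you have direct-mapped memory-side-cache)
page_alloc.shuffle=1

# runtime check
cat /sys/module/page_alloc/parameters/shuffle   # "Y" when active
```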

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid things like Use-After-Free heap grooming, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
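To illustrate the class of bug being hunted down (this example is mine, not from the kernel): once every intentional fall-through is marked, -Wimplicit-fallthrough can flag the accidental ones.

```c
/* Each level deliberately falls through to accumulate the permissions
 * of the levels below it; the marker comments keep the warning quiet. */
enum level { LOW, MEDIUM, HIGH };

static int perms(enum level l)
{
    int p = 0;
    switch (l) {
    case HIGH:
        p |= 4;
        /* fall through */
    case MEDIUM:
        p |= 2;
        /* fall through */
    case LOW:
        p |= 1;
        break;
    }
    return p;
}
```

A `case` that reaches the next label without a `break` or a fall-through marker then becomes a compile-time warning instead of a silent logic bug.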

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.


Krebs on SecurityParty Like a Russian, Carder’s Edition

“It takes a certain kind of man with a certain reputation
To alleviate the cash from a whole entire nation…”

KrebsOnSecurity has seen some creative yet truly bizarre ads for dodgy services in the cybercrime underground, but the following animated advertisement for a popular credit card fraud shop likely takes the cake.

The name of this particular card shop won’t be mentioned here, and its various domain names featured in the video have been pixelated so as not to further promote the online store in question.

But points for knowing your customers, and understanding how to push emotional buttons among a clientele that mostly views America’s financial system as one giant ATM that never seems to run out of cash.

WARNING: Some viewers may find this video disturbing. Also, it is almost certainly Not Safe for Work.

The above commercial is vaguely reminiscent of the slick ads produced for and promoted by convicted Ukrainian credit card fraudster Vladislav “BadB” Horohorin, who was sentenced in 2013 to serve 88 months in prison for his role in the theft of more than $9 million from RBS Worldpay, an Atlanta-based credit card processor. (In February 2017, Horohorin was released and deported from the United States. He now works as a private cybersecurity consultant).

The clip above is loosely based on the 2016 music video, “Party Like a Russian,” produced by British singer-songwriter Robbie Williams.

Tip of the hat to Alex Holden of Hold Security for finding and sharing this video.

TEDApply to be a TED2020 Fellow

Apply to be a TED2020 Fellow

Since launching the TED Fellows program ten years ago, we’ve gotten to know and support some of the brightest, most ambitious thinkers, change-makers and culture-shakers from nearly every discipline and corner of the world. The numbers speak for themselves:

  • 472 Fellows covering a vast array of disciplines, from astrophysics to the arts
  • 96 countries represented
  • More than 1.3 million views per TED Talk given by Fellows (on average)
  • At least 90 new businesses and 46 nonprofits fostered within the program

Whether it’s discovering new galaxies, leading social movements or making waves in environmental conservation, with the support of TED, our Fellows are dedicated to making the world a better place through their innovative work. And you could be one of them.

Apply now to be a TED Fellow by August 27, 2019.

What’s in it for you?

  • The opportunity to give a talk on the TED mainstage
  • Career coaching and speaker training
  • Mentorship, professional development and public relations guidance
  • The opportunity to be part of a diverse, collaborative community of more than 450 thought leaders
  • Participation in the global TED2020 conference in Vancouver, BC

What are the requirements?

  • An idea worth spreading!
  • A completed online application consisting of general biographical information, short essays on your work and three references (It’s short, fun, and it’ll make you think…)
  • You must be at least 18 years old to apply.
  • You must be fluent in English.
  • You must be available to be in Vancouver, BC from April 17 to April 25, 2020.

What do you have to lose?

The deadline to apply is August 27, 2019 at 11:59pm UTC. To learn more about the TED Fellows program and apply, head here. Don’t wait until the last minute! We do not accept late applications. Really.

Planet DebianSteve Kemp: Building a computer - part 2

My previous post on the subject of building a Z80-based computer briefly explained my motivation, and the approach I was going to take.

This post describes my progress so far:

  • On the hardware side, zero progress.
  • On the software-side, lots of fun.

To recap, I expect to wire a Z80 microprocessor to an Arduino (Mega). The Arduino will generate a clock signal which will make the processor "tick". It will also react to read/write attempts that the processor makes to access RAM and I/O devices.

The Z80 has a neat system for requesting I/O, via the IN and OUT instructions, which allow the processor to read/write a single byte to one of 256 I/O ports.

To experiment, and to refresh my memory, I found a Z80 assembler and a Z80 disassembler, both packaged for Debian. I also found a Z80 emulator, which I forked and lightly modified.

With the appropriate tools available I could write some simple code. I implemented two I/O routines in the emulator, one to read a character from STDIN, and one to write to STDOUT:

IN A, (1)   ; Read a character from STDIN, store in A-register.
OUT (1), A  ; Write the character in A-register to STDOUT

With those primitives implemented I wrote a simple script:

;
;  Simple program to upper-case a string
;
org 0
   ; show a prompt.
   ld a, '>'
   out (1), a
start:
   ; read a character
   in a,(1)
   ; eof?
   cp -1
   jp z, quit
   ; is it lower-case?  If not just output it
   cp 'a'
   jp c,output
   cp 'z'+1   ; ('z' itself needs converting too)
   jp nc, output
   ; convert from lower-case to upper-case.  yeah.  math.
   sub a, 32
output:
   ; output the character
   out (1), a
   ; repeat forever.
   jr start
quit:
   ; terminate
   halt

With that written it could be compiled:

 $ z80asm ./sample.z80 -o ./sample.bin

Then I could execute it:

 $ echo "Hello, world" | ./z80emulator ./sample.bin
 Testing "./sample.bin"...
 >HELLO, WORLD

 1150 cycle(s) emulated.
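For reference, the conversion the Z80 program implements corresponds to this little C sketch (the function name is mine): anything outside 'a'..'z' passes through untouched, and subtracting 32 maps lower-case ASCII onto upper-case.

```c
static int z80_upper(int c)
{
    if (c < 'a' || c > 'z')
        return c;        /* not lower-case: output as-is */
    return c - 32;       /* lower-case to upper-case. yeah. math. */
}
```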

And that's where I'll leave it for now. When I have the real hardware I'll hook up some fake RAM containing this program, and code a similar I/O handler to allow reading/writing to the Arduino's serial console. That will allow the same code to run, unchanged. That'd be nice.

I've got a simple Z80-manager written, but since I don't have the chips yet I can only compile-test it. We'll see how well I did soon enough.

Planet DebianJohn Goerzen: Tips for Upgrading to, And Securing, Debian Buster

Wow.  Once again, a Debian release impresses me — a guy who’s been using Debian for more than 20 years.  For the first time I can ever recall, buster not only supported suspend-to-disk out of the box on my laptop, but it did so on an encrypted volume atop LVM.  Very impressive!

For those upgrading from previous releases, I have a few tips to enhance the experience with buster.

AppArmor

AppArmor is a new line of defense against malicious software.  The release notes indicate it’s now enabled by default in buster.  For desktops, I recommend installing apparmor-profiles-extra and apparmor-notify.  The latter will provide an immediate GUI indication when something is blocked by AppArmor, so you can diagnose strange behavior.  You may also need to add yourself to the adm group with adduser username adm.

Security

I recommend installing these packages and taking note of these items, some of which are different in buster:

  • unattended-upgrades will automatically install security updates for you.  New in buster, the default config file will also apply stable updates in addition to security updates.
  • needrestart will detect what processes need a restart after a library update and, optionally, restart them. Beginning in buster, it will not automatically restart them when in noninteractive (unattended-upgrades) mode. This can be changed by editing /etc/needrestart/needrestart.conf (or, better, putting a .conf file in /etc/needrestart/conf.d) and setting $nrconf{restart} = 'a'. Edit: If you have an Intel CPU, installing iucode-tool intel-microcode will let needrestart also check on your CPU microcode.
  • debian-security-support will warn you of gaps in security support for packages you are installing or running.
  • package-update-indicator is useful for desktops that won’t be running unattended-upgrades. I believe Gnome 3 has this built in, but for other desktops, this adds an icon when updates are available.
  • You can harden apt with seccomp.
  • You can enable UEFI secure boot.
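As a concrete sketch of the needrestart change mentioned above (the drop-in filename here is my own choice; the conf.d location and the $nrconf{restart} setting come from needrestart itself, whose config files are Perl snippets):

```shell
# Prefer a conf.d drop-in over editing needrestart.conf directly
sudo mkdir -p /etc/needrestart/conf.d

# 'a' = restart affected services automatically, even in
# noninteractive (unattended-upgrades) mode
printf '%s\n' "\$nrconf{restart} = 'a';" \
  | sudo tee /etc/needrestart/conf.d/50-autorestart.conf
```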

Tuning

If you hadn’t noticed, many of these items are links into the buster release notes. It’s a good document to read over, even for a new buster install.

Planet DebianJonathan Dowland: Nadine Shah

ticket and cuttings from gig


On July 8 I went to see Nadine Shah perform at the Whitley Bay Playhouse as part of the Mouth Of The Tyne Festival. It was a fantastic gig!

I first saw Nadine Shah — as a solo artist — supporting the Futureheads in the same venue, back in 2013. At that point she had either just released her debut album, Love Your Dum and Mad, or was just about to (it came out in the same month), but this was the first we had heard of her. If memory serves, she handled keyboards herself, backed by a small band (possibly just a drummer, likely co-writer Ben Hillier). It's a pretty small venue. My friends and I loved that show, and as we talked about how good it was and what it reminded us of (I think we said things like "that was nice and gothy, I haven't heard stuff like that for ages"), we hadn't realised that she was sat right behind us, with a grin on her face!

Since then she's put out two more albums: Fast Food, which got a huge amount of airplay on 6 Music (and was the point at which I bought into her), and the Mercury-nominated Holiday Destination, a really compelling evolution of her art and a strong political statement.

Kinevil 7 inch


It turns out, though, that I think we saw her before that, too: A local band called Kinevil (now disbanded) supported Ladytron at Digital in Newcastle in 2008. I happen to have their single "Everything's Gone Black" on vinyl (here it is on bandcamp) and noticed years later that the singer is credited as Nadine Shar.

This year's gig was my first gig of 2019, and it was a real blast. The sound mix was fantastic, and loud. The performance was very confident: Nadine now exclusively sings, and all the instrument work is done by her band, which is now five-strong. The saxophonist made some incredible noises that reminded me of some synth stuff from mid-90s Nine Inch Nails records. I've never heard a saxophone played that way before. Apparently Shah has been on hiatus for a while for personal reasons and this was her comeback gig. Under those circumstances, it was very impressive. I hope the reception was what she hoped for.

Worse Than FailureAnnouncements: Meetup in Kansas City: Dinner and a Pint after KCDC

The Kansas City Developer Conference is this week, followed by PubConf. Between these two events on Friday evening is plenty of time for a TDWTF dinner, and that's exactly what we're planning!

If you find yourself in Kansas City, Missouri this Friday, for KCDC, PubConf, or perhaps because you live here, please come out to the Dubliner at 5:30 PM for dinner and a pint. I'll be there along with Martine Dowden and some TDWTF swag to give away. We'll talk software, discuss what we took away from the conference, and can head over to PubConf together.

If you would like to join us at 5:30 PM CT on Friday, July 19 please contact me at @mrdowden (Twitter) or drop me an email: Michael (at) Andromeda16.com

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Worse Than FailureCodeSOD: Nothing Direct About directAddCartEntry

It’s old hat, but every function, every class, every code unit you write should have a single, well-defined purpose. “Do one thing, and do it well,” as it were.

Of course, sometimes, it’s easier said than done, and mindlessly following that advice can lead to premature abstraction, and then you’ll have quite a mess on your hands. Still, it’s good advice, and a great design goal, even if you don’t head straight there.

Marigold found some code which, well, has a long way to go. A looooooong way to go.

directAddCartEntry = function (matnr, menge,updateByField,m,redu){
 
    var that=this;
    var produkt=new Object;
    var target = document.getElementById('content');
    spinner.spin(target);
   
    if (produkt.BACK_PREIS!=undefined && produkt.BACK_PREIS!=""){
        produkt.PREIS = produkt.BACK_PREIS  
    }  
    var Kundennummer = app.getModel("userData").getData().KUNDENNR;
    var Land  = app.getModel("userData").getData().LAND;       
    var Euland = app.getModel("userData").getData().ISTEULAND;
    var Kundennummer_u_Euland = Kundennummer+"|"+Euland+"|"+Land;
    var verpackungseinheit = "";
    sap.ui.getCore().byId("app").getModel("kategorie").read("/PRODUKT_SET(MATNR='"+matnr+"',VKORG='"+VKORG+"',SPRAS='de',KAMPAGNE='"+Kundennummer_u_Euland+"',VTWEG='10')?$expand=MERKMAL",null,null,false,function(oData,response){
        produkt=oData;     
        var mindestbestellmenge = produkt.BOMRABATT;
        verpackungseinheit =  produkt.VERPACKUNGSEINHEIT
        if (mindestbestellmenge!="0.000 "&& mindestbestellmenge!=""  &&  mindestbestellmenge != undefined){        
            mindestbestellmenge=mindestbestellmenge.split(".")[0]
            var mindestbestellmenge = parseInt(mindestbestellmenge);
            produkt.BOMRABATT=mindestbestellmenge          
            if (menge != mindestbestellmenge && vielfacher(menge,mindestbestellmenge)==false && redu!=true){
                var dialog = new sap.m.Dialog({
                    showHeader: false,
                    content: [
                        new sap.ui.layout.HorizontalLayout({
                            content: [
                                new sap.m.Image({
                                    src: "Image/helia_small.png",
                                }),
                                new sap.m.Text({
                                    //text: sap.ui.getCore().byId("app").getModel("i18n").getProperty("dialogUsersonlyFooter")
                                }).addStyleClass('dialog__usersonlySmall'),
                            ]
                        }).addStyleClass('dialog__usersonlyHeader'),
                   
                        new sap.ui.layout.Grid({
                            hSpacing: 1,
                            vSpacing: 1,
                            layoutData: new sap.ui.layout.GridData({
                                span: "L12 M12 S12",
                            }),
                            content: [
                                new sap.ui.core.HTML({ 
                                    layoutData: new sap.ui.layout.GridData({
                                        span: "L12 M12 S12",
                                    }),
                                    content: verpackungseinheit=="ZS" ? sap.ui.getCore().byId("navContainer").getModel("i18n").getProperty("infozigarette") : sap.ui.getCore().byId("navContainer").getModel("i18n").getResourceBundle().getText("infonormal", produkt.BOMRABATT)
                                }).addStyleClass("dialog__usersonlyTitle"),
                               
                                new sap.m.Select("dialogSelect",{
                                    layoutData: new sap.ui.layout.GridData({
                                        span: "L2 M2 S2",
                                    }),
//                                  items: productListItems2("zweier","","","")
                                    items: productListItems4(null,null,produkt.MAXMENGE,produkt.BOMRABATT)
                                })/*.attachBrowserEvent(
                    "click",function(evt){
                      var _numProductsSelected = parseInt( this.getSelectedKey() );
                      var _i  = +_numProductsSelected;
                      var plus=+_i;
                      this.destroyItems();
                      for (_i;_i<=999;_i=_i+plus){
                        // XXX
                        this.addItem(new sap.ui.core.ListItem({ text: _i,  key: _i }))                                  
                      }
                      this.setSelectedKey( _numProductsSelected );
                    }
                ).attachBrowserEvent(
                    "tap",function(evt){
                      var _numProductsSelected = parseInt( this.getSelectedKey() );
                      var _i  = +_numProductsSelected;
                      var plus=+_i;
                      this.destroyItems();
                      for (_i;_i<=999;_i=_i+plus){
                        // XXX
                        this.addItem(new sap.ui.core.ListItem({ text: _i,  key: _i }))                                  
                      }
                      this.setSelectedKey( _numProductsSelected );
                    }
                )*/,
                                new sap.m.Button({
                                    text: "OK",
                                    press: function(oEvent) {
                                        var new_menge=sap.ui.getCore().byId("dialogSelect").getSelectedItem().mProperties.text
                                        dialog.destroy();
                                        directAddCartEntry  (matnr, new_menge,updateByField,m)
                                        simulateOrder()
                                        directController.setPreiseundRabatte();                        
                                    },
                                    layoutData: new sap.ui.layout.GridData({
                                        span: "L1 M1 S1",
                                    })
                                }),                        
                               
                                new sap.m.Button({
                                    text: app.getModel("i18n").getProperty("pwdabort"),
                                    press: function() {
                                        dialog.destroy();
                                    },
                                    layoutData: new sap.ui.layout.GridData({
                                        span: "L2 M2 S2",
                                    })
                                }),
                            ]
                        }),
                        //.addStyleClass('dialog__usersonlyChoice'),
 
                   
                       
                    ]
                }).addStyleClass('dialog__usersonlyChoice');
                dialog.open()
                spinner.stop();
                return;
               
            }
            else{
              if (istmindestmenge (oData,"","call")==true && redu!=true){
//              var menge2=menge/2 
                var menge2=menge/oData.BOMRABATT;
                directAddCartEntry (matnr, menge2,updateByField,m,true)
                return
               
              }
              else{
                produkt.MENGE = menge;
              }
             
            }          
           
        }else{
          if (istmindestmenge (oData,"","call")==true && redu!=true){
//          var menge2=menge/2  
            menge/oData.BOMRABATT;
        directAddCartEntry (matnr, menge2,updateByField,m,true)
            return
      }
      else{
        produkt.MENGE = menge;
      }
        }          
        var Preis;
        var model = m;
        var data = model.getData();
        var POSITIONEN = data.WK_POSITIONEN;
        var POSITION = null;
        var POSITIONIndex = -1;
        if (produkt.BACK_PREIS!=undefined && produkt.BACK_PREIS!=""){
            produkt.PREIS = produkt.BACK_PREIS  
        }
        //Position suchen
        for (var zxy = 0 ; zxy < POSITIONEN.length ; zxy ++) {
            if (POSITIONEN[zxy].MATNR === produkt.MATNR) {
                POSITION = POSITIONEN[zxy];
                POSITIONIndex = zxy;
                break;
            }
        }
        //Wenn Position nicht gefunden, neu hinzuf?gen...
        if (POSITION === null) {
            produkt.PAKETPREIS = produkt.PREIS;
            produkt.SPARPREIS = 0;
           
            //Aktionsstaffelpreise und Staffelpreise ber?cksichtigen           
             var Preis = parseFloat(produkt.PREIS).toFixed(2);
 
             if (produkt.STAFFELPREIS3!=""){
                if( (parseFloat(produkt.MENGE) >= parseFloat(produkt.STAFFELPREIS6)) && ( parseFloat(produkt.STAFFELPREIS3) < parseFloat(produkt.PREIS) ) ) {
                    produkt.PAKETPREIS  = produkt.STAFFELPREIS3;
                    produkt.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.STAFFELPREIS3));
                    var Preis = produkt.STAFFELPREIS3
                }
            }
            if (produkt.AKTIONSPREIS3!=""){
                if( (parseFloat(produkt.MENGE) >= parseFloat(produkt.AKTIONSPREIS6)) && ( parseFloat(produkt.AKTIONSPREIS3) < parseFloat(produkt.PREIS) ) ) {
                    produkt.PAKETPREIS  = produkt.AKTIONSPREIS3;
                    produkt.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.AKTIONSPREIS3));
                    var Preis = produkt.AKTIONSPREIS3
                }              
            }
            if (parseFloat(produkt.AKTIONSPREIS)< parseFloat(produkt.PREIS) && produkt.AKTIONSPREIS!=""){
                 var Preis = parseFloat(produkt.AKTIONSPREIS).toFixed(2);              
            }           // Vergleich Staffelpreis zu Aktionspreis, VRU 21.06.2016
            if ( (parseFloat(produkt.MENGE) >= parseFloat(produkt.AKTIONSPREIS6)) && (parseFloat(produkt.AKTIONSPREIS3)<parseFloat(produkt.STAFFELPREIS3)) && produkt.AKTIONSPREIS3!=""){
                var Preis = parseFloat(produkt.AKTIONSPREIS3).toFixed(2);  
            }
           
//          if( (produkt.MENGE >= 6) && ( parseFloat(produkt.AKTIONSPREIS6) < parseFloat(produkt.PREIS) ) ) {
//              produkt.PAKETPREIS  = produkt.AKTIONSPREIS6;
//              produkt.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.AKTIONSPREIS6));
//          }
//          if( (produkt.MENGE >= 12) && ( parseFloat(produkt.AKTIONSPREIS12) < parseFloat(produkt.PREIS) ) ) {
//              produkt.PAKETPREIS  = produkt.AKTIONSPREIS12;
//              produkt.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.AKTIONSPREIS12));
//          }
            produkt.GESAMTPREIS = parseFloat(Preis * menge).toFixed(2);
//          produkt.GESAMTPREIS = parseFloat(produkt.PAKETPREIS * produkt.MENGE).toFixed(2);
            produkt.GESAMTSPARPREIS = produkt.SPARPREIS * produkt.MENGE;
       
            produkt.MENGE=parseInt(produkt.MENGE);
          if (redu==true){
//        produkt.MENGE=produkt.MENGE*2
        produkt.MENGE=produkt.MENGE*produkt.BOMRABATT
      }
            // create new entry
            POSITION = {
                    MATNR:produkt.MATNR,
                    MAKTX:produkt.MAKTX,
                    MENGE:produkt.MENGE,
                    MAXMENGE:produkt.MAXMENGE,
                    KATTEXTKURZ:produkt.KATTEXTKURZ,
                    IMG_BIG:produkt.IMG_BIG.replace(locStatic,""),
                    IMG_THUMB:produkt.IMG_THUMB.replace(locStatic,""),
                    BACK_PREIS: produkt.PREIS,
                    PREIS:Preis,
                    GESAMTPREIS:parseFloat(produkt.GESAMTPREIS).toFixed(2),
                    STAFFELPREIS:produkt.STAFFELPREIS,
                    AKTIONSPREIS:produkt.AKTIONSPREIS,
                    WAEHRUNG:produkt.WAEHRUNG,
                    PAKETPREIS:produkt.PAKETPREIS,
                    SPARPREIS:produkt.SPARPREIS,
                    GESAMTSPARPREIS:produkt.GESAMTSPARPREIS,
                    AKTIONSPREIS3 : produkt.AKTIONSPREIS3,
                    AKTIONSPREIS6 : produkt.AKTIONSPREIS6,     
                    STAFFELPREIS3 : produkt.STAFFELPREIS3,
                    STAFFELPREIS6 : produkt.STAFFELPREIS6,
                    BOMRABATT: produkt.BOMRABATT,
                    VERPACKUNGSEINHEIT:produkt.VERPACKUNGSEINHEIT
            };
            data.WK_POSITIONEN[data.WK_POSITIONEN.length] = POSITION;
        } else {
        //...Ansonsten Menge aendern
            if(updateByField){
                POSITION.MENGE=parseInt(POSITION.MENGE) + parseInt(produkt.MENGE)
                if (redu!=undefined && redu ==true){
                  POSITION.MENGE = POSITION.MENGE/2 + parseInt(produkt.MENGE)/2  
                }
                var Preis = parseFloat(produkt.PREIS).toFixed(2);              
                if (produkt.STAFFELPREIS3!=""){                    
                    if( (parseFloat(POSITION.MENGE) >= parseFloat(produkt.STAFFELPREIS6)) && ( parseFloat(produkt.STAFFELPREIS3) < parseFloat(produkt.PREIS) ) ) {
                        POSITION.PAKETPREIS = produkt.STAFFELPREIS3;
                        POSITION.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.STAFFELPREIS3));
                        var Preis = produkt.STAFFELPREIS3
                    }
                }
                if (produkt.AKTIONSPREIS3!=""){                
                    if( (parseFloat(POSITION.MENGE) >= parseFloat(produkt.AKTIONSPREIS6)) && ( parseFloat(produkt.AKTIONSPREIS3) < parseFloat(produkt.PREIS) ) ) {
                        POSITION.PAKETPREIS = produkt.AKTIONSPREIS3;
                        POSITION.SPARPREIS = ( parseFloat(produkt.PREIS) - parseFloat(produkt.AKTIONSPREIS3));
                        var Preis = produkt.AKTIONSPREIS3
                    }              
                }
                if (parseFloat(produkt.AKTIONSPREIS)< parseFloat(produkt.PREIS) && produkt.AKTIONSPREIS!=""){
                     var Preis = parseFloat(produkt.AKTIONSPREIS).toFixed(2);              
                }
                // Vergleich Staffelpreis zu Aktionspreis, VRU 21.06.2016
                if ( (parseFloat(POSITION.MENGE) >= parseFloat(produkt.AKTIONSPREIS6)) && (parseFloat(produkt.AKTIONSPREIS3)<parseFloat(produkt.STAFFELPREIS3)) && produkt.AKTIONSPREIS3!=""){
                    var Preis = parseFloat(produkt.AKTIONSPREIS3).toFixed(2);  
                }
                POSITION.AKTIONSPREIS3 = produkt.AKTIONSPREIS3
                POSITION.AKTIONSPREIS6 = produkt.AKTIONSPREIS6     
                POSITION.STAFFELPREIS3 = produkt.STAFFELPREIS3
                POSITION.STAFFELPREIS6 = produkt.STAFFELPREIS6             
                POSITION.BACK_PREIS= produkt.PREIS,            
                POSITION.PREIS = parseFloat(Preis).toFixed(2);
                POSITION.AKTIONSPREIS6 = produkt.AKTIONSPREIS6;
                POSITION.AKTIONSPREIS12 = produkt.AKTIONSPREIS12;
                POSITION.PAKETPREIS = POSITION.PREIS;
                POSITION.SPARPREIS = 0;
//              if( (POSITION.MENGE >= 6) && ( parseFloat(POSITION.AKTIONSPREIS6) < parseFloat(POSITION.PREIS) ) ) {
//                  POSITION.PAKETPREIS = POSITION.AKTIONSPREIS6;  
//                  POSITION.SPARPREIS = ( parseFloat(POSITION.PREIS) - parseFloat(POSITION.AKTIONSPREIS6));
//              }
//              if( (POSITION.MENGE >= 12) && ( parseFloat(POSITION.AKTIONSPREIS12) < parseFloat(POSITION.PREIS) ) ) {
//                  POSITION.PAKETPREIS = POSITION.AKTIONSPREIS12; 
//                  POSITION.SPARPREIS = ( parseFloat(POSITION.PREIS) - parseFloat(POSITION.AKTIONSPREIS12));
//              }
                POSITION.GESAMTPREIS = parseFloat(POSITION.MENGE*POSITION.PAKETPREIS).toFixed(2);
                POSITION.GESAMTSPARPREIS = POSITION.MENGE*POSITION.SPARPREIS;
                POSITION.STAFFELPREIS = produkt.STAFFELPREIS;
                POSITION.AKTIONSPREIS = produkt.AKTIONSPREIS;
                POSITION.WAEHRUNG = produkt.WAEHRUNG;
                if (redu!=undefined && redu ==true){
//          POSITION.MENGE = POSITION.MENGE*2  
          POSITION.MENGE = POSITION.MENGE*POSITION.BOMRABATT
        }
               
            }else{
            }
            POSITION.AKTIONSPREIS3 = produkt.AKTIONSPREIS3
            POSITION.AKTIONSPREIS6 = produkt.AKTIONSPREIS6     
            POSITION.STAFFELPREIS3 = produkt.STAFFELPREIS3
            POSITION.STAFFELPREIS6 = produkt.STAFFELPREIS6
            POSITION.BOMRABATT = produkt.BOMRABATT
           
            POSITION.BACK_PREIS= produkt.PREIS,
            POSITION.PREIS = Preis;
            POSITION.GESAMTPREIS= toFixed(POSITION.GESAMTPREIS,2);
            POSITION.GESAMTSPARPREIS= toFixed(POSITION.GESAMTSPARPREIS,2);
            POSITIONEN[POSITIONIndex] = POSITION;
            data.WK_POSITIONEN = POSITIONEN;
        }
        // Gesamtpreis neu berechnen
        data.GESAMTPREIS = 0;
        data.GESAMTMENGE = 0;
        data.ZWISCHENSUMME = 0;
        data.GESAMTSPARPREIS = 0;      
        for (var xxy = 0 ; xxy < data.WK_POSITIONEN.length ; xxy ++) {
            data.GESAMTPREIS += parseFloat(data.WK_POSITIONEN[xxy].GESAMTPREIS);
            if (istmindestmenge (data.WK_POSITIONEN[xxy],"","call")==true && redu==true){
              data.GESAMTMENGE += parseInt(data.WK_POSITIONEN[xxy].MENGE);
            }else{
               data.GESAMTMENGE += parseInt(data.WK_POSITIONEN[xxy].MENGE);
            }
            //hier Gesamtmenge wieder erhoehen
           
           
            data.ZWISCHENSUMME += parseFloat(data.WK_POSITIONEN[xxy].GESAMTPREIS);
            data.GESAMTSPARPREIS += parseFloat(data.WK_POSITIONEN[xxy].GESAMTSPARPREIS);           
        }
        data.GESAMTPREIS = parseFloat(data.GESAMTPREIS).toFixed(2)
        data.GESAMTSPARPREIS = parseFloat(data.GESAMTSPARPREIS).toFixed(2);
//      data.INTERNETRABATT = parseFloat(menge) + parseFloat(menge)
//      data.GESAMTPREIS= parseFloat(data.GESAMTPREIS).toFixed(2);
        data.ENDPREIS=parseFloat(data.GESAMTPREIS).toFixed(2) - parseFloat(data.INTERNETRABATT).toFixed(2);
        data.ZWISCHENSUMME= toFixed(data.ZWISCHENSUMME,2);
        data.GESAMTSPARPREIS= toFixed(data.GESAMTSPARPREIS,2);
        data.ERSATZLIEFERUNG = data.ERSATZLIEFERUNG;
        data.LIEFERART=data.LIEFERART;
        data.LIEFERDATUM = data.LIEFERDATUM;
        data.KUNDENNACHRICHT=data.KUNDENNACHRICHT;
//      if(parseFloat(oData.INTERNETRABATT)>0){
//          //sap.ui.getCore().byId("app").getModel("warenkorb").setProperty("/INTERNETRABATT",parseFloat(oData.INTERNETRABATT).toFixed(2));
//          sap.ui.getCore().byId("app").getModel("warenkorb").setProperty("/ENDPREIS",(parseFloat(sap.ui.getCore().byId("app").getModel("warenkorb").getProperty("/GESAMTPREIS")) - parseFloat(oData.INTERNETRABATT)).toFixed(2));
//      }
//      // Model updaten
        model.setData(data,"warenkorb");
       
        if($.cookie("cookieUser")!=undefined){
            setBackendWK(sap.ui.getCore().byId("app").getModel("kategorie"),sap.ui.getCore().byId("app").getModel("userData").oData.KUNDENNR);
        }else{
            setCookieCart(model);
        }
        if (redu==true){
//        menge=menge*2;
          menge=menge*produkt.BOMRABATT;
        }
        showToastMessage(menge,produkt.MAKTX,produkt.MERKMAL.results);
        //Speichern des Warenkorbes in einen Cookie
        simulateOrder();
        spinner.stop();
    },function(oError){
        spinner.stop();
        messageToast(app.getModel("i18n").getProperty("keineWare"));
    });
};

Marigold adds: “I have no words for this. Make something up. I don’t care.”

It isn’t about what this code does so much as the sheer mass of it: the weight of 350+ lines in one gigantic method that seems to do everything makes me want to do nothing but eat a box of “einen Cookies” in one sitting.



CryptogramZoom Vulnerability

The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer's camera.

It's a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user's permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you've ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install 'feature' continues to work to this day.

Zoom didn't take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a 'quick fix' Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom's planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the 'quick fix' solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

Planet DebianHolger Levsen: 20190716-wanna-work-on-lts

Wanna work on Debian LTS (and get funded)?

If you are in Curitiba and interested in working on Debian LTS (and getting paid for that work), please come and talk to me; Debian LTS is still looking for more contributors! Also, if you want a bigger challenge, extended LTS needs more contributors too, though I'd suggest you start with regular LTS ;)

On Thursday, July 25th, there will also be a talk titled "Debian LTS, the good, the bad and the better" where we plan to present what we think works nicely and what doesn't work so nicely yet and where we also want to gather your wishes and requests.

If you cannot make it to Curitiba, there will be a video stream (and the possibility to ask questions via IRC), and you can always send me an email or ping me on IRC if you want to work on LTS.

Krebs on SecurityMeet the World’s Biggest ‘Bulletproof’ Hoster

For at least the past decade, a computer crook variously known as “Yalishanda,” “Downlow” and “Stas_vl” has run one of the most popular “bulletproof” Web hosting services catering to a vast array of phishing sites, cybercrime forums and malware download servers. What follows are a series of clues that point to the likely real-life identity of a Russian man who appears responsible for enabling a ridiculous amount of cybercriminal activity on the Internet today.

Image: Intel471

KrebsOnSecurity began this research after reading a new academic paper on the challenges involved in dismantling or disrupting bulletproof hosting services, which are so called because they can be depended upon to ignore abuse complaints and subpoenas from law enforcement organizations. We’ll get to that paper in a moment, but for now I mention it because it prompted me to check and see if one of the more infamous bulletproof hosters from a decade ago was still in operation.

Sure enough, I found that Yalishanda was actively advertising on cybercrime forums, and that his infrastructure was being used to host hundreds of dodgy sites. Those include a large number of cybercrime forums and stolen credit card shops, ransomware download sites, Magecart-related infrastructure, and a metric boatload of phishing Web sites mimicking dozens of retailers, banks and various government Web site portals.

I first encountered Yalishanda back in 2010, after writing about “Fizot,” the nickname used by another miscreant who helped customers anonymize their cybercrime traffic by routing it through a global network of Microsoft Windows computers infected with a powerful malware strain called TDSS.

After that Fizot story got picked up internationally, KrebsOnSecurity heard from a source who suggested that Yalishanda and Fizot shared some of the same infrastructure.

In particular, the source pointed to a domain that was live at the time called mo0be-world[.]com, which was registered in 2010 to an Aleksandr Volosovyk at the email address stas_vl@mail.ru. Now, normally cybercriminals are not in the habit of using their real names in domain name registration records, particularly domains that are to be used for illegal or nefarious purposes. But for whatever reason, that is exactly what Mr. Volosovyk appears to have done.

WHO IS YALISHANDA?

The one or two domain names registered to Aleksandr Volosovyk and that mail.ru address state that he resides in Vladivostok, which is a major Pacific port city in Russia that is close to the borders with China and North Korea. The nickname Yalishanda means “Alexander” in Mandarin (亚历山大).

Here’s a snippet from one of Yalishanda’s advertisements to a cybercrime forum in 2011, when he was running a bulletproof service under the domain real-hosting[.]biz:

-Based in Asia and Europe.
-It is allowed to host: ordinary sites, doorway pages, satellites, codecs, adware, tds, warez, pharma, spyware, exploits, zeus, IRC, etc.
-Passive SPAM is allowed (you can spam sites that are hosted by us).
-Web spam is allowed (Hrumer, A-Poster ….)

-Forbidden: Any outgoing Email spam, DP, porn, phishing (exclude phishing email, social networks)

There is a server with instant activation under botnets (zeus) and so on. The prices will pleasantly please you! The price depends on the specific content!!!!

Yalishanda would re-brand and market his pricey bulletproof hosting services under a variety of nicknames and cybercrime forums over the years, including one particularly long-lived abuse-friendly project aptly named abushost[.]ru.

In a talk given at the Black Hat security conference in 2017, researchers from Cisco and cyber intelligence firm Intel 471 labeled Yalishanda as one of the “top tier” bulletproof hosting providers worldwide, noting that in just one 90-day period in 2017 his infrastructure was seen hosting sites tied to some of the most advanced malware contagions at the time, including the Dridex and Zeus banking trojans, as well as a slew of ransomware operations.

“Any of the actors that can afford his services are somewhat more sophisticated than say the bottom feeders that make up the majority of the actors in the underground,” said Jason Passwaters, Intel 471’s chief operating officer. “Bulletproof hosting is probably the biggest enabling service that you find in the underground. If there’s any one group operation or actor that touches more cybercriminals, it’s the bulletproof hosters.”

Passwaters told Black Hat attendees that Intel471 wasn’t convinced Alex was Yalishanda’s real name. I circled back with Intel 471 this week to ask about their ongoing research into this individual, and they confided that they knew at the time Yalishanda was in fact Alexander Volosovyk, but simply didn’t want to state his real name in a public setting.

KrebsOnSecurity uncovered strong evidence to support a similar conclusion. In 2010, this author received a massive data dump from a source that had hacked into or otherwise absconded with more than four years of email records from ChronoPay — at the time a major Russian online payment provider whose CEO and co-founders were the chief subjects of my 2014 book, Spam Nation: The Inside Story of Organized Cybercrime.

Querying those records on Yalishanda’s primary email address — stas_vl@mail.ru — reveals that this individual in 2010 sought payment processing services from ChronoPay for a business he was running which sold counterfeit designer watches.

As part of his application for service, the person using that email address forwarded six documents to ChronoPay managers, including business incorporation and banking records for companies he owned in China, as well as a full scan of his Russian passport.

That passport, pictured below, indicates that Yalishanda’s real name is Alexander Alexandrovich Volosovik. The document shows he was born in Ukraine and is approximately 36 years old.

The passport for Alexander Volosovik, a.k.a. “Yalishanda,” a major operator of bulletproof hosting services.

According to Intel 471, Yalishanda lived in Beijing prior to establishing a residence in Vladivostok (that passport above was issued by the Russian embassy in Beijing). The company says he moved to St. Petersburg, Russia approximately 18 months ago.

His current bulletproof hosting service is called Media Land LLC. This finding is supported by documents maintained by Rusprofile.ru, which states that an Alexander Volosovik is indeed the director of a St. Petersburg company by the same name.

ARMOR-PIERCING BULLETS?

Bulletproof hosting administrators operating from within Russia probably are not going to get taken down or arrested, provided they remain within that country (or perhaps within the confines of the former republics of the Soviet Union, known as the Commonwealth of Independent States).

That’s doubly so for bulletproof operators who are careful to follow the letter of the law in those regions — i.e., setting up official companies that are required to report semi-regularly on various aspects of their business, as Mr. Volosovik clearly has done.

However, occasionally big-time bulletproof hosters from those CIS countries do get disrupted and/or apprehended. On July 11, law enforcement officials in Ukraine announced they’d conducted 29 searches and detained two individuals in connection with a sprawling bulletproof hosting operation.

The press release from the Ukrainian prosecutor general’s office doesn’t name the individuals arrested, but The Associated Press reports that one of them was Mikhail Rytikov, a man U.S. authorities say was a well-known bulletproof hoster who operated under the nickname “AbdAllah.”

Servers allegedly tied to AbdAllah’s bulletproof hosting network. Image: Gp.gov.ua.

In 2015, the U.S. Justice Department named Rytikov as a key infrastructure provider for two Russian hackers, Vladimir Drinkman and Alexandr Kalinin, in a cybercrime spree the government called the largest known data breach at the time.

According to the Justice Department, Drinkman and his co-defendants were responsible for hacks and digital intrusions against NASDAQ, 7-Eleven, Carrefour, JCP, Hannaford, Heartland, Wet Seal, Commidea, Dexia, JetBlue, Dow Jones, Euronet, Visa Jordan, Global Payment, Diners Singapore and Ingenicard.

Whether AbdAllah ever really faces justice for his alleged crimes remains to be seen. Ukraine does not extradite its citizens, as U.S. authorities have requested in this case. And we have seen time and again how major cybercriminals get raided and detained by local and federal authorities there, only to quickly re-emerge and resume operations shortly thereafter, while the prosecution against them goes nowhere.

Some examples of this include several Ukrainian men arrested in 2010 and accused of running an international crime and money laundering syndicate that used a custom version of the Zeus trojan to siphon tens of millions of dollars from hacked small businesses in the U.S. and Europe. To my knowledge, none of the Ukrainian men that formed the core of that operation were ever prosecuted, reportedly because they were connected to influential figures in the Ukrainian government and law enforcement.

Intel 471’s Passwaters said something similar happened in December 2016, when authorities in the U.S., U.K. and Europe dismantled Avalanche, a distributed, cloud-hosting network that was rented out as a bulletproof hosting enterprise for countless malware and phishing attacks.

Prior to that takedown, Passwaters said, an individual using the nickname “Sosweet,” who was connected to another bulletproof hosting operation targeted around the same time as Avalanche, somehow got a tip about the impending raid.

“Sosweet was raided in December right before Avalanche was taken down, [and] we know that he was tipped off because of corruption [because] 24 hours later the guy was back in service and has all his stuff back up,” Passwaters said.

The same also appears to be true for several Ukrainian men arrested in 2011 on suspicion of building and disseminating Conficker, a malware strain that infected millions of computers worldwide and prompted an unprecedented global response from the security industry.

So if a majority of bulletproof hosting businesses operate primarily out of countries where the rule of law is not strong and/or where corruption is endemic, is there any hope for disrupting these dodgy businesses?

Here we come full circle to the academic report mentioned briefly at the top of this story: The answer seems to be — like most things related to cybercrime — “maybe,” provided the focus is on attempting to interfere with their ability to profit from such activities.

That paper, titled Platforms in Everything: Analyzing Ground-Truth Data on the Anatomy and Economics of Bulletproof Hosting, was authored by researchers at New York University, Delft University of Technology, King Saud University and the Dutch National High-Tech Crimes Unit. Unfortunately, it has not yet been released publicly, and KrebsOnSecurity does not have permission yet to publish it.

The study examined the day-to-day operations of MaxiDed, a bulletproof hosting operation based in The Netherlands that was dismantled last summer after authorities seized its servers. The paper’s core findings suggest that because profit margins for bulletproof hosting (BPH) operations are generally very thin, even tiny disruptions can quickly push these businesses into the red.

“We demonstrate the BPH landscape to have further shifted from agile resellers towards marketplace platforms with an oversupply of resources originating from hundreds of legitimate upstream hosting providers,” the researchers wrote. “We find the BPH provider to have few choke points in the supply chain amenable to intervention, though profit margins are very slim, so even a marginal increase in operating costs might already have repercussions that render the business unsustainable.”

Worse Than FailureCodeSOD: Brütäl Glöbs

Noam and a few friends decided it was time for them to launch their own product. They were young, optimistic about their careers, and had some opinions about the best way to handle some basic network monitoring and scanning tasks. So they iterated on the idea a few times, until one day the program just started hanging. At first, Noam thought it had locked up entirely, but after walking away from the machine for a few minutes in frustration, he discovered that it was actually just running really, really slowly.

After a little investigation, he tracked down the problem to the function responsible for checking if an IP matched a filter. That filter could contain globs, which made things a bit tricky, but his partner had some ideas about how best to handle them.

def ip_check(ip, rule):
    ret_value = False # Return Value
    if ip == rule['host']: # Compare against rule
        ret_value = True
    elif '*' in rule['host']: # Handle wildcards
        mask = rule['host'].split('.')
        length = mask.count('*')
        final = []
        for subset in itertools.permutations(range(256), length):
            final.append(list(subset))
        for item in final:
            address = rule['host'].split('.')
            for index in range(length):
                address[address.index('*')] = str(item[index])
            address = '.'.join(address)
            if address == ip:
                ret_value = True
    return ret_value

This code takes the long way around.

We start with for subset in itertools.permutations(range(256), length):. itertools.permutations does exactly what you think: in this case, it creates every possible permutation of the numbers in the range 0–255, taken length at a time, where length is the number of wildcards. So, for example, 10.1.*.* is a mere 65,280 entries. *.*.*.*, which is what Noam was doing when testing, is a lot more. 4,195,023,360 entries, to be exact.
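Those figures are easy to verify: itertools.permutations produces ordered selections without repetition, so the counts are falling factorials. (This also means any address that repeats an octet value in the wildcard positions, like 10.1.7.7, can never appear in the output at all.) A quick check using Python's math.perm:

```python
from math import perm  # falling factorial: perm(n, k) == n! / (n - k)!

# One pair of wildcards, e.g. 10.1.*.*: 256 * 255 ordered pairs of distinct octets
assert perm(256, 2) == 65_280

# Four wildcards, *.*.*.*: 256 * 255 * 254 * 253 ordered 4-tuples
assert perm(256, 4) == 4_195_023_360
```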

Then we iterate across every possible combination to put them into the final list. The permutations method is smart: it lazily evaluates the permutations, calculating the next one only when you need it. As you can see, Python does allow you to iterate across it directly. So we don’t need the final variable at all; we could have simply done for item in itertools.permutations(…) and that would have been fine. Well, not fine, none of this is fine.

So, we populate a list with every possible permutation, then we iterate across every permutation. We incorrectly slam the permuted values into the test string, and if that test string matches our IP, we set the ret_value to True. And then we keep looping. This block doesn’t even take the simplest optimization of simply quitting when it finds what it’s looking for.

Noam rewrote this function, replacing it with a much simpler 3-line function using built-in methods. Then Noam went on to have a long conversation with his team about how something like this happened.
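The article doesn't show Noam's actual rewrite, but a per-octet comparison along these lines (a sketch, not his real code) does the job without enumerating anything:

```python
def ip_check(ip, rule):
    """Match an IPv4 address against a dotted pattern where '*' matches any single octet."""
    ip_parts = ip.split('.')
    rule_parts = rule['host'].split('.')
    # Same number of octets, and each pattern octet is either a wildcard or an exact match
    return len(ip_parts) == len(rule_parts) and all(
        r == '*' or r == p for r, p in zip(rule_parts, ip_parts)
    )
```

Unlike the permutation approach, this is constant work per address, and it happily matches addresses with repeated octet values like 10.1.7.7, which itertools.permutations could never generate.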

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cory DoctorowPodcast: Occupy Gotham

In my latest podcast (MP3), I read my essay Occupy Gotham, published in Detective Comics: 80 Years of Batman, commemorating the 1000th issue of Batman comics. It’s an essay about the serious hard problem of trusting billionaires to solve your problems, given the likelihood that billionaires are the cause of your problems.

A thousand issues have gone by, nearly 80 years have passed, and Batman still hasn’t cleaned up Gotham. If the formal definition of insanity is trying the same thing and expecting a different outcome, then Bruce Wayne belongs in a group therapy session in Arkham Asylum. Seriously, get that guy some Cognitive Behavioral Therapy before he gets into some *serious* trouble.

As Upton Sinclair wrote in his limited run of *Batman: Class War*[1], “It’s impossible to get a man to understand something when his paycheck depends on his not understanding it.”

Gotham is a city riven by inequality. In 1939, that prospect had a very different valence than it has in 2018. Back in 1939, the wealth of the world’s elites had been seriously eroded, first by the Great War, then by the Great Crash and the interwar Great Depression, and what was left of those vast fortunes was being incinerated on the bonfire of WWII. Billionaire plutocrats were a curious relic of a nostalgic time before the intrinsic instability of extreme wealth inequality plunged the world into conflict.

MP3

,

Krebs on SecurityIs ‘REvil’ the New GandCrab Ransomware?

The cybercriminals behind the GandCrab ransomware-as-a-service (RaaS) offering recently announced they were closing up shop and retiring after having allegedly earned more than $2 billion in extortion payments from victims. But a growing body of evidence suggests the GandCrab team have instead quietly regrouped behind a more exclusive and advanced ransomware program known variously as “REvil,” “Sodin,” and “Sodinokibi.”

“We are getting a well-deserved retirement,” the GandCrab administrator(s) wrote in their farewell message on May 31. “We are a living proof that you can do evil and get off scot-free.”

However, it now appears the GandCrab team had already begun preparations to re-brand under a far more private ransomware-as-a-service offering months before their official “retirement.”

In late April, researchers at Cisco Talos spotted a new ransomware strain dubbed Sodinokibi that was used to deploy GandCrab, which encrypts files on infected systems unless and until the victim pays the demanded sum. A month later, GandCrab would announce its closure.

A payment page for a victim of REvil, a.k.a. Sodin and Sodinokibi.

Meanwhile, in the first half of May an individual using the nickname “Unknown” began making deposits totaling more than $130,000 worth of virtual currencies on two top cybercrime forums. The down payments were intended to show the actor meant business in his offer to hire just a handful of affiliates to drive a new, as-yet unnamed ransomware-as-a-service offering.

“We are not going to hire as many people as possible,” Unknown told forum members in announcing the new RaaS program. “Five affiliates more can join the program and then we’ll go under the radar. Each affiliate is guaranteed USD 10,000. Your cut is 60 percent at the beginning and 70 percent after the first three payments are made. Five affiliates are guaranteed [USD] 50,000 in total. We have been working for several years, specifically five years in this field. We are interested in professionals.”

Asked by forum members to name the ransomware service, Unknown said it had been mentioned in media reports but that he wouldn’t be disclosing technical details of the program or its name for the time being.

Unknown said it was forbidden to install the new ransomware strain on any computers in the Commonwealth of Independent States (CIS), which includes Armenia, Belarus, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan.

The prohibition against spreading malware in CIS countries has long been a staple of various pay-per-install affiliate programs that are operated by crooks residing in those nations. The idea here is not to attract attention from local law enforcement responding to victim complaints (and/or perhaps to stay off the radar of tax authorities and extortionists in their hometowns).

But Kaspersky Lab discovered that Sodinokibi/REvil also includes one other nation on its list of countries that affiliates should avoid infecting: Syria. Interestingly, later versions of GandCrab took the same unusual step.

What’s the significance of the Syria connection? In October 2018, a Syrian man tweeted that he had lost access to all pictures of his deceased children after his computer got infected with GandCrab.

“They want 600 dollars to give me back my children, that’s what they’ve done, they’ve taken my boys away from me for a some filthy money,” the victim wrote. “How can I pay them 600 dollars if I barely have enough money to put food on the table for me and my wife?”

That heartfelt appeal apparently struck a chord with the developer(s) of GandCrab, who soon after released a decryption key that let all GandCrab victims in Syria unlock their files for free.

But this rare display of mercy probably cost the GandCrab administrators and its affiliates a pretty penny. That’s because a week after GandCrab released decryption keys for all victims in Syria, the No More Ransom project released a free GandCrab decryption tool developed by Romanian police in collaboration with law enforcement offices from a number of countries and security firm Bitdefender.

The GandCrab operators later told affiliates that the release of the decryption keys for Syrian victims allowed the entropy used by the random number generator for the ransomware’s master key to be calculated. Approximately 24 hours after No More Ransom released its free tool, the GandCrab team shipped an update that rendered the tool unable to decrypt files.

There are also similarities between the ways that both GandCrab and REvil generate URLs that are used as part of the infection process, according to a recent report from Dutch security firm Tesorion.

“Even though the code bases differ significantly, the lists of strings that are used to generate the URLs are very similar (although not identical), and there are some striking similarities in how this specific part of the code works, e.g., in the somewhat far-fetched way that the random length of the filename is repeatedly recalculated,” Tesorion observed.

My guess is the GandCrab team has not retired, and has simply regrouped and re-branded due to the significant amount of attention from security researchers and law enforcement investigators. It seems highly unlikely that such a successful group of cybercriminals would just walk away from such an insanely profitable enterprise.

CryptogramPalantir's Surveillance Service for Law Enforcement

Motherboard got its hands on Palantir's Gotham user's manual, which is used by the police to get information on people:

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that's associated with a license plate, they can use automatic license plate reader data to find out where they've been, and when they've been there. This can give a complete account of where someone has driven over any time period.

  • With a name, police can also find a person's email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it's in the agency's database.

  • The software can map out a suspect's family members and business associates, and theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge over any suspect they decide to surveil.

Read the whole article -- it has a lot of details. This seems like a commercial version of the NSA's XKEYSCORE.

Boing Boing post.

Meanwhile:

The FBI wants to gather more information from social media. Today, it issued a call for contracts for a new social media monitoring tool. According to a request-for-proposals (RFP), it's looking for an "early alerting tool" that would help it monitor terrorist groups, domestic threats, criminal activity and the like.

The tool would provide the FBI with access to the full social media profiles of persons-of-interest. That could include information like user IDs, emails, IP addresses and telephone numbers. The tool would also allow the FBI to track people based on location, enable persistent keyword monitoring and provide access to personal social media history. According to the RFP, "The mission-critical exploitation of social media will enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests."

Google AdsenseUpcoming changes to the AdSense mobile experience

The web is mobile.

Nearly 70% of AdSense audiences experience the web on mobile devices. With new mobile web technologies such as responsive mobile sites, Accelerated Mobile Pages (AMP) and Progressive Web Apps (PWA) the mobile web works better and faster than ever.

We understand that using AdSense on the go is important to you. More than a third of our users access AdSense from mobile devices and this is an area where we continue to invest.

Our vision is an AdSense that does more to keep your account healthy, letting you focus on creating great content, and comes to you when issues or opportunities need your attention.

With this in mind, we have reviewed our mobile strategy. As a result, we will be focusing our investment on the AdSense mobile web interface and sunsetting the current iOS and Android apps. By investing in a common web application that supports all platforms, we will be able to deliver AdSense features optimized for mobile much faster than we can today.

Later this year we will announce improvements to the AdSense mobile web interface. The AdSense Android and iOS apps will be deprecated in the coming months, and will be discontinued and removed from the app stores by the end of 2019.

Like our publishers who have built their businesses around the mobile web, we look forward to leveraging great new web technologies to deliver an even better, more automated, and more useful mobile experience. Stay tuned for further announcements throughout the rest of the year.


Posted by: Andrew Gildfind
AdSense Product Manager

Worse Than FailureThe Enterprise Backup Batch

If a piece of software is described in any way, shape or form as being "enterprise", it's a safe bet that you don't actually want to use it. As a general rule, "enterprise" software packages mix the Inner-Platform Effect with trying to be all things to all customers, with thousands upon thousands of lines of legacy code that can't be touched because at least one customer depends on those quirks. There doesn't tend to be much competition in the "enterprise" space, so none of the vendors actually put any thought into making their products good. That's what salesbeasts and lawyers are for.

Kristoph M supports a deployment of Initech's data warehouse system. Since this system is a mix of stored procedures and SSIS packages, Kristoph can actually read a good portion of the code which makes the product work. They just choose not to. And that's usually a good choice.

But one day, while debugging, Kristoph decided that they needed a simple answer to a simple question: "For a SQLAgent Job, how do you create a backup of the database with the day appended to the filename?"

SQLAgent is SQL Server's scheduling system, used for triggering tasks. SSIS is SQL Server's "drag and drop" dataflow tool, designed to let users draw data pipelines to handle extract-transform-load tasks.

In this case, the SQLAgent job's first step was to launch an SSIS package. Already, we're in questionable territory. SSIS is, as stated, an ETL tool. Yes, you can use it to extract data, but it's not really meant as a replacement for an actual database backup.

The good news is that this SSIS package doesn't actually do anything to backup the database. Instead, it contains a single task, and it isn't a data flow task, it's a "Visual Basic Script Task". Yes, SSIS lets you run a stripped down Visual Basic dialect in its context. What does this task do?

Public Sub Main()
    '
    ' Add your code here
    '
    Dim sToday As Date = Now
    Dim sDay As String = sToday.Day.ToString
    If CInt(sDay) < 10 Then sDay = "0" & sDay
    Dim sMonth As String = MonthName(Month(sToday), True)
    Dim sYear As String = Year(sToday).ToString
    Dim sPara1 As String = sDay '& sMonth & sYear
    Dim sPath As String = "D:\Initech\DailyProcess\"
    Using fso As StreamWriter = New StreamWriter(sPath & "runBackupBatch.bat")
        fso.WriteLine(sPath & "DailyExtractBackup.bat " & sPara1)
        fso.Close()
    End Using
    Dts.TaskResult = ScriptResults.Success
End Sub

This figures out the current day, and then writes out a runBackupBatch.bat file with contents like this:

D:\Initech\DailyProcess\DailyExtractBackup.bat 02
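For comparison, the zero-padded day argument that the VB task assembles by hand is a one-liner in most languages. A sketch in Python (not part of the original system, obviously):

```python
from datetime import date

def backup_arg(d: date) -> str:
    # Zero-padded day of month, e.g. "02" -- what the VB task builds
    # with ToString, CInt, and manual "0" concatenation
    return f"{d.day:02d}"

# backup_arg(date(2019, 7, 2)) -> "02"
```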

Once that step is completed, the SQLAgent job continues, and runs the runBackupBatch.bat, which in turn runs DailyExtractBackup.bat, which does this:

D:\Initech\DailyProcess\DailyExtractBackup.bat:

@echo off
@echo Dumping DailyExtract database...
osql -E -Slocalhost -oD:\Initech\DailyProcess\DailyExtractDump.log -Q"backup database DailyExtract to DISK='D:\Initech\MSSQL\Backup\DailyExtractDump%1.bak' with INIT"
if errorlevel 1 goto dumperror
REM Check for SQL Errors
findstr "Msg" D:\Initech\DailyProcess\DailyExtractDump.log
if not errorlevel 1 goto dumperror
:OK
@echo All Done!!
exit 0
:dumperror
@echo Error dumping database.
exit 1

The osql call is about the first reasonable step in this process. That actually does the backup using SQL server's backup tools. Then again, the mechanism to see if there were any errors in the logfile is troubling. findstr sets the errorlevel to 1 if Msg is not found in the log file. So, if Msg is not not found in the logfile, we'll go to dumperror.

After reading through this process, Kristoph decide it was best to take a step outside, get some air, and stop thinking about the other horrible things that might be lurking in Initech's data warehouse product.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianRuss Allbery: DocKnot 3.01

The last release of DocKnot failed a whole bunch of CPAN tests that didn't fail locally or on Travis-CI, so this release cleans that up and adds a few minor things to the dist command (following my conventions to run cppcheck and Valgrind tests). The test failures are moderately interesting corners of Perl module development that I hadn't thought about, so seem worth blogging about.

First, the more prosaic one: as part of the tests of docknot dist, the test suite creates a new Git repository because the release process involves git archive and needs a repository to work from. I forgot to use git config to set user.email and user.name, so that broke on systems without Git global configuration. (This would have been caught by the Debian package testing, but sadly I forgot to add git to the build dependencies, so that test was being skipped.) I always get bitten by this each time I write a test suite that uses Git; someday I'll remember the first time.

Second, the build system runs perl Build.PL to build a tiny test package using Module::Build, and it was using system Perl. Slaven Rezic pointed out that this fails if Module::Build isn't installed system-wide or if system Perl doesn't work for whatever reason. Using system Perl is correct for normal operation of docknot dist, but the test suite should use the same Perl version used to run the test suite. I added a new module constructor argument for this, and the test suite now passes in $^X for that argument.

Finally, there was a more obscure problem on Windows: the contents of generated and expected test files didn't match because the generated file content was supposedly just the file name. I think I fixed this, although I don't have Windows on which to test. The root of the problem is another mistake I've made before with Perl: File::Temp->new() does not return a file name, but it returns an object that magically stringifies to the file name, so you can use it that way in many situations and it appears to magically work. However, on Windows, it was not working the way that it was on my Debian system. The solution was to explicitly call the filename method to get the actual file name and use it consistently everywhere; hopefully tests will now pass on Windows.

You can get the latest version from CPAN or from the DocKnot distribution page. A Debian package is also available from my personal archive. I'll probably upload DocKnot to Debian proper during this release cycle, since it's gotten somewhat more mature, although I'd like to make some backward-incompatible changes and improve the documentation first.

,

Planet DebianFrançois Marier: Installing Debian buster on a GnuBee PC 2

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.0
reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open the web-based admin panel at http://192.168.10.0.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the IPv4 address of the WAN port, since it will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   8390655   8388608     4G Linux swap
/dev/sda2  8390656 234441614 226050959 107.8G Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, login using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx  # type password set above
mkdir .ssh
vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor  # choose vim.basic
dpkg-reconfigure locales  # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Next steps

At this point, my GnuBee is running the latest version of Debian stable; however, there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel, though this is being worked on by community members.

I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.

Planet DebianBenjamin Mako Hill: Hairdressers with Supposedly Funny Pun Names I’ve Visited Recently

Mika and I recently spent two weeks biking home to Seattle from our year in Palo Alto. The route was ~1400 kilometers and took us past 10 volcanoes and 4 hot springs.

Route of our bike trip from Davis, CA to Oregon City, OR. An elevation profile is also shown.

To my delight, the route also took us past at least 8 hairdressers with supposedly funny pun names! Plus two in Oakland on our way out.

As a result of this trip, I’ve now made 24 contributions to the Hairdressers with Supposedly Funny Pun Names Flickr group photo pool.

Planet DebianDaniel Silverstone: A quarter in review - Halfway to 2020

The 2019 plan - Second-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the second of those. The first can be found here.

1. Weight loss

So when I wrote in April, I was around 88.6kg and worried about how my body seemed to really like roughly 90kg. This is going to be a similar report. Despite managing to lose 10kg in the first quarter, the second quarter has been harder, and with me focussed on running rather than my full gym routine, loss has been less. I've recently started to push a bit lower though and I'm now around 83kg.

I could really shift my focus back to all-round gym exercise, but honestly I've been enjoying a lot of the spare time returned to me by switching back to my cycling and walking, plus now running a bit. I imagine as the weather returns to its more usual wet mess the gym will return to prominence for me, and with that maybe I'll shed a bit of more of this weight.

I continue to give myself a solid "B" for this, though if I were generous, given everything else, I might consider a "B+".

2. Couch to 5k

Last time I wrote, I'd just managed a 5k run for the first time. Since then I completed the couch-to-5k programme and have now done eight parkruns. I missed one week due to awful awful weather, but otherwise I've managed to be consistent and attended one parkrun per week. They've all been at the same course apart from one which was in Southampton. This gives me a clean ability to compare runs.

My first parkrun was 30m32s, though I remain aware that the course at Platt Fields is a smidge under 5k really, and I was really pleased with that. However, as a colleague explained to me, “It never gets easier… each parkrun is just as hard, if not harder, than the previous one. You just get faster.” And I have. Since that first run, I have improved my personal record to 27m34s, which is, to my mind at least, bloody brilliant. Even when this week I tried to force myself to go slower, aiming to pace out a 30m run, I ended up at 27m49s.

I am currently trying to convince myself that I can run a bit more slowly and thus increase my distance, but for now I think 5k is a stuck record for me. I'll continue to try and improve that time a little more.

I said last review that I'd be adjusting my goals in the light of how well I'd done with couch-to-5k at that point. Since I've now completed it, I'll be renaming this section the 'Fitness' section and hopefully next review I'll be able to report something other than running in it.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

I did a bunch more on NetSurf this quarter. We had an amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs. I'm very pleased with how I've done with that.

Rob and I failed to do much with the pub software, but Lars and I continue to work on the Fable project.

So over-all, this one doesn't get better than the "C" from last time - still satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before.

I remain doing "okay" here, and I want to be a little more positive than last review, so I'm upgrading to a "B".

5. Give back to the Rust community

My work with Rustup continues, though in the past month or so I've been pretty lax because I've had to travel a lot for work. I continue to be as heavily involved in Rust as I can be -- I've stepped up to the plate to lead the Rustup team, and that puts me into the Rust developer tools team proper. I attended a conference, in part to represent the Rust developer community, and I have some followup work on that which I still need to complete.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

Previously I gave myself an 'A' but thought I could manage an 'A+' if I tried harder. Since I've been a little lax recently I'm dropping myself to an 'A-'.

6. Be better at tidying up

Once again, I came out of the previous review fired up to tidy more. Once again, that energy ebbed after about a week. Every time I feel like I might have the mental space to begin building a cleaning habit, something comes along to knock the wind out of my sails. Sometimes that's a big work related thing, but sometimes it's something as small as "Our internet connection is broken, so suddenly instead of having time to clean, I have no time because it's broken and so I can't do anything, even things which don't need an internet connection."

This remains an "F" for fail, sadly.

7. Save up money for renovations

The savings process continues. I've not managed to put quite as much away in this quarter as I did the quarter before, but I have been doing as much as I can. I've finally consolidated most of my savings into one place which also makes them look a little healthier.

The renovations bills continue to loom, but we're doing well, so I think I get to keep the "A" here.

8. Go on a proper holiday

Well, I had that week "off" but ended up doing so much stuff that it doesn't count as much of a holiday. Rob is now in Japan, but I've not managed to take the time as a holiday because my main project at work needs me there since our project manager and his usual stand-in are both also away in Japan.

We have made a basic plan to take some time around the August Bank Holiday to perhaps visit family etc, so I'm going to upgrade us to "C+" since we're making inroads, even if we've not achieved a holiday yet.

Summary

Last quarter, my scores were B, A+, C, B-, A, F, A, C, which, if we ignore the F, is an average of A, though the F did ruin things a little.

This quarter I have a B+, A+, C, B, A-, F, A, C+, which ignoring the F is a little better, though still not great. I guess here's to another quarter.

Planet DebianBen Hutchings: Talk: What goes into a Debian package?

Some months ago I gave a talk / live demo at work about how Debian source and binary packages are constructed.

Yesterday I repeated this talk (with minor updates) for the Chicago LUG. I had quite a small audience, but got some really good questions at the end. I have now put the notes up on my talks page.

No, I'm not in Chicago. This was a trial run of giving a talk remotely, which I'll also be doing for DebConf this year. I set up an RTMP server in the cloud (nginx) and ran OBS Studio on my laptop to capture and transmit video and audio. I'm generally very impressed with OBS Studio, although the X window capture source could do with improvement. I used the built-in camera and mic, but the mic picked up a fair amount of background noise (including fan noise, since the video encoding keeps the CPU fairly busy). I should probably switch to a wearable mic in future.
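
For anyone wanting to reproduce the streaming setup, the nginx side needs the nginx-rtmp-module; a minimal sketch of the relevant configuration (the application name `live` is an arbitrary choice, not taken from the talk):

```nginx
# Minimal nginx-rtmp-module configuration sketch (goes in nginx.conf,
# alongside -- not inside -- the http {} block).
rtmp {
    server {
        listen 1935;          # default RTMP port
        application live {
            live on;          # accept incoming live streams
            record off;       # don't write recordings to disk
        }
    }
}
```

OBS Studio would then be pointed at rtmp://your-server/live with a stream key of your choosing.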

Planet DebianMartin Pitt: Lightweight i3 developer desktop with OSTree and chroots

Introduction

I’ve always liked a clean, slim, lightweight, and robust OS on my laptop (which is my only PC) – I’ve been running the i3 window manager for years, with some custom configuration to enable the Fn keys and set up my preferred desktop session layout. Initially on Ubuntu, and for the last two and a half years under Fedora (since I moved to Red Hat). I started with a minimal server install and then had a post-install script that installed the packages I need, restored my /etc files from git, and handled some other minor bits.


CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I'm speaking at Black Hat USA 2019 in Las Vegas on Wednesday, August 7 and Thursday, August 8, 2019.

  • I'm speaking on "Information Security in the Public Interest" at DefCon 27 in Las Vegas on Saturday, August 10, 2019.

The list is maintained on this page.

Cory DoctorowI appeared on Nanowrimo’s awesome Write-Minded podcast to talk about Radicalized

It turned out really well!

Today’s dystopian fiction seems to be closer to reality than the dystopian fiction of the past. Brooke and Grant explore this new reality with Cory Doctorow, whose socially conscientious science fiction novels delve into topics of political consequence. From the ways in which anxieties fuel science fiction writers to how fiction has the power to change the way we think and operate in the world, today’s episode emphasizes the importance of dystopian fiction for its capacity to shed light on what is true, and what might happen, ideally, as Cory suggests, so that we might fix things before it’s too late.


CryptogramFriday Squid Blogging: When the Octopus and Squid Lost Their Shells

Cephalopod ancestors once had shells. When did they lose them?

With the molecular clock technique, which allowed him to use DNA to map out the evolutionary history of the cephalopods, he found that today's cuttlefish, squids and octopuses began to appear 160 to 100 million years ago, during the so-called Mesozoic Marine Revolution.

During the revolution, underwater life underwent a rapid change, including a burst in fish diversity. Some predators became better suited for crushing shellfish, while some smaller fish became faster and more agile.

"There's a continual arms race between the prey and the predators," said Mr. Tanner. "The shells are getting smaller, and the squids are getting faster."

The evolutionary pressures favored being nimble over being armored, and cephalopods started to lose their shells, according to Mr. Tanner. The adaptation allowed them to outcompete their shelled relatives for fast food, and they were able to better evade predators. They were also able to keep up with competitors seeking the same prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJonathan Carter: My Debian 10 (buster) Report

In the early hours of Sunday morning (my time), Debian 10 (buster) was released. It’s amazing to be a part of an organisation where so many people work so hard to pull together and make something like this happen. Creating and supporting a stable release can be tedious work, but it’s essential for any kind of large-scale or long-term deployments. I feel honored to have had a small part in this release.

Debian Live

My primary focus area for this release was to get the Debian live images into good shape. It’s not perfect yet, but I think we made some headway. The out-of-box experiences for the desktop environments on live images are better, and we added a new graphical installer that makes Debian easier to install for the average laptop/desktop user. For the bullseye release I intend to ramp up quality efforts and have a bunch of ideas to make that happen, but more on that another time.

Calamares installer on Cinnamon live image.

Other new stuff I’ve been working on in the Buster cycle

Gamemode

Gamemode is a library and tool that changes your computer’s settings for maximum performance when you launch a game. Some new games automatically invoke Gamemode when they’re launched, but for most games you have to do it manually; check their GitHub page for documentation.
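
For the manual case, the gamemode package ships a gamemoderun wrapper; a sketch of the two common ways to use it (the game binary name here is illustrative):

```
# From a terminal:
gamemoderun ./mygame

# For a Steam title, set the game's Launch Options to:
gamemoderun %command%
```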

Innocent de Marchi Packages

I was sad to learn about the passing of Innocent de Marchi, a math teacher who was also a Debian contributor for whom I’ve sponsored a few packages before. I didn’t know him personally, but learned that he was really loved in his community. I’m continuing to maintain some of his packages that I also had an interest in:

  • calcoo – generic lightweight graphical calculator app that can be useful on desktop environments that don’t have one
  • connectagram – a word unscrambling game that gets its words from wiktionary
  • fracplanet – fractal planet generator
  • fractalnow – fast, advanced fractal generator
  • gnubik – 3D Rubik’s cube game
  • tanglet – single player word finding game based on Boggle
  • tetzle – jigsaw puzzle game (was also Debian package of the Day #44)
  • xabacus – simulation of the ancient calculator

Powerline Goodies

I wrote a blog post on vim-airline and powerlevel9k shortly after packaging those: New powerline goodies in Debian.

Debian Desktop

I helped co-ordinate the artwork for the Buster release, although Laura Arjona did most of the heavy lifting on that. I updated some of the artwork in the desktop-base package and in debian-installer. Working on the artwork packages exposed me to some of their bugs, but not in time to fix them for buster, so that will be a goal for bullseye. I also packaged the font that’s widely used in the buster artwork, called Quicksand (Debian package: fonts-quicksand). This allows SVG versions of the artwork in the system to display with the correct font.

Bundlewrap

Bundlewrap is a configuration management system written in Python. If you’re familiar with bcfg2 and Ansible, the concepts in Bundlewrap will look very familiar to you. It’s not as featureful as either of those systems, but what it lacks in advanced features it more than makes up for in ease of use and how easy it is to learn. It’s immediately useful for the large amount of cases where you want to install some packages and manage some config files based on conditions with templates. For anything else you might need you can write small Python modules.
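
To give a flavour of how compact this is, here is a sketch of what a Bundlewrap bundle's items.py might look like (the bundle name, package, file, and service entries are illustrative, not from any real repo):

```python
# bundles/webserver/items.py -- a Bundlewrap bundle is plain Python
# that declares dictionaries of items for bw to apply.

pkg_apt = {
    "nginx": {},  # install nginx via apt
}

files = {
    "/etc/motd": {
        "content": "managed by bundlewrap\n",  # inline content; templates work too
    },
}

svc_systemd = {
    "nginx": {
        "running": True,  # keep the service up
    },
}
```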

Catimg

catimg is a tool that converts jpeg, png, ico and gif files to terminal output. This was also Debian Package of the day #26.

Gnome Shell Extensions

  • gnome-shell-extension-dash-to-panel: dash-to-panel is an essential shell extension for me, and does more to make Gnome 3 feel like Gnome 2.x for me than the classic mode does. It’s the easiest way to get a nice single panel on the top of the screen that contains everything that’s useful.
  • gnome-shell-extension-hide-veth: If you use LXC or Docker (or similar), you’ll probably be somewhat annoyed at all the ‘veth’ interfaces you see in network manager. This extension will hide those from the GUI.
  • gnome-shell-extension-no-annoyance: No annoyance fixes something that should really be configurable in Gnome by default. It removes all those nasty “Window is ready” notifications that are intrusive and distracting.

Other

That’s a wrap for my new Debian packages I maintain in Buster. There’s a lot more that I’d like to talk about that happened during this cycle, like that crazy month when I ran for DPL! And also about DebConf stuff, but I’m all out of time and on that note, I’m heading to DebCamp/DebConf in around 12 hours and look forward to seeing many of my Debian colleagues there :-)

Sociological ImagesSoccer Stars & Soc Majors

Sociology Twitter lit up after the US Women’s National Team’s World Cup win with the revelation that many of their players were sociology majors in college. It is an inspiration to see the team succeed at the highest levels and call for social change while doing so.

This news also raised an interesting question: do student athletes major in sociology because it is a compelling field (yay, us!) or because they are tracked into the major by academic advisors who see it as an “easy” choice to balance with sports?

According to data from the NCAA, the most common majors for both student athletes and the wider student body at Division 1 schools are business, STEM, and social sciences. Trend data show the biggest difference is in the choice between business and STEM; both groups seem to pick up social science majors at similar rates.

Source: NCAA D1 Diploma Dashboard

While the rate of majors is not that different, there is something special that sociology can do for these students. Student athletes’ lives are heavily administered. Between practice, conditioning, scheduled events, meals, and classes, many barely have a few hours to complete a full load of course work. In grad school, I tutored many student athletes who were sociology majors, and I watched them juggle their work with the demands of heavy travel schedules and intense workouts, all under the watchful eye of an army of advisors, coaches, mentors, and doctors. The experience is very close to what Erving Goffman called a “total institution” in Asylums:

“A total institution may be defined as a place of residence and work where a large number of like-situated individuals, cut off from the wider society for an appreciable period of time, together lead an enclosed, formally administered round of life” (1961, p. xiii).

We usually associate total institutions with prisons and punishment, but this definition highlights the intense management that defines the college experience for many student athletes. When I tutored athletes in sociology, we spent a lot of time comparing their readings to the world around them. Sociological thinking about institutions, bureaucracy, and work gave them a language to think about and talk about their experiences in context.

Athletic programs can be complicated for colleges and universities, and there is ongoing debate about how the “student” status in student athlete shapes their obligation to pay for all this work. As debates about college athletics continue, it is important for players, fans, and administrators to think sociologically about their industry to see how it can better serve players as both students and athletes.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston (Fall, 2019). You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowWhere to catch me at San Diego Comic-Con!

I’m headed back to San Diego for Comic-Con next weekend, and you can catch me on Friday, Saturday and Sunday:

Friday, 5PM: Signing in AA04

Saturday, 5PM: Panel: Writing: Craft, Community, and Crossover (with James Killen, Seanan McGuire, Charlie Jane Anders, Annalee Newitz, and Sarah Gailey), Room 23ABC

Sunday, 10AM: Signing and giveaway for Radicalized, Tor Booth, #2701.

I hope to see you there!

Planet DebianJonathan McDowell: Burn it all

I am generally positive about my return to Northern Ireland, and decision to stay here. Things are much better than when I was growing up and there’s a lot more going on here these days. There’s an active tech scene and the quality of life is pretty decent. That said, this time of year is one that always dampens my optimism. TLDR: This post brings no joy. This is the darkest timeline.

First, we have the usual bonfire issues. I’m all for setting things on fire while having a drink, but when your bonfire is so big it leads to nearby flat residents being evacuated to a youth hostel for the night or you decide that adding 1800 tyres to your bonfire is a great idea, it’s time to question whether you’re celebrating your cultural identity while respecting those around you, or are just a clampit (thanks, @Bolster). If you’re starting to displace people from their homes, or releasing lots of noxious fumes that are a risk to your health and that of your local community, you need to take a hard look at the message you’re sending out.

Secondly, we have the House of Commons vote on Tuesday to amend the Northern Ireland (Executive Formation) Bill to require the government to bring forward legislation to legalise same-sex marriage and abortion in Northern Ireland. On the face of it this is a good thing; both are things the majority of the NI population want legalised and it’s an area of division between us and the rest of the UK (and, for that matter, Ireland). Dig deeper and it doesn’t tell a great story about the Northern Ireland Assembly. The bill is being brought in the first place because (at the time of writing) it’s been 907 days since Northern Ireland had a government. The current deadline for forming an executive is August 25th, or another election must be held. The bill extends this to October 21st, with an option to extend it further to January 13th. That’ll be 3 years since the assembly sat. That’s not what I voted for; I want my elected officials to actually do their jobs - I may not agree with all of their views, but it serves NI much more to have them turning up and making things happen than failing to do so. Especially during this time of uncertainty about borders and financial stability.

It’s also important to note that the amendments only kick in if an executive is not formed by October 21st - if there’s a functioning local government it’s expected to step in and enact the appropriate legislation to bring NI into compliance with its human rights obligations, as determined by the Supreme Court. It’s possible that this will provide some impetus to the DUP to re-form the assembly in NI. Equally it’s possible that it will make it less likely that Sinn Fein will rush to re-form it, as both amendments cover issues they have tried to resolve in the past.

Equally while I’m grateful to Stella Creasy and Conor McGinn for proposing these amendments, it’s a rare example of Westminster appearing to care about Northern Ireland at all. The ‘backstop’ has been bandied about as a political football, with more regard paid to how many points Tory leadership contenders can score off each other than what the real impact will be upon the people in Northern Ireland. It’s the most attention anyone has paid to us since the Good Friday Agreement, but it’s not exactly the right sort of attention.

I don’t know what the answer is. Since the GFA, politics in Northern Ireland has mostly just got more polarised, rather than us finding common ground. The most recent EU elections returned an Alliance MEP, Naomi Long, for the first time, which is perhaps some sign of a move to non-sectarian politics, but the real test would be what a new Assembly election would look like. I don’t hold out any hope that we’d get a different set of parties in power.

Still, I suppose at least it’s a public holiday today. Here’s hoping the pub is open for lunch.

CryptogramPresidential Candidate Andrew Yang Has Quantum Encryption Policy

At least one presidential candidate has a policy about quantum computing and encryption.

It has two basic planks. One: fund quantum-resistant encryption standards. (Note: NIST is already doing this.) Two: fund quantum computing. (Unlike many far more pressing computer security problems, the market seems to be doing this on its own quite nicely.)

Okay, so not the greatest policy -- but at least one candidate has a policy. Do any of the other candidates have anything else in this area?

Yang has also talked about blockchain:

"I believe that blockchain needs to be a big part of our future," Yang told a crowded room at the Consensus conference in New York, where he gave a keynote address Wednesday. "If I'm in the White House, oh boy are we going to have some fun in terms of the crypto currency community."

Okay, so that's not so great, either. But again, I don't think anyone else talks about this.

Note: this is not an invitation to talk more general politics. Not even an invitation to explain how good or bad Andrew Yang's chances are. Or anyone else's. Please.

LongNowBrian Eno’s Soundtrack for the Apollo 11 Moon Landing

50 years ago, the Apollo 11 moon landing was televised live to some 600 million viewers back on planet Earth. One of them was future Long Now co-founder Brian Eno, then 21. He found himself underwhelmed by what he saw. 

Footage from the television transmission of the moon landing.

Surely, there was more gravitas to the experience than the grainy, black and white footage suggested. In the months that followed, the same few seconds of Neil Armstrong’s small steps played on an endless loop on TV as anchors and journalists offered their analysis and patriotic platitudes as a soundtrack. The experts, he later wrote, “[obscured] the grandeur and strangeness of the event with a patina of down-to-earth chatter.”

In 01983, Eno decided to add his own soundtrack to the momentous event. His ninth solo album, Apollo: Atmospheres and Soundtracks was produced to accompany a documentary, Apollo, that consisted solely of 35mm footage from the Apollo 11 mission, with no narration. The first iteration of the film was too experimental for most audiences; it was recut with commentary from Apollo astronauts when it was eventually re-released as For All Mankind in 01989. 

The remastered and extended edition of Brian Eno’s Apollo album will be released on July 19.

This year, on the occasion of the moon landing’s 50th anniversary, Eno has revisited the Apollo project. He reunited with original producers Daniel Lanois and Roger Eno to remaster the album and record 11 new instrumental compositions. The album, Apollo: Extended Edition, will be released on July 19. A new music video for the album’s most well-known track, “An Ending (Ascent)” has also been released with visuals from a 02016 Earth overview.

A new music video for Brian Eno’s “An Ending (Ascent).”

To celebrate the album’s release and the moon landing anniversary, Long Now will be hosting a Brian Eno album listening event at The Interval on the evenings of July 23, 24, 30, and 31. 

The album will be played on our Meyer Sound System, accompanied by footage of the Apollo missions as well as a special mini menu of cocktails inspired by the album. Tickets are $20 and are expected to go quickly. 

The Apollo missions have always been a point of inspiration for Long Now over the years, both for the Big Here perspective they provided as well as for the long-term thinking they utilized. Below are links to some of our Apollo-related blog posts and articles:

Worse Than FailureError'd: Errors Don't Always Ad up

"You know, I'm thinking that the guys working on AT&T's DIRECTV service must have not done well with fractions in school," Andrew T. writes.

 

"Come on, DNS Exit, you shouldn't objectify your users!" writes Lance G.

 

Tom G. wrote, "I guess if I wanted my actual name to appear here I should have signed in with my Microsoft account and not my 20 year old Skype account."

 

Michael P. wrote, "Tripp Lite's site is pretty smart to detect a mismatched password before I've even registered on their site!"

 

"I was using my Ubuntu laptop, when my app crashed," writes Will B., "I didn't realize that my OS needed to crash my app whenever it was ready."

 

"Well, it looks like I'm about to enter the Matrix. So, do I click the gray pill or the white pill?" writes Jim S.

 



Krebs on SecurityFEC: Campaigns Can Use Discounted Cybersecurity Services

The U.S. Federal Election Commission (FEC) said today political campaigns can accept discounted cybersecurity services from companies without running afoul of existing campaign finance laws, provided those companies already do the same for other non-political entities. The decision comes amid much jostling on Capitol Hill over election security at the state level, and fresh warnings from U.S. intelligence agencies about impending cyber attacks targeting candidates in the lead up to the 2020 election.

Current campaign finance law prohibits corporate contributions to campaigns, and election experts have worried this could give some candidates pause about whether they can legally accept low- to no-cost services from cybersecurity companies.

But at an FEC meeting today, the commission issued an advisory opinion (PDF) that such assistance does not constitute an in-kind contribution, as long as the cybersecurity firm already offers discounted solutions to similarly situated non-political organizations, such as small nonprofits.

The FEC’s ruling comes in response to a petition by California-based Area 1 Security, whose core offering focuses on helping clients detect and block phishing attacks. The company said it asked the FEC’s opinion on the matter after several campaigns that had reached out about teaming up expressed hesitation given the commission’s existing rules.

In June, Area 1 petitioned the FEC for clarification on the matter, saying it currently offers free and low-cost services to certain clients which are capped at $1,337. The FEC responded with a draft opinion indicating that such an offering likely would amount to an in-kind contribution that might curry favor among politicians, and urged the company to resubmit its request focusing on the capped-price offering.

Area 1 did so, and at today’s hearing the FEC said “because Area 1 is proposing to charge qualified federal candidates and political committees the same as it charges its qualified non-political clients, the Commission concludes that its proposal is consistent with Area 1’s ordinary business practices and therefore would not result in Area 1 making prohibited in-kind contributions to such federal candidates and political committees.”

POLICY BY PIECEMEAL

The decision is the latest in a string of somewhat narrowly tailored advisories from the FEC related to cybersecurity offerings aimed at federal candidates and political committees. Most recently, the commission ruled that the nonprofit organization Defending Digital Campaigns could provide free cybersecurity services to candidates, but according to The New York Times that decision only applied to nonpartisan, nonprofit groups that offer the same services to all campaigns.

Last year, the FEC granted a similar exemption to Microsoft Corp., ruling that the software giant could offer “enhanced online account security services to its election-sensitive customers at no additional cost” because Microsoft would be shoring up defenses for its existing customers and not seeking to win favor among political candidates.

Dan Petalas is a former general counsel at the FEC who represents Area 1 as an attorney at the law firm Garvey Schubert Barer. Petalas praised today’s ruling, but said action by Congress is probably necessary to clarify the matter once and for all.

“Congress could take the uncertainty away by amending the law to say security services provided to campaigns do not constitute an in-kind contribution,” Petalas said. “These candidates are super vulnerable and not well prepared to address cybersecurity threats, and I think that would be a smart thing for Congress to do given the situation we’re in now.”

‘A RECIPE FOR DISASTER’

The FEC’s decision comes as federal authorities are issuing increasingly dire warnings that the Russian phishing attacks, voter database probing, and disinformation campaigns that marked the election cycles in 2016 and 2018 were merely a dry run for what campaigns could expect to face in 2020.

In April, FBI Director Christopher Wray warned that Russian election meddling posed an ongoing “significant counterintelligence threat,” and that the shenanigans from 2016 — including the hacking of the Democratic National Committee and the phishing of Hillary Clinton’s campaign chairman and the subsequent mass leak of internal emails — were just “a dress rehearsal for the big show in 2020.”

Adav Noti, a former FEC general counsel who is now senior director of the nonprofit, nonpartisan Campaign Legal Center, said the commission is “incredibly unsuited to the danger that the system is facing,” and that Congress should be taking a more active role.

“The FEC is an agency that can’t even do the most basic things properly and timely, and to ask them to solve this problem quickly before the next election in an area where they don’t really have any expertise is a recipe for disaster,” Noti said. “Which is why we see these weird advisory opinions from them with no real legal basis or rationale. They’re sort of making it up as they go along.”

In May, Sen. Ron Wyden (D-Ore.) introduced the Federal Campaign Cybersecurity Assistance Act, which would allow national party committees to provide cybersecurity assistance to state parties, individuals running for office and their campaigns.

Sen. Wyden also has joined at least a dozen other senators — including many who are currently running as Democratic candidates in the 2020 presidential race — in introducing the “Protecting American Votes and Elections (PAVE) Act,” which would mandate the use of paper ballots in U.S. elections and ban all internet, Wi-Fi and mobile connections to voting machines in order to limit the potential for cyber interference.

As Politico reports, Wyden’s bill also would give the Department of Homeland Security the power to set minimum cybersecurity standards for U.S. voting machines, authorize a one-time $500 million grant program for states to buy ballot-scanning machines to count paper ballots, and require states to conduct risk-limiting audits of all federal elections in order to detect any cyber hacks.

BIPARTISAN BLUES

Earlier this week, FBI Director Wray and Director of National Intelligence Dan Coats briefed lawmakers in the House and Senate on threats to the 2020 election in classified hearings. But so far, action on any legislative measures to change the status quo has been limited.

Democrats blame Senate Majority Leader Mitch McConnell for blocking any action on the bipartisan bills to address election security. Prior to meeting with intelligence officials, McConnell took to the Senate floor Wednesday to allege Democrats had “already made up their minds before we hear from the experts today that a brand-new, sweeping Washington, D.C. intervention is just what the doctor ordered.”

“Make no mistake,” McConnell said. “Many of the proposals labeled by Democrats to be ‘election security’ measures are indeed election reform measures that are part of the left’s wish list I’ve called the Democrat Politician Protection Act.”

But as Politico reporter Eric Geller tweeted yesterday, if lawmakers are opposed to requiring states to follow the almost universally agreed-upon best practices for election security, they should just say so.

“Experts have been urging Congress to adopt tougher standards for years,” Geller said. “Suggesting that the jury is still out on what those best practices are is factually inaccurate.”

Noti said he had hoped election security would emerge as a rare bipartisan issue in this Congress. After all, no candidate wants to have their campaign hacked or elections tampered with by foreign powers — which could well call into question the results of a race for both sides.

These days he’s not so sanguine.

“This is a matter of national security, which is one of the core functions of the federal government,” Noti said. “Members of Congress are aware of this issue and there is a desire to do something about it. But right now the prospect of Congress doing something — even if most lawmakers would agree with it — is small.”

Planet DebianMarkus Koschany: My Free Software Activities in June 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

First of all I want to thank Debian’s Release Team. Whenever there was something to unblock for Buster, I always got feedback within hours and in almost all cases the package could just migrate to testing. Good communication and clear rules helped a lot to make the whole freeze a great experience.

Debian Games

  • I reviewed and sponsored a couple of packages again this month.
  • Reiner Herrmann provided a complete overhaul of xbill, so that we all can fight those Wingdows Viruses again.
  • He also prepared a new upstream release of Supertuxkart, which is currently sitting in experimental but will hopefully be uploaded to unstable within the next few days.
  • Bernhard Übelacker fixed two annoying bugs in Freeorion (#930417) and Warzone2100 (#930942). Unfortunately it was too late to include the fixes in Debian 10, but I will prepare an update for the next point release.
  • Well, the freeze is over now (hooray) and I intend to upgrade a couple of games in the warm (if you live in the northern hemisphere) month of July again.

Debian Java

  • I prepared another security update for jackson-databind to fix CVE-2019-12814 and CVE-2019-12384 (#930750).
  • I worked on a security update for Tomcat 8 but have not finished it yet.

Debian LTS

This was my fortieth month as a paid contributor and I have been paid to work 17 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 10.06.2019 until 16.06.2019 and from 24.06.2019 until 30.06.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in wordpress, ansible, libqb, radare2, lemonldap-ng, irssi, libapache2-mod-auth-mellon and openjpeg2.
  • DLA-1827-1. Issued a security update for gvfs fixing 1 CVE.
  • DLA-1831-1. Issued a security update for jackson-databind fixing 2 CVE.
  • DLA-1822-1. Issued a security update for php-horde-form fixing 1 CVE.
  • DLA-1839-1. Issued a security update for expat fixing 1 CVE.
  • DLA-1845-1. Issued a security update for dosbox fixing 2 CVE.
  • DLA-1846-1. Issued a security update for unzip fixing 1 CVE.
  • DLA-1851-1. Issued a security update for openjpeg2 fixing 2 CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my thirteenth month and I have been paid to work 22 hours on ELTS (15 hours were allocated + 7 hours from last month).

  • ELA-133-1. Issued a security update for linux fixing 9 CVE.
  • ELA-137-1. Issued a security update for libvirt fixing 1 CVE.
  • ELA-139-1. Issued a security update for bash fixing 1 CVE.
  • ELA-140-1. Issued a security update for glib2.0 fixing 3 CVE.
  • ELA-141-1. Issued a security update for unzip fixing 1 CVE.
  • ELA-142-1. Issued a security update for libxslt fixing 2 CVE.

Thanks for reading and see you next time.

Planet DebianVincent Sanders: We can make it better than it was. Better...stronger...faster.

It is not a novel observation that computers have become so powerful that a reasonably recent system has a relatively long life before obsolescence. This is in stark contrast to the period between the nineties and the teens where it was not uncommon for users with even moderate needs from their computers to upgrade every few years.

This upgrade cycle was mainly driven by huge advances in processing power, memory capacity and ballooning data storage capability. Of course the software engineers used up more and more of the available resources and with each new release ensured users needed to update to have a reasonable experience.

And then sometime in the early teens this cycle slowed almost as quickly as it had begun as systems had become "good enough". I experienced this at a time I was relocating for a new job and had moved most of my computer use to my laptop which was just as powerful as my desktop but was far more flexible.

As a software engineer I used to have a pretty good computer for myself but I was never prepared to spend the money on "top of the range" equipment because it would always be obsolete and generally I had access to much more powerful servers if I needed more resources for a specific task.

To illustrate, the system specification of my desktop PC at the opening of the millennium was:
  • Single core Pentium 3 running at 500MHz
  • Socket 370 motherboard with 100MHz Front Side Bus
  • 128 Megabytes of memory
  • A 25 Gigabyte Deskstar hard drive
  • 150MHz TNT2 graphics card
  • 10 Megabit network card
  • Unbranded 150W PSU
But by 2013 the specification had become:
    2013 PC build still using an awesome beige case from 1999
  • Quad core i5-3330S Processor running at 2700MHz
  • FCLGA1155 motherboard running memory at 1333MHz
  • 8 Gigabytes of memory
  • Terabyte HGST hard drive
  • 1,050MHz Integrated graphics
  • Integrated Intel Gigabit network
  • OCZ 500W 80+ PSU
The performance change between these systems was more than tenfold in fourteen years with an upgrade roughly once every couple of years.

I recently started using that system again in my home office mainly for Computer Aided Design (CAD), Computer Aided Manufacture (CAM) and Electronic Design Automation (EDA). The one addition was to add a widescreen monitor as there was not enough physical space for my usual dual display setup.

To my surprise I increasingly returned to this setup for programming tasks. Firstly, being at my desk acts as an indicator to family members that I am concentrating, whereas the laptop no longer had that effect. Secondly, I really like the ultra-wide display for coding; it has become my preferred display, and I had been saving for a UWQHD monitor.

Alas, last month the system started freezing: sometimes it would be stable for several days and then, without warning, the mouse pointer would stop, my music would cease, and a power cycle was required. I tried several things to rectify the situation: replacing the thermal compound and the CPU cooler, and trying different memory, all to no avail.

As fixing the system cheaply appeared unlikely, I began looking for a replacement and was immediately troubled by the size of the task. Somewhere in the last six years, while I was not paying attention, the world had moved on; after a great deal of research I managed to come to an answer.

AMD have recently staged something of a comeback with their Ryzen processors after almost a decade of very poor offerings when compared to Intel. The value for money when considering the processor and motherboard combination is currently very much weighted towards AMD.

My timing also seems fortuitous as the new Ryzen 2 processors have just been announced which has resulted in the current generation being available at a substantial discount. I was also encouraged to see that the new processors use the same AM4 socket and are supported by the current motherboards allowing for future upgrades if required.

I purchased a complete new system for under five hundred pounds, comprising:
    New PC assembled and wired up
  • Hex core Ryzen 5 2600X Processor 3600MHz
  • MSI B450 TOMAHAWK AMD Socket AM4 Motherboard
  • 32 Gigabytes of PC3200 DDR4 memory
  • Aero Cool Project 7 P650 80+ platinum 650W Modular PSU
  • Integrated RTL Gigabit networking
  • Lite-On iHAS124 DVD Writer Optical Drive
  • Corsair CC-9011077-WW Carbide Series 100R Silent Mid-Tower ATX Computer Case
to which I added some recycled parts:
  • 250 Gigabyte SSD from laptop upgrade
  • GeForce GT 640 from a friend
I installed a fresh copy of Debian and all my CAD/CAM applications and have been using the system for a couple of weeks with no obvious issues.

An example of the performance difference: compiling NetSurf from clean with an empty ccache used to take 36 seconds and now takes 16, which is a nice improvement. However, a clean build with the results cached has gone from 6 seconds to 3, which is far less noticeable, and during development a normal edit, build, debug cycle affecting only a small number of files has gone from 400 milliseconds to 200, which simply feels instant in both cases.
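Worked through as ratios (a quick sketch of the arithmetic, using the timings quoted above), the speedup is roughly 2x in every case, which explains why only the slowest one is noticeable:

```python
# NetSurf build timings quoted above, as (old, new) seconds.
timings = {
    "clean build, empty ccache": (36.0, 16.0),
    "clean build, cached":       (6.0, 3.0),
    "edit-build-debug cycle":    (0.4, 0.2),   # 400 ms -> 200 ms
}

# Speedup ratio for each case: old time divided by new time.
speedups = {name: old / new for name, (old, new) in timings.items()}

for name, ratio in speedups.items():
    print(f"{name}: {ratio:.2f}x faster")
```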

My conclusion is that the new system is completely stable but that I have gained very little in common usage. Objectively the system is over twice as fast as its predecessor but aside from compiling large programs or rendering huge CAD drawings this performance is not utilised. Given this I anticipate this system will remain unchanged until it starts failing and I can only hope that will be at least another six years away.

Planet DebianArturo Borrero González: Netfilter workshop 2019 Malaga summary

Header

This week we had the annual Netfilter Workshop. This time the venue was Malaga (Spain). Our hotel was right in downtown Malaga, and the meeting room was at the University ETSII Malaga. We had plenty of talks, sessions, discussions and debates, and I will try to summarize in this post what it was about.

Florian Westphal, Linux kernel hacker, Netfilter coreteam member and engineer from Red Hat, started with a talk related to some work being done in the core of the Netfilter code in the kernel to convert packet processing to lists. He shared an overview of current problems and challenges. Processing in a list rather than per packet seems to have several benefits: code can be smarter and faster, so this seems like a good improvement. On the other hand, Florian thinks some of the pain of refactoring all the code may not be worth it. Other approaches may be considered to introduce even more fast forwarding paths (apart from the flow table mechanism which is already available).

Florian also followed up with the next topic: testing. We are starting to have a lot of duplicated code for testing. Pablo's suggestion is to introduce some dedicated tools to ease maintenance and testing itself. Special mentions go to nfqueue and tproxy, two mechanisms that require quite a bit of code to be tested well (and can be hard to set up anyway).

Ahmed Abdelsalam, engineer from Cisco, gave a talk on SRv6 network programming. This protocol simplifies some interesting use cases from the network engineering point of view. For example, SRv6 aims to eliminate some tunneling and overlay protocols (VXLAN and friends) and to increase native multi-tenancy support in IPv6 networks. Network service chaining is one of the main use cases, which is really interesting in cloud environments. He mentioned that some Kubernetes CNI mechanisms are going to implement SRv6 soon. The protocol is interesting not only for cloud use cases but also from the general network engineering point of view. By the way, Ahmed shared some really interesting numbers and graphs regarding global IPv6 adoption. He also shared the work that has been done in Linux in general and in nftables in particular to support such setups. I had the opportunity to talk more personally with Ahmed during the workshop to learn more about this mechanism and the different use cases and applications it has.

Fernando, a GSoC student, gave us an overview of the OSF functionality of nftables, which identifies different operating systems from inside the ruleset. He shared some of the improvements he has been working on, and some of them are great, like version matching and wildcards.

Brett, engineer from Untangle, shared some plans to use a new nftables expression (nft_dict) to arbitrarily match on metadata. The proposed design is interesting because it covers some use cases from new perspectives. This triggered a debate on different design approaches and solutions to the issues presented.

Next day, Pablo Neira, head of the Netfilter project, started by opening a debate about extra features for nftables, like the ones provided via xtables-addons for iptables. The first we evaluated was GeoIP. I suggested having some generic infrastructure to be able to write/query external metadata from nftables, given we have more and more use cases looking for this (OSF, the dict expression, GeoIP). Other exotic extensions were discussed, like TARPIT, DELUDE, DHCPMAC, DNETMAP, ECHO, fuzzy, gradm, iface, ipp2p, etc.

A talk on connection tracking support for the Linux bridge followed, led by Pablo. A status update on the latest work was shared, and some debate happened regarding different use cases for ebtables & nftables.

Next topic was a complex one with no easy solutions: hosting of the Netfilter project infrastructure: git repositories, mailing lists, web pages, wiki deployments, bugzilla, etc. Right now the project has a couple of physical servers housed in a datacenter in Seville. But nobody has time to properly maintain them, upgrade them, and such. Also, part of our infra is getting old, for example the webpage. Some other stuff is mostly unmaintained, like project twitter accounts. Nobody actually has time to keep things updated, and this is probably the base problem. Many options were considered, including moving to github, gitlab, or other hosting providers.

After lunch, Pablo followed up with a status update on hardware flow offload capabilities for nftables. He started with an overview of the current status of ethtool_rx and tc offloads, capabilities and limitations. It should be possible for most commodity hardware to support some variable amount of offload capabilities, but apparently the code was not in very good shape. The new flow block API should improve this situation, while also giving support for nftables offload. There is a related article in LWN.

Next talk was by Phil, engineer at Red Hat. He commented on user-defined strings in nftables, which presents some challenges. Some debate happened, mostly to get to an agreement on how to proceed.

Group photo

Next day, Phil was the one to continue with the workshop talks. This time the talk was about his TODO list for iptables-nft, with a presentation and discussion of planned work. This triggered a discussion on how to handle certain bugs in Debian Buster, which have a lot of patch dependencies (so we cannot simply cherry-pick a single patch for stable). It turns out I maintain most of the Debian Netfilter packages, and Sebastian Delafond, who is also a Debian Developer, was attending the workshop too. We provided some Debian-side input on how to better proceed with fixes for specific bugs in Debian. Phil continued by pointing out several improvements that we require in nftables in order to support some rather exotic use cases in both iptables-nft and ebtables-nft.

Yi-Hung Wei, engineer working on Open vSwitch, shared some interesting features related to using the conntrack engine in certain scenarios. OVS is really useful in cloud environments. Specifically, the open discussion was around zone-based timeout policy support for other Netfilter use cases. It was pointed out by Pablo that nftables already supports this. By the way, the Wikimedia Cloud Services team plans to use OVS in the near future by means of Neutron (a VXLAN+OVS setup).

Phil gave another talk related to nftables undefined-behaviour situations. He has been working lately on polishing the last gaps between the -legacy and -nft flavors of iptables and friends. Mostly what we have yet to solve are some corner cases, plus some weird ICMP situations. Thanks to Phil for taking care of this. Actually, Phil has been contributing a lot to the Netfilter project in the past few years.

Stephen, engineer from secunet, followed up after lunch to bring up a couple of topics about improvements to the kernel datapath using XDP. He also commented on partitioning the system into control-plane and data-plane CPUs. The nftables flow table infrastructure is doing exactly this, as pointed out by Florian.

Florian continued with some open-for-discussion topics on pending features in nftables. It looks like every day we have more requests for more different setups and use cases with nftables. We need to define use cases as well as possible, and also try to avoid reinventing the wheel for some stuff.

Laura, engineer from Zevenet, followed up with a really interesting talk on load balancing and clustering using nftables. The amount of features and improvements added to nftlb since last year is amazing: stateless DNAT topologies, L7 helpers support, more topologies for virtual services and backends, improvements for affinities, security policies, different clustering architectures, etc. We had an interesting conversation about how we integrate with etcd in the Wikimedia Foundation for sharing information between load balancers and for pooling/depooling backends. They are also spearheading a proposal to include support for nftables in Kubernetes kube-proxy.

Abdessamad El Abbassi, also an engineer from Zevenet, shared the project that this company is developing to create an nft-based L7 proxy capable of offloading. They showed some metrics in which this new L7 proxy outperforms HAProxy for some setups. Quite interesting. Some debate also happened around SSL termination and how to better handle that situation.

That very afternoon the core team of the Netfilter project had a meeting in which some internal topics were discussed. Among other things, we decided to invite Phil Sutter to join the Netfilter coreteam.

I really enjoyed this round of Netfilter workshop. Pretty much enjoyed the time with all the folks, old friends and new ones.

CryptogramResetting Your GE Smart Light Bulb

If you need to reset the software in your GE smart light bulb -- firmware version 2.8 or later -- just follow these easy instructions:

Start with your bulb off for at least 5 seconds.

  1. Turn on for 8 seconds
  2. Turn off for 2 seconds
  3. Turn on for 8 seconds
  4. Turn off for 2 seconds
  5. Turn on for 8 seconds
  6. Turn off for 2 seconds
  7. Turn on for 8 seconds
  8. Turn off for 2 seconds
  9. Turn on for 8 seconds
  10. Turn off for 2 seconds
  11. Turn on
Bulb will flash on and off 3 times if it has been successfully reset.
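The whole dance is just five cycles of "on for 8 seconds, off for 2 seconds" followed by leaving the bulb on. A tiny helper (purely illustrative, not GE code) spells out the steps:

```python
# The GE bulb reset procedure above: five cycles of 8 s on / 2 s off,
# then a final "on"; the bulb flashes three times to confirm the reset.
def reset_schedule(cycles=5, on_s=8, off_s=2):
    """Return the (state, seconds) steps of the reset sequence."""
    steps = []
    for _ in range(cycles):
        steps.append(("on", on_s))    # steps 1, 3, 5, 7, 9
        steps.append(("off", off_s))  # steps 2, 4, 6, 8, 10
    steps.append(("on", None))        # step 11: leave the bulb on
    return steps

for state, seconds in reset_schedule():
    print(state, seconds if seconds is not None else "")
```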

Welcome to the future!

Planet DebianWouter Verhelst: DebConf Video player

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript that you want to target, it compiles to something that should work on almost every browser (that is, if it doesn't, in most cases the fix is to just tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I think it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on the last few years (or so). It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

Planet DebianSteve Kemp: Building a computer - part 1

I've been tinkering with hardware for a couple of years now, most of this is trivial stuff if I'm honest, for example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submit readings to an MQ bus.

Off-hand I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio receiver to an ESP8266 device, then had to reverse engineer the protocol of the device I wanted to listen for. I recorded a radio burst using an SDR dongle on my laptop, broke the transmission into 1s and 0s manually, worked out the payload, and then ported that code to the ESP8266 device.

Anyway I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best: the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC, later I moved on to assembly language mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video-games so often these days is because I'm just not very good without cheating ;)

Anyway the Z80 is a reasonably simple processor, available in a 40-pin DIP format. There are the obvious connectors for power, ground, and a clock source to make the thing tick. After that there are pins for the address-bus, and pins for the data-bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, because I have no ROM, because I have no keyboard I'm going to .. fake it.

The Z80 has 40 pins, of which I reckon we need to cable up over half. Only the Arduino Mega has enough pins for that, but I think if I use a Mega I can wire it to the Z80 and then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The Arduino will monitor the address-bus
    • When the Z80 makes a request to read the RAM at address 0x0000 it will return something from its memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff it will store it away in its memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short the Arduino will run a sketch with a 1024 byte array, which the Z80 will believe is its memory. Via the serial console I can read/write to that RAM, or have the contents hardcoded.
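That fake-memory scheme is easy to model. Here is a toy Python version (the class and method names and the address wrap-around policy are my own illustrative assumptions; the real thing would be an Arduino sketch in C++):

```python
# Toy model of the fake-RAM idea: a 1024-byte array services the Z80's
# memory-read and memory-write requests. Addresses outside the array
# wrap modulo its size -- one simple policy a real sketch might choose.
RAM_SIZE = 1024

class FakeMemory:
    def __init__(self):
        self.ram = bytearray(RAM_SIZE)

    def read(self, address):
        """Answer a Z80 memory-read: return the byte at the wrapped address."""
        return self.ram[address % RAM_SIZE]

    def write(self, address, value):
        """Answer a Z80 memory-write: store the byte at the wrapped address."""
        self.ram[address % RAM_SIZE] = value & 0xFF

mem = FakeMemory()
mem.write(0x0000, 0x3E)       # e.g. the first byte of an LD A, n instruction
print(hex(mem.read(0x0000)))  # the Z80 would fetch this as its first opcode
```

The serial console read/write access mentioned above would then just be a thin wrapper around the same `read` and `write` calls.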

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway I've ordered a bunch of Z80 chips, and an Arduino Mega (since I own only one Arduino, I moved on to ESP8266 devices pretty quickly), so once it arrives I'll document the process further.

Once it works I'll need to slowly remove the Arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the Arduino over time with standalone stuff.

The end result? I guess I have no illusions I can connect a full-sized keyboard to the chip, and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly, and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

Sam VargheseThe Rise and Fall of the Tamil Tigers is full of errors

How many mistakes should one accept in a book before it is pulled from sale? In the normal course, when a book is accepted for publication by a recognised publishing company, there are experienced editors who go through the text, correct it and ensure that there are no major bloopers.

Then there are fact-checkers who ensure that what is stated within the book is, at least, mostly aligned with public versions of events from reliable sources.

In the case of The Rise and Fall of the Tamil Tigers, a third-rate book that is being sold by some outlets online, neither of these exercises has been carried out. And it shows.

If the author, Damian Tangram, had voiced his views or even put the entire book online as a free offering, that would be fine. He is entitled to his opinion. But when he is trying to trick people into buying what is a very poor-quality book, then warnings are in order.

Here are just a few of the screw-ups in the first 14 pages (the book is 375 pages!):

In the foreword, the words “Civil War” are capitalised. This is incorrect and would be right only if the civil war were exclusive to Sri Lanka. This is not the case; there are numerous civil wars occurring around the world.

Next, the foreword claims the war started in 1985. This, again, is incorrect. It began in July 1983. The next claim is that this war “had its origins in the post-war political exploitation of socially divisive policies.” Really? Post-war means after the war – this conflict must be the first in the world to begin after it was over!

There is a further line indicating that the author does not know how to measure time: “After spanning three decades…” A decade is 10 years, three decades would be 30 years. The war lasted a little less than 26 years – July 23, 1983 to May 19, 2009.
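The duration claim is easy to verify with a quick check using the two dates given above:

```python
# The war's span: July 23, 1983 to May 19, 2009.
from datetime import date

start = date(1983, 7, 23)
end = date(2009, 5, 19)

years = (end - start).days / 365.25
print(f"{years:.1f} years")  # a little under 26 years, nowhere near three decades
```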

Again, in the foreword, the author claims that the Liberation Tigers of Tamil Eelam “grew from being a small despot insurgency to the most dangerous and effective terrorist organizations the world has ever seen.” The LTTE was started by Velupillai Pirapaharan in the 1970s. By 1983, it was already a well-organised fighting force. Further, the English is all wonky here, the word should be “organization”, not the plural “organizations”.

And this is just the first paragraph of the book!

The second paragraph of the foreword claims about the year 2006: “Just when things could not be worse Sri Lanka was plunged into all-out war.” The war started much earlier and was in a brief hiatus. The final effort to eliminate the LTTE began on April 25, 2006. And a comma would be handy there.

Then again, the book claims in the foreword that the only person who refused to compromise in the conflict had been Pirapaharan. This is incorrect as the government was also equally stubborn until 2002.

To go on, the foreword says the book gives “an example of how a terrorist organisation like the LTTE can proliferate and spread its murderous ambitions”. The book suffers from numerous generalisations of this kind, all of which are standout examples of malapropism. And one’s ambitions grow, one does not “spread ambitions”.

Again, and we are still in the foreword, the book says the LTTE “was a force that lasted for more than twenty-five years…” Given that it took shape in the 1970s, this is again incorrect.

Next, there is a section titled “About this Book”. Again, misplaced capitalisation of the word “Book”. The author says he visited Sri Lanka for the first time in 1989 soon after he “met and married wife….” Great use of butler English, that. Additionally, he could not have married his wife; the woman in question became his wife only after he married her.

That year, he claims the “most frightening organization” was the JVP or Janata Vimukti Peramuna or People’s Liberation Front. Two years later, when he returned for a visit, the JVP had been defeated but “the enemy to peace was the LTTE”. This is incorrect as the LTTE did not offer any let-up while the JVP was engaging the Sri Lankan army.

Of the Tigers he says, “the power that they had acquired over those short years had turned them into a mythical unstoppable force.” This is incorrect; the Tigers became a force to be reckoned with many years earlier. They did not undergo any major evolution between 1989 and 1991.

The author’s only connection to Sri Lanka is through marrying a Sri Lankan woman. This, plus his visits, he claims give him a “close connection” to the island!

So we go on: “I returned to Sri Lankan several times…” The word is Lanka, not Lankan. More proof of a lack of editing, if any is needed by now.

“Lives were being lost; freedoms restricted and the economy being crushed under a financial burden.” The use of that semi-colon illustrates Tangram’s level of ignorance of English. Factually, this is all stating the bleeding obvious as all these fallouts of the war had begun much earlier.

The author claims that one generation started the war, a second continued to fight and a third was about to grow up and be thrown into a conflict. How three generations can come and go in the space of 26 years is a mystery and more evidence that this man just flings words about and hopes that they make sense.

More in this same section: “To know Sri Lanka without war was once an impossible dream…” Rubbish, I lived in Sri Lanka from 1957 till 1972 and I knew peace most of the time.

Ending this section is another screw-up: “I returned to Sri Lanka in 2012, after the war had ended, to witness the one thing I had not seen in over 25 years: Peace.” Leaving aside the wrong capitalisation of the word “peace”, since the author’s first visit was in 1989, how does 2012 make it “over 25 years”? By any calculation, that comes to 23 years. This is a ruse used throughout the book to give the impression that the author has a long connection to Sri Lanka when in reality he is just an opportunist trying to turn some bogus observations about a conflict he knows nothing about into a cash cow.

And so far I have covered hardly three full pages!!!

Let’s have a brief look at Ch-1 (one presumes that means Chapter 1) which is titled “Understanding Sri Lanka” with a sub-heading “Introduction Understanding Sri Lanka: The impossible puzzle”. (If it is impossible as claimed, how does the author claim he can explain it?)

So we begin: “…there is very little information being proliferated into the general media about the nation of Sri Lanka.” The author obviously does not own a dictionary and is unaware how the word “proliferated” should be used.

There are several strange conglomerations of words which mean nothing; for example, take this: “Without referring to a map most people would struggle to name any other city than Colombo. Even the name of the island may reflect some kind of echo of when it changed from being called Ceylon to when it became Sri Lanka.” Apart from all the missing punctuation, and the mixing up of the order of words, what the hell does this mean? Echo?

On the next page, the book says: “At the bottom corner of India is the small teardrop-shaped island of Sri Lankan.” That sentence could have done without the last “n”. Once again, no editor. Only Tangram the great.

The word Sinhalese is spelt that way; there is nobody who spells it “Singhalese”. But since the author is unable to read Sinhala, the local language, he makes errors of this kind over and over again. Again, common convention for the usage of numbers in print dictates that one to nine be spelt out and any higher number be used as a figure. The author is blissfully unaware of this too.

The percentage of Sinhalese-speakers is given as “about 70%” when the actual figure is 74.9%. And then in another illustration of his sloppiness, the author writes “The next largest groups are the Tamils who make up about 15% of the population.” The Tamils are not a single group, being made up of plantation Tamils who were brought in by the British from India to work in the tea estates (4.2%) and the local Tamils (11.2%) who have been there much longer.

He then refers to a group whom he calls Burgers – which is something sold in a fast-food outlet. The Sri Lankan ethnic group is called Burghers, who are the product of inter-marriages between Sinhalese and Portuguese, British or Dutch invaders. There is a reference made to a group of indigenous people, whom the author calls “Vedthas.” Later, on the same page, he calls these people Veddhas. This is not the first time that it is clear that he could not be bothered to spell-check this bogus tome.

There’s more: the “Singhalese” (the author’s spelling) are claimed to be of “Arian” origin. The word is Aryan. Then there is a claim that the Veddhas are related to the “Australian Indigenous Aborigines”. One has yet to hear of any non-Indigenous Aborigines. Redundant words are one thing at which Tangram excels.

There is reference to some king of Sri Lanka known as King Dutigama. The man’s name was Dutugemunu. But then what’s the difference, eh? We might as well have called him Charlie Chaplin!

Referring to the religious groups in Sri Lanka, Tangram writes: “Hinduism also has a long history in Sri Lanka with Kovils…” The word is temples, unless one is writing in the vernacular. He claims Buddhists make up 80%; the correct figure is 70.2%.

Then referring to the Bo tree under which Gautama Buddha is claimed to have found enlightenment, Tangram claims it is more than 2000 years old and the oldest cultivated tree alive today. He does not know about the Bristlecone pine trees that date back more than 4700 years. Or the redwoods that carbon dating has shown to be more than 3000 years old.

This brings me to page 14 and I have crossed 1500 words! The entire book would probably take me a week to cover. But this number of errors should serve to prove my point: this book should not be sold. It is a fraud on the public.

Worse Than Failure: CodeSOD: Null Error Handling

Oliver works for a very large company. Just recently, someone decided that it was time to try out those “newfangled REST services.”

Since this was “new”, at least within the confines of the organization, that meant there were a lot more eyes on the project and a more thorough than average code review process. That’s how Oliver found this.

@SuppressWarnings("null")
public void uploadData(Payload payload) {
  // preparation of payload and httpClient
  //...
  @NonNull
  HttpResponse response = null;
  response = httpClient.execute(request);
  if (response.getStatus() != 200) {
    throw new RuntimeException(
      String.format(
        "Http request failed with status %s",
        response.getStatus()
      )
    );
  }
}

The purpose of this code is to upload a JSON-wrapped payload to one of the RESTful endpoints.

Let’s start with the error handling. Requiring a 200 status code is probably not a great idea. Hope the server side doesn’t say, “Oh, this creates a new resource, I had better return a 201.” And you might be thinking to yourself, “Wait, shouldn’t any reasonable HTTP client raise an exception if the status code isn’t a success?” You’d be right, and that is what this httpClient object does. So if the status isn’t a success code, execution never even reaches the throw statement; the only time this code throws is on a successful response whose status simply isn’t 200. In other words, it raises an error only when there isn’t one.

The HttpResponse variable is annotated with a @NonNull annotation, which means at compile time, any attempt to set the variable to null should trigger an error. Which, you’ll note, is exactly what happens on the same line.

It’s okay, though, as the dependency setup in the build file was completely wrong, and they never loaded the library which enforced those annotations in the first place. So @NonNull was just for decoration, and had absolutely no function anyway.
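For contrast, here is a hedged sketch of the status check this code needed; the helper name and the 2xx-range assumption are mine, not the original code’s:

```java
// Hypothetical helper: accept the whole 2xx range as success instead of
// hard-coding 200, so a 201 "Created" response isn't mistaken for a failure.
public class StatusCheck {
    static boolean isSuccess(int status) {
        return status >= 200 && status < 300;
    }

    public static void main(String[] args) {
        System.out.println(isSuccess(200)); // true
        System.out.println(isSuccess(201)); // true
        System.out.println(isSuccess(404)); // false
    }
}
```

Whether or not the client already throws on non-success, a range check like this at least only raises errors on actual failures.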



Planet Debian: Sven Hoexter: Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution), which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds and the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm

$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size-wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!

Cryptogram: Details of the Cloud Hopper Attacks

Reuters has a long article on the Chinese government APT attack called Cloud Hopper. It was much bigger than originally reported.

The hacking campaign, known as "Cloud Hopper," was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.

Yet the campaign ensnared at least six more major technology firms, touching five of the world's 10 biggest tech service providers.

Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun off its services arm in a merger with Computer Sciences Corporation in 2017 to create DXC.

Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America's nuclear submarines at a Virginia shipyard.


Planet Debian: Jonathan Dowland: Bose on-ear wireless headphones

Azoychka modelling the headphones


Earlier this year, and after about five years, I've had to accept that my beloved AKG K451 fold-able headphones have finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that got jammed in the jack, most likely snapped from one of several headphone wires I'd gone through.

The K451's were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but the difference was close, and the portability aspect gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not a lot of on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews for their bigger brother (the 1000 series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, but the circumstances were very poor: a noisy store, the headphones tethered to a security frame on a very short leash, so I had to stoop to put them on; no ability to try my own music through the headset. The headset in actuality seemed poorly constructed, with the hard plastic seeming to be ill-fitting such that the headset rattled when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, the only criticism I have of the Bose headphones is that I can get some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances in which I used wired headphones, is all the situations in which I can now use them where I wouldn't have bothered before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

Worse Than Failure: CodeSOD: Structured Searching

It’s hard to do any non-trivial programming in C without having to use a struct. Structs are great! A single variable holds access to multiple pieces of data, and all the nasty details of how they’re laid out in memory are handled by the compiler.

In more modern OO languages, we take that kind of thing for granted. We’re so abstracted from the details of how memory is laid out it’s easy to forget how tricky and difficult actually managing that kind of memory layout is.

Of course, if you’re Jean-Yves R’s co-worker, letting structs manage your memory layout is beginner mode stuff.

Jean-Yves was trying to understand why a bunch of structs were taking up huge amounts of memory, relative to how much they should take. Every bit of memory mattered, as this was an embedded application. Already, these structs weren’t actually stored in RAM, but in the flash memory on the device. They served as a database: when a request came in over Modbus or CAN or I2C, the ID on the request would be used to look up the struct containing metadata for handling that request. It was complex software, so there were a lot of these structs taking up flash memory.

It didn’t take long to see that the structs were defined with padding to ensure every field fell on a multiple of 32-bits, which meant there were huge gaps in every struct. Why? Well, this is an example of how they’d search the database:

/* These lines are actually in an included header */
#define DAT_calcOffset(address,offset)	 (address += offset)
#define DAT_MODBUS_ID_OFFSET					0x00000002	/* (32 bits pointer) */

/*
 * Database search code
 */
/*
 * Setting up start adress at beginning of flash zone + an 
 * offset corresponding to the member of struct
 */
DAT_calcOffset(pu32SearchBaseAddress,DAT_MODBUS_ID_OFFSET);
            
/* Return : Status */
if(u16ModBusID >= 0x8000)
{
    u16ModBusID -= 0x8000;
    bReturnStatus = TRUE;
}


/* Increment until we find the correct ID  */
while((*pu32SearchBaseAddress != u16ModBusID) && (u16Index < u16NbOfData))
{
    pu32SearchBaseAddress += (sizeof(DAT_typDataArray)/4);
    u16Index++;
}

if(u16Index == u16NbOfData)
{
    if(penFlagReturn)
        *penFlagReturn = (DAT_typFlag)DAT_enCFlagErrOF;

    xSemaphoreGive(DAT_tMutexDatabase);
    return DAT_tNullStr;
}

pu32SearchBaseAddress is a pointer to a struct in the flash memory, at least until the DAT_calcOffset macro does a little pointer arithmetic to point it at a field within the struct: specifically, the modbus message ID. Then, in the while loop, we keep incrementing that pointer based on sizeof(DAT_typDataArray)/4, which is the size of our struct in 32-bit words (since the pointer is a u32 pointer).

You might be wondering, why not do something sane and readable, like pu32SearchBaseAddress->modbus_id? Well, obviously this is “optimized”.

Like all “optimizations”, this one is a tradeoff. In this case, the memory layout of the structs is now fixed, and the structs cannot ever evolve in the future without completely breaking this code. It’s also not portable, since struct padding, alignment and pointer sizes differ across compilers and architectures.

On the flip side, this also offers many benefits. The code is cryptic and unreadable, which helps ensure job security. The fact that each type of message (Modbus, CANbus, I2C, etc.) has its own database means that this code has to be written for each one of those protocols, ensuring that the developer always has lots of code to copy/paste between those datasets with minor changes to constants and variable names. This keeps their lines-of-code count up in the source control metrics.

It probably didn’t do anything for performance, of course, as pretty much any C compiler is going to compile the -> operator into a static memory offset anyway, making this “optimization” useless.


Planet Debian: Eddy Petrișor: Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}
The initial skeleton did not contain the derive Debug and the capacity field; I added them myself.

Now I am trying to understand what needs to happen behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}
I was really struggling to understand what the returned iterator type should be in my case, since, obviously, std::vec is out because a) I am trying to do a no_std implementation of something that should look a little like b) a std::vec.

That was until I found this wonderful example on a custom type without using any already implemented Iterator, but defining the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}
The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that, once I had only one new thing, the generics (luckily the lifetime part seemed to simply be considered part of the generic thing), everything was easier to navigate.


Still, the fact that there are so many new things at once, one of them being lifetimes (which cannot be taught, only experienced, @oli_obk), makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing "Deref for StackVec" for the same reasons.

I think I am seeing on my own skin what Oliver Scherer was saying: that big infodumps at the beginning are not the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should be the signature of the impl? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}
Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's unclear, at least to me, at this point. And because of that I am unsure what the implementation should even look like.
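For what it's worth, one plausible answer, sketched here as an assumption rather than the course's intended solution: since StackVec is slice-backed like Vec, Target can be [T], mirroring how Vec<T> derefs to a slice. The struct below is a simplified stand-in for the real one, and core::ops is used so the same impl works under no_std:

```rust
// A simplified stand-in for the course's StackVec (capacity omitted).
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

// Assumption: Target = [T], like Vec<T>; deref exposes only the
// initialized prefix, not the whole backing slice.
impl<'a, T: 'a> core::ops::Deref for StackVec<'a, T> {
    type Target = [T];

    fn deref(&self) -> &[T] {
        &self.storage[..self.len]
    }
}

fn main() {
    let mut backing = [1, 2, 3, 0, 0];
    let sv = StackVec { storage: &mut backing, len: 3 };
    // Deref coercion makes slice methods available directly on StackVec:
    assert_eq!(sv.len(), 3);
    assert_eq!(sv.first(), Some(&1));
    println!("{:?}", &*sv); // prints [1, 2, 3]
}
```

With this in place, iter(), first(), len() and friends all come for free through deref coercion, which is exactly why Vec picks a slice as its Target.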

I don't know what I'm doing, but I hope things will become clear with more exercise.


Cory Doctorow: Steering with the Windshield Wipers

In my latest podcast (MP3), I read my May Locus column: Steering with the Windshield Wipers. It makes the argument that much of the dysfunction of tech regulation — from botched anti-sex-trafficking laws to the EU’s plan to impose mass surveillance and censorship to root out copyright infringement — are the result of trying to jury-rig tools to fix the problems of monopolies, without using anti-monopoly laws, because they have been systematically gutted for 40 years.

A lack of competition rewards bullies, and bullies have insatiable appetites. If your kid is starving because they keep getting beaten up for their lunch money, you can’t solve the problem by giving them more lunch money – the bullies will take that money too. Likewise: in the wildly unequal Borkean inferno we all inhabit, giving artists more copyright will just enrich the companies that control the markets we sell our works into – the media companies, who will demand that we sign over those rights as a condition of their patronage. Of course, these companies will be subsequently menaced and expropriated by the internet distribution companies. And while the media companies are reluctant to share their bounties with us artists, they reliably expect us to share their pain – a bad quarter often means canceled projects, late payments, and lower advances.

And yet, when a lack of competition creates inequities, we do not, by and large, reach for pro-competitive answers. We are the fallen descendants of a lost civilization, destroyed by Robert Bork in the 1970s, and we have forgotten that once we had a mighty tool for correcting our problems in the form of pro-competitive, antitrust enforcement: the power to block mergers, to break up conglomerates, to regulate anticompetitive conduct in the marketplace.

But just because we know where to find the copyright lever, it doesn’t follow that yanking on it hard enough will make it do the work of antitrust law.


Krebs on Security: Patch Tuesday Lowdown, July 2019 Edition

Microsoft today released software updates to plug almost 80 security holes in its Windows operating systems and related software. Among them are fixes for two zero-day flaws that are actively being exploited in the wild, and patches to quash four other bugs that were publicly detailed prior to today, potentially giving attackers a head start in working out how to use them for nefarious purposes.

Zero-days and publicly disclosed flaws aside for the moment, probably the single most severe vulnerability addressed in this month’s patch batch (at least for enterprises) once again resides in the component of Windows responsible for automatically assigning Internet addresses to host computers — a function called the “Windows DHCP server.”

The DHCP weakness (CVE-2019-0785) exists in most supported versions of Windows server, from Windows Server 2012 through Server 2019.

Microsoft said an unauthenticated attacker could use the DHCP flaw to seize total, remote control over vulnerable systems simply by sending a specially crafted data packet to a Windows computer. For those keeping count, this is the fifth time this year that Redmond has addressed such a critical flaw in the Windows DHCP client.

All told, only 15 of the 77 flaws fixed today earned Microsoft’s most dire “critical” rating, a label assigned to flaws that malware or miscreants could exploit to commandeer computers with little or no help from users. It should be noted that 11 of the 15 critical flaws are present in or are a key component of the browsers built into Windows — namely, Edge and Internet Exploder Explorer.

One of the zero-day flaws — CVE-2019-1132 — affects Windows 7 and Server 2008 systems. The other — CVE-2019-0880 — is present in Windows 8.1, Server 2012 and later operating systems. Both would allow an attacker to take complete control over an affected system, although each is what’s known as an “elevation of privilege” vulnerability, meaning an attacker would already need to have some level of access to the targeted system.

CVE-2019-0865 is a denial-of-service bug in a Microsoft open-source cryptographic library that could be used to tie up system resources on an affected Windows 8 computer. It was publicly disclosed a month ago by Google’s Project Zero bug-hunting operation after Microsoft reportedly failed to address it within Project Zero’s stated 90-day disclosure deadline.

The other flaw publicly detailed prior to today is CVE-2019-0887, which is a remote code execution flaw in the Remote Desktop Services (RDP) component of Windows. However, this bug also would require an attacker to already have compromised a target system.

Mercifully, there do not appear to be any security updates for Adobe Flash Player this month.

Standard disclaimer: Patching is important, but it usually doesn’t hurt to wait a few days before Microsoft irons out any wrinkles in the fixes, which sometimes introduce stability or usability issues with Windows after updating (KrebsOnSecurity will endeavor to update this post in the event that any big issues with these patches emerge).

As such, it’s a good idea to get in the habit of backing up your system — or at the very least your data — before applying any updates. The thing is, newer versions of Windows (e.g. Windows 10+) by default will go ahead and decide for you when that should be done (often this is in the middle of the night). But that setting can be changed.

If you experience any problems installing any of the patches this month, please feel free to leave a comment about it below; there’s a better-than-even chance that other readers have experienced the same and may even chime in with some helpful advice and tips.

Further reading:

Qualys Patch Tuesday Blog

Rapid7

Tenable [full disclosure: Tenable is an advertiser on this blog].

Planet Debian: Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people to sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information, and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.

Planet DebianSean Whitton: Upload to Debian with just 'git tag' and 'git push'

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring,[1] produces a .dsc and .changes, signs these, and uploads the result to ftp-master.[2]

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.
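Conceptually, the wrapper boils down to something like the following sketch (illustrative only: the tag name and message shown here are assumptions, and the real tool creates a signed tag with git tag -s whose message carries structured metadata for tag2upload to parse; see the git-debpush(1) manpage for what it actually does):

```shell
# Illustrative sketch only: the tag name and message below are
# assumptions, and the real tool creates a *signed* tag (git tag -s)
# whose message carries structured metadata for tag2upload.
version=1.23
git tag -a "debian/${version}" -m "release ${version} to unstable" HEAD
git push origin "debian/${version}"
```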

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.[3]

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, in Debian testing, unstable and buster-backports at the time of writing).

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23
    

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.


  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

Planet DebianThomas Lange: Talks, articles and a podcast in German

In April I gave a talk about FAI at the GUUG-Frühjahrsfachgespräch in Karlsruhe (Slides). At this nice meeting, Ingo from RadioTux recorded an interview with me (https://www.radiotux.de/index.php?/archives/8050-RadioTux-Sendung-April-2019.html) (from 0:35:30).

Then I found an article in the iX Special 2019 magazine about automation in the data center which mentioned FAI. Nice. But I was very surprised and happy when I saw a whole article about FAI in the Linux Magazin 7/2019. A very good article with some focus on network things, but the class system and installing other distributions are also described. And they will publish another article about the FAI.me service in a few months. I'm excited!

In a few days, I'm going to DebConf19 in Curitiba for two weeks. I will work on Debian web stuff, check my other packages (rinse, dracut, tcsh) and hope to meet a lot of friendly people.

And in August I'm giving a talk at FrOSCon about FAI.

FAI

LongNowThe Global Tree Restoration Potential

Earlier this month, a study appeared in Science that found that a global reforestation effort could capture 205 gigatons of carbon over the next 40-100 years—two thirds of all the carbon humans have released since the industrial revolution:

The restoration of trees remains among the most effective strategies for climate change mitigation. We mapped the global potential tree coverage to show that 4.4 billion hectares of canopy cover could exist under the current climate. Excluding existing trees and agricultural and urban areas, we found that there is room for an extra 0.9 billion hectares of canopy cover, which could store 205 gigatonnes of carbon in areas that would naturally support woodlands and forests. This highlights global tree restoration as our most effective climate change solution to date. However, climate change will alter this potential tree coverage. We estimate that if we cannot deviate from the current trajectory, the global potential canopy cover may shrink by ~223 million hectares by 2050, with the vast majority of losses occurring in the tropics. Our results highlight the opportunity of climate change mitigation through global tree restoration but also the urgent need for action.

Via Science.

Scientific American unpacked the study and its potential implications:

The study team analyzed almost 80,000 satellite photo measurements of tree cover worldwide and combined them with enormous global databases about soil and climate conditions, evaluating one hectare at a time. The exercise generated a detailed map of how many trees the earth could naturally support—where forests grow now and where they could grow, outside of areas such as deserts and savannahs that support very few or no trees. The team then subtracted existing forests and also urban areas and land used for agriculture. That left 0.9 billion hectares that could be forested but have not been. If those spaces were filled with trees that already flourish nearby, the new growth could store 205 gigatons of carbon by the time the forests mature.

After 40 to 100 years, of course, the storage rate would flatten as forest growth levels off—but the researchers say the 205 gigatons would be maintained as old trees die and new ones grow. There would be “a bank of excess carbon that is no longer in the atmosphere,” Crowther says.

Via Scientific American.

CryptogramCell Networks Hacked by (Probable) Nation-State Attackers

A sophisticated attacker has successfully infiltrated cell providers to collect information on specific users:

The hackers have systematically broken in to more than 10 cell networks around the world to date over the past seven years to obtain massive amounts of call records -- including times and dates of calls, and their cell-based locations -- on at least 20 individuals.

[...]

Cybereason researchers said they first detected the attacks about a year ago. Before and since then, the hackers broke into one cell provider after the other to gain continued and persistent access to the networks. Their goal, the researchers believe, is to obtain and download rolling records on the target from the cell provider's database without having to deploy malware on each target's device.

[...]

The researchers found the hackers got into one of the cell networks by exploiting a vulnerability on an internet-connected web server to gain a foothold onto the provider's internal network. From there, the hackers continued to exploit each machine they found by stealing credentials to gain deeper access.

Who did it?

Cybereason did say it was with "very high probability" that the hackers were backed by a nation state but the researchers were reluctant to definitively pin the blame.

The tools and the techniques -- such as the malware used by the hackers -- appeared to be "textbook APT 10," referring to a hacker group believed to be backed by China, but Div said it was either APT 10, "or someone that wants us to go public and say it's [APT 10]."

Original report:

Based on the data available to us, Operation Soft Cell has been active since at least 2012, though some evidence suggests even earlier activity by the threat actor against telecommunications providers.

The attack was aiming to obtain CDR records of a large telecommunications provider.

The threat actor was attempting to steal all data stored in the active directory, compromising every single username and password in the organization, along with other personally identifiable information, billing data, call detail records, credentials, email servers, geo-location of users, and more.

The tools and TTPs used are commonly associated with Chinese threat actors.

During the persistent attack, the attackers worked in waves -- abandoning one thread of attack when it was detected and stopped, only to return months later with new tools and techniques.

Boing Boing post.

Planet DebianSteve Kemp: Upgraded my first host to buster

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far two minor niggles, but nothing major.

My hosts are controlled, sometimes, by puppet. The puppet-master is running stretch and has puppet 4.8.2 installed. After upgrading my test-host to the new stable I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure if there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

     # puppet agent --test
     Warning: Unable to fetch my node definition, but the agent run will continue:
     Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
     Info: Retrieving pluginfacts
     ..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2
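One way to do that (a minimal sketch; sed keeps a .bak backup of the file, but check the result afterwards):

```shell
# Prefix the SECLEVEL override with '#' to comment it out;
# the original file is kept as /etc/ssl/openssl.cnf.bak.
sed -i.bak 's/^CipherString = DEFAULT@SECLEVEL=2$/#&/' /etc/ssl/openssl.cnf
```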

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib  --stats ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
   --exclude=/proc \
   --exclude=/swap.file \
   --exclude=/sys  \
   --exclude=/run  \
   --exclude=/dev  \
   --exclude=/var/log \
   /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments so that the command ended with the destination path, and by changing --exclude=x to --exclude x:

borg create ${flags} --compression=zlib  --stats \
   --exclude /proc \
   --exclude /swap.file \
   --exclude /sys  \
   --exclude /run  \
   --exclude /dev  \
   --exclude /var/log \
   ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S)  /

That approach works on my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

Worse Than FailureProcess by Management

Alice's team was thirty developers, taking up most of the floor of a nondescript office building in yet another office park. Their team was a contractor-to-a-contractor for a branch of the US military, which meant a variety of things. First, bringing a thumb drive into the office was a firing offense. Second, they were used to a certain level of bureaucracy. You couldn't change a line of code unless you had four different documents confirming the change was necessary and was authorized, and actually deploying a change was a milestone event with code freezes and expected extra hours.

Despite all this, the thirty person team had built a great working relationship. They had made their process as efficient as they could, and their PM, Doug, understood the work well enough to keep things streamlined. In fact, Doug did such a good job that Doug got promoted. Enter Millie, his replacement.

Millie had done a stint in the Air Force and then went back to school for her MBA. She had bounced around a few different companies, and had managed to turn every job change into a small promotion. This was Millie's first time overseeing a pool of software developers, but she had an MBA. Management was management, and there was no reason she had to understand what developers did, so long as she understood the key performance indicators (KPI).

Like the quantity of defects. That was a great KPI, because it was measurable, had a clear negative impact, and it could be mitigated. Mitigated with a process.

After a few weeks of getting her bearings, Millie called a meeting. "Alright, everyone, I've been observing a little bit of how we work, and I think there may be some communication and organization issues, so I wanted to address that. I've looked at our current workflow, and I've made a few small changes that I wanted to review."

On one side of the white board, she drew a bubble labeled "In Production". "This is where we want our code to be, right? Working, quality-controlled code, in production, with no defects." On the opposite side of the board, she added a bubble for "PCCB Ticket." "And any code change starts with one of these- the Product Change Control Board reviews an open ticket. They'll then turn that ticket into a Functional Requirement Document." Millie added another bubble for that.

Alice had some questions already, but not quite about the inputs or outputs.

A simple bubble diagram

"Great, okay, so… we need to iterate on the FRD, and once the PCCB signs off we'll convert that to a System Requirement Document. Either a PM or a SME will decompose the SRD into one or more Work Packages."

Millie continued scribbling furiously as she explained exactly what a work package was, as this wasn't currently a term in use at their organization. Her explanation wasn't terribly clear, as Millie explained it as the set of steps required to implement a single feature, but a Functional Requirement was a feature, so how was the Work Package (WP) any different than the FRD?

"Please, hold your questions until the end, we have a lot to get through."

a more complex bubble diagram

Finally, once the Work Package was analyzed, you could create a "Ticket Lifecycle Document", a new document which would hold all information about all of the efforts put towards the PCCB ticket. That meant the TLD contained all the WPs, which raised questions about the point of having work packages at all. From the TLD came a new PCCB ticket- a "Ready" ticket- and only then could those requirements go onto a Release backlog and a release management plan be created.

"Finally," Millie explained, "we're ready to write code." In the center of the board, she added a single bubble: "Code".

the kind of bubble diagram that gives you hives

And on and on the meeting went. The diagram grew. Lines kept getting added. Bubbles got inserted between existing bubbles. Arrows pointed to labels, or to bubbles, or maybe to arrows? By the end of Millie's meeting, it looked something like this.

a bubble diagram that could be used to summon the elder things from beyond the realm of darkness

"There, that lays out the correct pattern for getting our software to production, with a feedback loop that prevents defects. Any questions?"

There weren't any questions at the meeting, no. But boy, were there questions. Loads of questions. Questions like, "What font should I use on my resume?" and "is it time to stop listing my VBA experience on my resume?"

Over the next few months, under Millie's leadership, 17 developers from the 30-person team left the company, Alice among them. Every once in a while, Alice checks the job listings for that company, to see if those developer positions have been filled. They're still hiring for software developers. Unfortunately, Alice hasn't seen any openings for a PM, so Millie is probably still there.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianDaniel Kahn Gillmor: DANE OPENPGPKEY for debian.org

DANE OPENPGPKEY for debian.org

I recently announced the publication of Web Key Directory for @debian.org e-mail addresses. This blog post announces another way to fetch OpenPGP certificates for @debian.org e-mail addresses, this time using only the DNS. These two mechanisms are complementary, not in competition. We want to make sure that whatever certificate lookup scheme your OpenPGP client supports, you will be able to find the appropriate certificate.

The additional mechanism we're now supporting (since a few days ago) is DANE OPENPGPKEY, specified in RFC 7929.

How does it work?

DANE OPENPGPKEY works by storing a minimized OpenPGP certificate in the DNS, ideally in a subdomain, at a label based on a hashed version of the local part of the e-mail address.

With modern GnuPG, if you're interested in retrieving the OpenPGP certificate for dkg as served by the DNS, you can do:

gpg --auto-key-locate clear,nodefault,dane --locate-keys dkg@debian.org
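Under the hood, the lookup label is derived from the local part of the address. As a rough sketch with standard shell tools (per RFC 7929, the label is the SHA2-256 digest of the UTF-8 local part, truncated to 28 octets and hex-encoded):

```shell
# Compute the DANE OPENPGPKEY label for dkg@debian.org:
# SHA2-256 of the local part, truncated to 56 hex characters.
local_part=dkg
label=$(printf '%s' "$local_part" | sha256sum | cut -c1-56)
echo "${label}._openpgpkey.debian.org"

# The record itself can then be queried with, e.g.:
#   dig +short TYPE61 "${label}._openpgpkey.debian.org"
```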

If you're interested in how this DNS zone is populated, take a look at the code that handles it. Please suggest changes if you see ways that this could be improved.

Unfortunately, GnuPG does not currently do DNSSEC validation on these records, so the cryptographic protections offered by this client are not as strong as those provided by WKD (which at least checks the X.509 certificate for a given domain name against the list of trusted root CAs).

Why offer both DANE OPENPGPKEY and WKD?

I'm hoping that the Debian project can ensure that no matter whatever sensible mechanism any OpenPGP client implements for certificate lookup, it will be able to find the appropriate OpenPGP certificate for contacting someone within the @debian.org domain.

A clever OpenPGP client might even consider these two mechanisms -- DANE OPENPGPKEY and WKD -- as corroborative mechanisms, since an attacker who happens to compromise one of them may find it more difficult to compromise both simultaneously.

How to update?

If you are a Debian developer and you want your OpenPGP certificate updated in the DNS, please follow the normal procedures for Debian keyring maintenance like you always have. When a new debian-keyring package is released, we will update these DNS records at the same time.

Thanks

Setting this up would not have been possible without help from weasel on the Debian System Administration team, and Noodles from the keyring-maint team providing guidance.

DANE OPENPGPKEY was documented and shepherded through the IETF by Paul Wouters.

Thanks to all of these people for making it possible.

,

Rondam RamblingsThe Trouble with Many Worlds

Ten years ago I wrote an essay entitled "The Trouble with Shadow Photons" describing a problem with the dramatic narrative of what is commonly called the "many-worlds" interpretation of quantum mechanics (but which was originally and IMHO more appropriately called the "relative state" interpretation) as presented by David Deutsch in his (otherwise excellent) book, "The Fabric of Reality."  At the

CryptogramCardiac Biometric

MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. "I don't want to say you could do it from space," says Steward Remaly, of the Pentagon's Combatting Terrorism Technical Support Office, "but longer ranges should be possible."

Contact infrared sensors are often used to automatically record a patient's pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works though typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

[...]

Remaly's team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it's likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. "Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy," he says.


I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.

Krebs on SecurityWho’s Behind the GandCrab Ransomware?

The crooks behind an affiliate program that paid cybercriminals to install the destructive and wildly successful GandCrab ransomware strain announced on May 31, 2019 they were terminating the program after allegedly having earned more than $2 billion in extortion payouts from victims. What follows is a deep dive into who may be responsible for recruiting new members to help spread the contagion.

Image: Malwarebytes.

Like most ransomware strains, the GandCrab ransomware-as-a-service offering held files on infected systems hostage unless and until victims agreed to pay the demanded sum. But GandCrab far eclipsed the success of competing ransomware affiliate programs largely because its authors worked assiduously to update the malware so that it could evade antivirus and other security defenses.

In the 15-month span of the GandCrab affiliate enterprise beginning in January 2018, its curators shipped five major revisions to the code, each corresponding with sneaky new features and bug fixes aimed at thwarting the efforts of computer security firms to stymie the spread of the malware.

“In one year, people who worked with us have earned over US $2 billion,” read the farewell post by the eponymous GandCrab identity on the cybercrime forum Exploit[.]in, where the group recruited many of its distributors. “Our name became a generic term for ransomware in the underground. The average weekly income of the project was equal to US $2.5 million.”

The message continued:

“We ourselves have earned over US $150 million in one year. This money has been successfully cashed out and invested in various legal projects, both online and offline ones. It has been a pleasure to work with you. But, like we said, all things come to an end. We are getting a well-deserved retirement. We are a living proof that you can do evil and get off scot-free. We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit.”

Evil indeed, when one considers the damage inflicted on so many individuals and businesses hit by GandCrab — easily the most rapacious and predatory malware of 2018 and well into 2019.

The GandCrab identity on Exploit[.]in periodically posted updates about victim counts and ransom payouts. For example, in late July 2018, GandCrab crowed that a single affiliate of the ransomware rental service had infected 27,031 victims in the previous month alone, receiving about $125,000 in commissions.

The following month, GandCrab bragged that the program in July 2018 netted almost 425,000 victims and extorted more than one million dollars worth of cryptocurrencies, much of which went to affiliates who helped to spread the infections.

Russian security firm Kaspersky Lab estimated that by the time the program ceased operations, GandCrab accounted for up to half of the global ransomware market.

ONEIILK2

It remains unclear how many individuals were active in the core GandCrab malware development team. But KrebsOnSecurity located a number of clues that point to the real-life identity of a Russian man who appears to have been put in charge of recruiting new affiliates for the program.

In November 2018, a GandCrab affiliate posted a screenshot on the Exploit[.]in cybercrime forum of a private message between himself and a forum member known variously as “oneiilk2” and “oneillk2” that showed the latter was in charge of recruiting new members to the ransomware earnings program.

Oneiilk2 also was a successful GandCrab affiliate in his own right. In May 2018, he could be seen in multiple Exploit[.]in threads asking for urgent help obtaining access to hacked businesses in South Korea. These solicitations go on for several weeks that month — with Oneiilk2 saying he’s willing to pay top dollar for the requested resources. At the same time, Oneiilk2 can be seen on Exploit asking for help figuring out how to craft a convincing malware lure using the Korean alphabet.

Later in the month, Oneiilk2 says he no longer needs assistance on that request. Just a few weeks later, security firms began warning that attackers were staging a spam campaign to target South Korean businesses with version 4.3 of GandCrab.

HOTTABYCH

When Oneiilk2 registered on Exploit in January 2015, he used the email address hottabych_k2@mail.ru. That email address and nickname had been used since 2009 to register multiple identities on more than a half dozen cybercrime forums.

In 2010, the hottabych_k2 address was used to register the domain name dedserver[.]ru, a site which marketed dedicated Web servers to individuals involved in various cybercrime projects. That domain registration record included the Russian phone number +7-951-7805896, which mail.ru’s password recovery function says is indeed the phone number used to register the hottabych_k2 email account.

At least four posts made in 2010 to the hosting review service makeserver.ru advertise Dedserver and include images watermarked with the nickname “oneillk2.”

Dedserver also heavily promoted a virtual private networking (VPN) service called vpn-service[.]us to help users obfuscate their true online locations. It’s unclear how closely connected these businesses were, although a cached copy of the Dedserver homepage at Archive.org from 2010 suggests the site’s owners claimed it as their own.

Vpn-service[.]us was registered to the email address sec-service@mail.ru by an individual who used the nickname (and sometimes password) — “Metall2” — across multiple cybercrime forums.

Around the same time the GandCrab affiliate program was kicking into high gear, Oneiilk2 had emerged as one of the most trusted members of Exploit and several other forums. This was evident by measuring the total “reputation points” assigned to him, which are positive or negative feedback awarded by other members with whom the member has previously transacted.

In late 2018, Oneiilk2 was one of the top 20 highest-rated members among thousands of denizens on the Exploit forum, thanks in no small part to his association with the GandCrab enterprise.

Searching on Oneiilk2’s registration email address hottabych_k2@mail.ru via sites that track hacked or leaked databases turned up some curious results. Those records show this individual routinely re-used the same password across multiple accounts: 16061991.

For instance, that email address and password shows up in hacked password databases for an account “oneillk2” at zismo[.]biz, a Russian-language forum dedicated to news about various online money-making affiliate programs.

In a post made on Zismo in 2017, Oneiilk2 states that he lives in a small town with a population of around 400,000, and is engaged in the manufacture of furniture.

HEAVY METALL

Further digging revealed that the hottabych_k2@mail.ru address had also been used to register at least two accounts on the social networking site Vkontakte, the Russian-language equivalent of Facebook.

One of those accounts was registered to a “Igor Kashkov” from Magnitogorsk, Russia, a metal-rich industrial town in southern Russia of around 410,000 residents which is home to the largest iron and steel works in the country.

The Kashkov account used the password “hottabychk2,” the phone number 890808981338, and at one point provided the alternative email address “prokopenko_k2@bk.ru.” However, this appears to have been simply an abandoned account, or at least there are only a couple of sparse updates to the profile.

The more interesting Vkontakte account tied to the hottabych_k2@mail.ru address belongs to a profile under the name “Igor Prokopenko,” who says he also lives in Magnitogorsk. The Igor Prokopenko profile says he has studied and is interested in various types of metallurgy.

There is also a Skype voice-over-IP account tied to an “Igor” from Magnitogorsk whose listed birthday is June 16, 1991. In addition, there is a fairly active Youtube account dating back to 2015 — youtube.com/user/Oneillk2 — that belongs to an Igor Prokopenko from Magnitogorsk.

That Youtube account includes mostly short videos of Mr. Prokopenko angling for fish in a local river and diagnosing problems with his Lada Kalina — a Russian-made automobile line that is quite common across Russia. An account created in January 2018 using the Oneillk2 nickname on a forum for Lada enthusiasts says its owner is 28 years old and lives in Magnitogorsk.

Sources with the ability to check Russian citizenship records identified an Igor Vladimirovich Prokopenko from Magnitogorsk who was born on June 16, 1991.  Recall that “16061991” was the password used by countless online accounts tied to both hottabych_k2@mail.ru and the Oneiilk2/Oneillk2 identities.

To bring all of the above research full circle, Vkontakte’s password reset page shows that the Igor Prokopenko profile is tied to the mobile phone number +7-951-7805896, which is the same number used to set up the email account hottabych_k2@mail.ru almost 10 years ago.

Mr. Prokopenko did not respond to multiple requests for comment.

It is entirely possible that whoever is responsible for operating the GandCrab affiliate program developed an elaborate, years-long disinformation campaign to lead future would-be researchers to an innocent party.

At the same time, it is not uncommon for many Russian malefactors to do little to hide their true identities — at least early on in their careers — perhaps in part because they perceive that there is little likelihood that someone will bother connecting the dots later on, or because maybe they don’t fear arrest and/or prosecution while they reside in Russia. Anyone doubtful about this dynamic would do well to consult the Breadcrumbs series on this blog, which used similar methods as described above to unmask dozens of other major malware purveyors.

It should be noted that the GandCrab affiliate program took measures to prevent the installation of its ransomware on computers residing in Russia or in any of the countries that were previously part of the Soviet Union — referred to as the Commonwealth of Independent States and including Armenia, Belarus, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan. This is a typical precaution taken by cybercriminals running malware operations from one of those countries, as they try to avoid making trouble in their own backyards that might attract attention from local law enforcement.

KrebsOnSecurity would like to thank domaintools.com (an advertiser on this site), as well as cyber intelligence firms Intel471, Hold Security and 4IQ for their assistance in researching this post.

Update, July 9, 2:53 p.m. ET: Mr. Prokopenko responded to my requests for comment, although he declined to answer any of the questions I put to him about the above findings. His response was simply, “Hey. You’re wrong. I’m not doing this.” Silly me.

CryptogramRansomware Recovery Firms Who Secretly Pay Hackers

ProPublica is reporting on companies that pretend to recover data locked up by ransomware, but just secretly pay the hackers and then mark up the cost to the victims.

Worse Than FailureCodeSOD: The Bogus Animation

Animations have become such an omnipresent part of our UI designs that we tend to notice them only when they're bad.

Ben is working on an iOS application which appeared to have a "bad" animation. In this case, it's bad because it's slow. How slow? Well, they have a table view with ten items in it, and the items should quickly tween to their new state: position, text, and colors could all change in this process. And it was taking four seconds.

Four seconds to update ten items is a lot. Now, their application does have a lot of animations, and the first suspicion was that there was some ugly interaction between animations that was causing it to take a long time. But upon digging in, Ben discovered it wasn't the animations at all.

- (NSArray<NSString *> *)_combineTitles:(NSArray<NSString *> *)oldTitles with:(NSArray<NSString *> *)newTitles {
    NSMutableSet<NSString *> *mergedSet = [NSMutableSet setWithArray:oldTitles];
    [mergedSet unionSet:[NSSet setWithArray:newTitles]];
    NSMutableArray<NSString *> *combinedTitles = [mergedSet.allObjects mutableCopy];
    // TODO - this is a terrible method!
    // We should be able to properly determine the/a correct order of combinedTitles, not
    // simply trying random orders until we find the right one
    // Note unless the assumption stated in _reloadTableDataAnimatedAdvanced is met, this could loop forever
    // A better way to do this would be to consider the diff between oldTitles and newTitles
    // E.g. the diff for ABC -> CDE has insertions at 1, 2 and deletions at 0, 1 and therefore the diff is based
    // on the deletions being performed first - instead we want to construct the intermediate ABCDE which has insertions at 3, 4
    while (YES) {
        DiffUpdate *diffUpdate1 = [DifferUtils getDiffWithOldData:oldTitles newData:combinedTitles];
        DiffUpdate *diffUpdate2 = [DifferUtils getDiffWithOldData:combinedTitles newData:newTitles];
        if (diffUpdate1.moves.count == 0 && diffUpdate2.moves.count == 0) {
            return [NSArray arrayWithArray:combinedTitles];
        }
        // Fisher-Yates shuffle
        // https://stackoverflow.com/a/33840745
        for (NSUInteger i = combinedTitles.count; i > 1; i--) {
            [combinedTitles exchangeObjectAtIndex:i - 1 withObjectAtIndex:arc4random_uniform((u_int32_t)i)];
        }
    }
    return nil;
}

Obj-C is a bit odd to read, but this defines a method that combines a list of old item titles with a list of new item titles. The resulting output should be sorted based on the insertion order of the items. And how do we do this in this code?

Well, we check if the output list is in the right order by taking a diff between the output and the two inputs. If it's in the right order, great, return our results. If it's not… we do a Fisher-Yates shuffle, which is to say, this is an actual bogosort in the wild.
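For anyone who hasn't met bogosort before, the whole algorithm fits in a few lines. Here's a minimal Python sketch of the same shuffle-until-ordered idea (random.shuffle is itself a Fisher-Yates implementation):

```python
import random

def bogosort(items):
    """Shuffle until sorted; expected work grows like n * n!."""
    items = list(items)
    # Keep shuffling until no adjacent pair is out of order.
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)  # Fisher-Yates shuffle, as in the Obj-C code
    return items

print(bogosort([3, 1, 2]))  # tolerable only because n is tiny
```

The Obj-C version is strictly worse than even this: its "sorted" check requires two diff computations per attempt, and the target ordering may not be reachable at all, hence the comment about looping forever.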

At least the documentation is useful. Not only does it include the accurate This is a terrible method, but goes on to lay out exactly what a better method might look like.

That's not the best part of the comment, though. It's this one:

// Note unless the assumption stated in _reloadTableDataAnimatedAdvanced is met, this could loop forever

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.


CryptogramFriday Squid Blogging: Squid Cars

Jalopnik asks the important question: "If squids ruled the earth, what would their cars be like?"

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramApplied Cryptography is Banned in Oregon Prisons

My Applied Cryptography is on a list of books banned in Oregon prisons. It's not me -- and it's not cryptography -- it's that the prisons ban books that teach people to code. The subtitle is "Algorithms, Protocols, and Source Code in C" -- and that's the reason.

My more recent Cryptography Engineering is a much better book for prisoners, anyway.

CryptogramResearch on Human Honesty

New research from Science: "Civic honesty around the globe":

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.

Worse Than FailureClassic WTF: Working Around, Over and Through the Process

It's still a holiday weekend in the US; after playing with fireworks yesterday, most of us have to spend today trying to find the fingers we lost. There are no fireworks in this classic story, but there may be some karma… Original --Remy

When Kevin landed a job at Townbank in the late 1980s, he came face-to-face with the same thing that thousands of newly minted developers had encountered before and since – there is more to being a corporate programmer than just writing code – there’s the process.

Second only, perhaps, to the strict rules commanded by the world’s religions, the process keeps the code consistent. Glory to the process – praised be the process - the process is good, the process should always be followed, and above all, the process is good for you!

For nearly everybody, the process isn’t all that bad. It just takes some getting used to. Fill out a form, get a sign off, file the test plan, write a build document - all in a day's work. However, as Kevin would soon find out, at Townbank, there were some processes both veterans and new grads alike couldn’t adjust to.

The Shiva Factor

Kevin’s first assignment was to work within a group involved with Townbank’s IT department’s largest project to date - their huge migration from their aging mainframe to a row of shiny new VAX systems. On paper, the process looked good - consultants met with the business to identify which systems would be migrated, specs would be written, developers would be assigned, QA would confirm that the new system worked like the old one and the code would be promoted into production.

When the project managers first set up the process, the expectation was that the extra steps would only ever be responsible for a small fraction of the total cost or amount of time necessary for implementing a feature, and at first, this was the case. However, as the project continued and more environments were finally brought online, Kevin and his fellow developers knew to add an extra bit of padding to their estimates. Officially, the developers called it many names – systems integration testing, server configuration, environment compatibility testing, but only in hushed voices could it be called out by its true name: “The Shiva Factor”

Serious Business Indeed!

Now, it wasn’t that Shiva was an incompetent or inexperienced system administrator – not by a long shot. In fact, at Townbank, when the decision was made to migrate away from the mainframe to VAX, Shiva’s name was at the top of a very short list of individuals who should manage the infrastructure migration. Shiva took his responsibility very seriously and enacted several of his own policies to address what he felt to be loopholes in the process. For example, every morning, before any developers, analysts, and QA staff could sign in to an environment, they first had to literally sign in on a clipboard on Shiva’s desk, to confirm their physical presence. Also, feeling that the process did not track developer actions to a high enough granularity, Shiva arranged source control security so that every code check-in and promotion between environments required a write-up with two signatures and had to be performed using his user id. At his terminal.

On quiet days, a quick change could be turned around in one day; however, the quiet days were often holidays or weekends. Frustrated developers took their case to upper management, arguing that the policies were hindering progress and seemed to be completely useless. In response, management shrugged – Shiva had made his case at the beginning of the project: the environments had to be kept secure and free from cross-contamination by other instances and developer incompetence because, after all, the VAX servers were still very new and even many of the senior developers were not entirely up to speed.

The masses grumbled and cursed under their breath, but rather than rising up and overthrowing Shiva and ending his iron-fisted reign, everybody just kind of sucked it up and moved forward. Albeit annoying, the process continued in spite of Shiva’s efforts, however there was one situation that Shiva seemingly neglected – what if he was unavailable?

Programmers' Little Helper

Though Kevin’s terminal showed that he was on a clone of the Production environment, his tell-tale customer names “Nosmo King” and “Joe Blow” made him realize that he had made a grave error – the application was connecting to the Development environment’s database by mistake, and it was to be tested by the QA team later that afternoon. Ordinarily, fixing this was a piece of cake: make a few changes to the config file in the Development environment and re-promote. However, as fate would have it, Shiva was in a day-long meeting and would not be available until the next day.

Hoping that maybe Shiva had left his meeting early, Kevin stopped by Shiva’s desk but was met with only an empty chair. However, a detail about Shiva’s keyboard stood out: the letters A, S, V, H, and I were all worn away. Kevin knew that Shiva was drunk with power, but was he so narcissistic as to type his name in over and over? …or perhaps it was a hint. For fun, Kevin navigated to a command prompt and typed in “shiva” for both the username and password. Expecting Shiva to sneak up on him at any moment, Kevin pressed enter and was shocked and amazed to discover that he was now logged in.

This was amazing. This was huge. Kevin knew he had to find someone to share his discovery with; however, after tracking down one of the graybeards who had mentored him earlier in his tenure and relaying his discovery, the reaction was not at all what Kevin expected.

As it turned out, Shiva’s username and password combination were a favorite Townbank “secret” that carried over from Shiva’s days as a mainframe admin.

“To keep Shiva from catching on,” the more senior developer explained, “we would play Shiva’s game once every other promotion.”

“Also, for future reference,” he continued, “if you want to avoid getting caught, and ruining it for everybody else, you might want to log in from your own terminal and NOT from his desk.”


Sam VargheseWhatever happened to the ABC’s story of the century?

In the first three weeks of June last year, the ABC’s Sarah Ferguson presented a three-part saga on the channel’s Four Corners program, which the ABC claimed was the “story of the century”.

It was a rehashing of all the claims against US President Donald Trump, which the American TV stations had gone over with a fine-toothed comb but which Ferguson seemed convinced still had something to chew over.

At the time, a special counsel, former FBI chief Robert Mueller, was conducting an investigation into claims that Trump colluded with Russia to win the presidential election.

Earlier this year, Mueller announced the results of his probe: zilch. Zero. Nada. Nothing. A big cipher.

Given that Ferguson echoed all the same claims by interviewing a number of rather dubious individuals, one would think that it was time for a mea culpa – that is, if one had even a semblance of integrity, a shred of honesty in one’s being.

But Ferguson seems to have disappeared off the face of the earth. The ABC has been silent about it too. Given that she and her entourage spent the better part of six weeks traipsing the streets and corridors of power in the US and the UK, considerable funds would have been spent.

This, by an organisation that is always weeping about its budget cuts. One would think that such a publicly-funded organisation would be a little more circumspect and not allow anyone to indulge in such an exercise of vanity.

If Ferguson had unearthed even one morsel of truth, one titbit of information that the American media had not found, then one would not be writing this. But she did nothing of the sort; she just raked over all the old bones.

One hears Ferguson is now preparing a program on the antics that the government indulged in last year by dumping its leader, Malcolm Turnbull. This issue has also been done to death and there has already been a two-part investigation by the Sky News’ presenter David Speers, a fine reporter. There has been one book published, by the former political aide Niki Savva, and more are due.

It looks like Ferguson will again be acting in the manner of a dog that returns to its own vomit. She appears to have cultivated considerable skill in this art.


CryptogramUS Journalist Detained When Returning to US

Pretty horrible story of a US journalist who had his computer and phone searched at the border when returning to the US from Mexico.

After I gave him the password to my iPhone, Moncivias spent three hours reviewing hundreds of photos and videos and emails and calls and texts, including encrypted messages on WhatsApp, Signal, and Telegram. It was the digital equivalent of tossing someone's house: opening cabinets, pulling out drawers, and overturning furniture in hopes of finding something -- anything -- illegal. He read my communications with friends, family, and loved ones. He went through my correspondence with colleagues, editors, and sources. He asked about the identities of people who have worked with me in war zones. He also went through my personal photos, which I resented. Consider everything on your phone right now. Nothing on mine was spared.

Pomeroy, meanwhile, searched my laptop. He browsed my emails and my internet history. He looked through financial spreadsheets and property records and business correspondence. He was able to see all the same photos and videos as Moncivias and then some, including photos I thought I had deleted.

The EFF has extensive information and advice about device searches at the US border, including a travel guide:

If you are a U.S. citizen, border agents cannot stop you from entering the country, even if you refuse to unlock your device, provide your device password, or disclose your social media information. However, agents may escalate the encounter if you refuse. For example, agents may seize your devices, ask you intrusive questions, search your bags more intensively, or increase by many hours the length of detention. If you are a lawful permanent resident, agents may raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry.

The most important piece of advice is to think about this all beforehand, and plan accordingly.

Worse Than FailureRepresentative Line: Classic WTF: The Backup Snippet

It's "Independence Day" here in the US, which is the day in which developers celebrate their independence from DBAs and switch everything over to NoSQL, no matter what the cost. Or something like that, the history is a little fuzzy. But it's a holiday here, so in honor of that, here's a related story. Original --Remy

Generally speaking, Andrew tries his best to avoid the DBA team. It's not just because database administrators tend to be a unique breed (his colleagues were certainly no exception), but because of the "things" that he'd heard about the team. The sort of "things" that keep developers up at night and make them regret not becoming an accountant.

One day, while debugging an issue with their monitoring scripts, Andrew had no choice but to check with Thom, a member of Team DBA. It turned out that one of the DBAs had recently updated their database backup script, but Thom wasn't really sure who did it, why it was done, or what it looked like before. So, he just sent Andrew the entire backup script.

Following is a single line of code, line-wrapped by yours truly, that should give a fair idea of what the script was like.

file=$WORKSPACE/ewprd1_$DATECODE.dmp,$WORKSPACE/ewprd2_$DATECODE.dm
p,$WORKSPACE/ewprd3_$DATECODE.dmp,$WORKSPACE/ewprd4_$DATECODE.dmp,$
WORKSPACE/ewprd5_$DATECODE.dmp,$WORKSPACE/ewprd6_$DATECODE.dmp,$WOR
KSPACE/ewprd7_$DATECODE.dmp,$WORKSPACE/ewprd8_$DATECODE.dmp,$WORKSP
ACE/ewprd9_$DATECODE.dmp,$WORKSPACE/ewprd10_$DATECODE.dmp,$WORKSPAC
E/ewprd11_$DATECODE.dmp,$WORKSPACE/ewprd12_$DATECODE.dmp,$WORKSPACE
/ewprd13_$DATECODE.dmp,$WORKSPACE/ewprd14_$DATECODE.dmp,$WORKSPACE/
ewprd15_$DATECODE.dmp,$WORKSPACE/ewprd16_$DATECODE.dmp,$WORKSPACE/e
wprd17_$DATECODE.dmp,$WORKSPACE/ewprd18_$DATECODE.dmp,$WORKSPACE/ew
prd19_$DATECODE.dmp,$WORKSPACE/ewprd20_$DATECODE.dmp,$WORKSPACE/ewp
rd21_$DATECODE.dmp,$WORKSPACE/ewprd22_$DATECODE.dmp,$WORKSPACE/ewpr
d23_$DATECODE.dmp,$WORKSPACE/ewprd24_$DATECODE.dmp,$WORKSPACE/ewprd
25_$DATECODE.dmp,$WORKSPACE/ewprd26_$DATECODE.dmp,$WORKSPACE/ewprd2
7_$DATECODE.dmp,$WORKSPACE/ewprd28_$DATECODE.dmp,$WORKSPACE/ewprd29
_$DATECODE.dmp,$WORKSPACE/ewprd30_$DATECODE.dmp,$WORKSPACE/ewprd31_
$DATECODE.dmp,$WORKSPACE/ewprd32_$DATECODE.dmp,$WORKSPACE/ewprd33_$
DATECODE.dmp,$WORKSPACE/ewprd34_$DATECODE.dmp,$WORKSPACE/ewprd35_$D
ATECODE.dmp,$WORKSPACE/ewprd36_$DATECODE.dmp,$WORKSPACE/ewprd37_$DA
TECODE.dmp,$WORKSPACE/ewprd38_$DATECODE.dmp,$WORKSPACE/ewprd39_$DAT
ECODE.dmp,$WORKSPACE/ewprd40_$DATECODE.dmp,$WORKSPACE/ewprd41_$DATE
CODE.dmp,$WORKSPACE/ewprd42_$DATECODE.dmp,$WORKSPACE/ewprd43_$DATEC
ODE.dmp,$WORKSPACE/ewprd44_$DATECODE.dmp,$WORKSPACE/ewprd45_$DATECO
DE.dmp,$WORKSPACE/ewprd46_$DATECODE.dmp,$WORKSPACE/ewprd47_$DATECOD
E.dmp,$WORKSPACE/ewprd48_$DATECODE.dmp,$WORKSPACE/ewprd49_$DATECODE
.dmp,$WORKSPACE/ewprd50_$DATECODE.dmp,$WORKSPACE/ewprd51_$DATECODE.
dmp,$WORKSPACE/ewprd52_$DATECODE.dmp,$WORKSPACE/ewprd53_$DATECODE.d
mp,$WORKSPACE/ewprd54_$DATECODE.dmp,$WORKSPACE/ewprd55_$DATECODE.dm
p,$WORKSPACE/ewprd56_$DATECODE.dmp,$WORKSPACE/ewprd57_$DATECODE.dmp
,$WORKSPACE/ewprd58_$DATECODE.dmp,$WORKSPACE/ewprd59_$DATECODE.dmp,
$WORKSPACE/ewprd60_$DATECODE.dmp,$WORKSPACE/ewprd61_$DATECODE.dmp,$
WORKSPACE/ewprd62_$DATECODE.dmp,$WORKSPACE/ewprd63_$DATECODE.dmp,$W
ORKSPACE/ewprd64_$DATECODE.dmp,$WORKSPACE/ewprd65_$DATECODE.dmp,$WO
RKSPACE/ewprd66_$DATECODE.dmp,$WORKSPACE/ewprd67_$DATECODE.dmp,$WOR
KSPACE/ewprd68_$DATECODE.dmp,$WORKSPACE/ewprd69_$DATECODE.dmp,$WORK
SPACE/ewprd70_$DATECODE.dmp,$WORKSPACE/ewprd71_$DATECODE.dmp,$WORKS
PACE/ewprd72_$DATECODE.dmp,$WORKSPACE/ewprd73_$DATECODE.dmp,$WORKSP
ACE/ewprd74_$DATECODE.dmp,$WORKSPACE/ewprd75_$DATECODE.dmp,$WORKSPA
CE/ewprd76_$DATECODE.dmp,$WORKSPACE/ewprd77_$DATECODE.dmp,$WORKSPAC
E/ewprd78_$DATECODE.dmp,$WORKSPACE/ewprd79_$DATECODE.dmp,$WORKSPACE
/ewprd80_$DATECODE.dmp,$WORKSPACE/ewprd81_$DATECODE.dmp,$WORKSPACE/
ewprd82_$DATECODE.dmp,$WORKSPACE/ewprd83_$DATECODE.dmp,$WORKSPACE/e
wprd84_$DATECODE.dmp,$WORKSPACE/ewprd85_$DATECODE.dmp,$WORKSPACE/ew
prd86_$DATECODE.dmp,$WORKSPACE/ewprd87_$DATECODE.dmp,$WORKSPACE/ewp
rd88_$DATECODE.dmp,$WORKSPACE/ewprd89_$DATECODE.dmp,$WORKSPACE/ewpr
d90_$DATECODE.dmp,$WORKSPACE/ewprd91_$DATECODE.dmp,$WORKSPACE/ewprd
92_$DATECODE.dmp,$WORKSPACE/ewprd93_$DATECODE.dmp,$WORKSPACE/ewprd9
4_$DATECODE.dmp,$WORKSPACE/ewprd95_$DATECODE.dmp,$WORKSPACE/ewprd96
_$DATECODE.dmp,$WORKSPACE/ewprd97_$DATECODE.dmp,$WORKSPACE/ewprd98_
$DATECODE.dmp,$WORKSPACE/ewprd99_$DATECODE.dmp,$WORKSPACE/ewprd100_
$DATECODE.dmp,$WORKSPACE/ewprd101_$DATECODE.dmp,$WORKSPACE/ewprd102
_$DATECODE.dmp,$WORKSPACE/ewprd103_$DATECODE.dmp,$WORKSPACE/ewprd10
4_$DATECODE.dmp,$WORKSPACE/ewprd105_$DATECODE.dmp,$WORKSPACE/ewprd1
06_$DATECODE.dmp,$WORKSPACE/ewprd107_$DATECODE.dmp,$WORKSPACE/ewprd
108_$DATECODE.dmp,$WORKSPACE/ewprd109_$DATECODE.dmp,$WORKSPACE/ewpr
d110_$DATECODE.dmp,$WORKSPACE/ewprd111_$DATECODE.dmp,$WORKSPACE/ewp
rd112_$DATECODE.dmp,$WORKSPACE/ewprd113_$DATECODE.dmp,$WORKSPACE/ew
prd114_$DATECODE.dmp,$WORKSPACE/ewprd115_$DATECODE.dmp,$WORKSPACE/e
wprd116_$DATECODE.dmp,$WORKSAPCE/ewprd117_$DATECODE.dmp,$WORKSPACE/
ewprd118_$DATECODE.dmp

Andrew eventually found the problem, and offered a helpful tip to Thom for shortening up their script:

file="$WORKSPACE/ewprd1_$DATECODE.dmp";
for ((i=2;i<119;i++)); do {
   file="$file,$WORKSPACE/ewprd${i}_$DATECODE.dmp";
}; done

It accomplished the exact same thing in four little lines. Thom passed on the suggestion, however, perhaps because it would have increased the line count... or, most likely, to keep the developers out.
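The same expansion is easy to sanity-check outside the shell. A rough Python equivalent (the workspace and datecode values here are stand-ins, not from the real script) builds the identical 118-entry list:

```python
# Stand-in values; the real script reads these from its environment.
workspace = "/backups"
datecode = "20190704"

# Join the 118 dump file names exactly as the one-liner does.
files = ",".join(f"{workspace}/ewprd{i}_{datecode}.dmp" for i in range(1, 119))

print(files.count(",") + 1)   # number of names in the list: 118
```

In bash itself, brace expansion gets close for free: `echo $WORKSPACE/ewprd{1..118}_$DATECODE.dmp` produces the same 118 names, albeit space-separated rather than comma-joined.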

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.


CryptogramDigital License Plates

They're a thing:

Developers say digital plates utilize "advanced telematics" -- to collect tolls, pay for parking and send out Amber Alerts when a child is abducted. They also help recover stolen vehicles by changing the display to read "Stolen," thereby alerting everyone within eyeshot.

This makes no sense to me. The numbers are static. License plates being low-tech is a feature, not a bug.

Worse Than FailureCodeSOD: Answer the Questions on this Test

Jake works in Python. Python is a very flexible language, and Jake has some co-workers who like to exploit it as much as possible.

Specifically, they’re notorious for taking advantage of how any symbol can be redefined, as needed. Is int a built-in function? Yes. But what if you want to have a variable called int? No problem! Just go ahead and do it. Python won’t mind if you do int = int(5).

Sure, any linter will complain about the possible confusion, but do you really think this developer checks the linter output?
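To see the failure mode, here's a tiny, self-contained demonstration (not from the codebase in question) of what rebinding a builtin actually does:

```python
# Rebinding a builtin name works without complaint...
int = int("5")        # the name `int` is now the integer 5, not the type
try:
    int("6")          # ...until you try to use the builtin again
except TypeError as err:
    print(err)        # 'int' object is not callable
del int               # deleting the shadow makes the builtin visible again
print(int("6") + 1)   # back to normal: prints 7
```

The shadow only lasts as long as the binding is in scope, which is exactly why bugs like this are so easy to miss in short test methods.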

Now, this particular block of code happens to be a test, which Jake provided because it’s one of the more head-scratching versions of this pattern, but it’s representative of this approach.

def test_parameters(self, name, value, float = True):
    fcb = read_parameters(SERVER_ADDRESS)
    type = ''
    if valid_key():
        type = 'float'
        if type == 'float':
            self.assertTrue(float)
        else:
            self.assertFalse(float)
    if float:
        self.assertEqual(global_name, name)

    default = fcb.get_default(name)
    self.assertEqual(default, value)
    range = fcb.get_range(name)
    ...

type, float and range are all built-in types in Python. Using variables with those names is not a great choice, but honestly, that’s little stuff compared to the general weirdness of this test.

First, we have the pattern where we set a variable and then immediately check its value. It’s probably a case where they indented incorrectly, but honestly, it’s a little hard to be sure.

global_name is one of the rare things that has a vaguely accurate name, as it’s actually a global variable. In a test. That’s definitely a bad idea.

But the real problem, and the real WTF here, is: what is this test about? Are we checking whether or not parameters are properly assigned their type? Are we checking that the name matches up with a global variable? How is that global variable getting set? Does it tie into the mysterious no-parameter valid_key method? If this is what one test looks like, what does the rest of the code look like?

There are a lot of questions that this block opens up, but the real question is: do I want the answer to any of those questions?

Probably not.



Cory DoctorowFake News is an Oracle

In my latest podcast, I read my new Locus column: Fake News is an Oracle. For many years, I’ve been arguing that while science fiction can’t predict the future, it can reveal important truths about the present: the stories writers tell reveal their hopes and fears about technology, while the stories that gain currency in our discourse and our media markets tell us about our latent societal aspirations and anxieties.

Fake news is another important barometer of our societal pressure: when we talk about conspiratorial thinking, we tend to do so ideologically, asking ourselves how it is that the same old conspiracy theories have become so much more convincing in recent years (anti-vax is as old as vaccination, after all), and treating the proponents of conspiracies as though they had acquired the ability to convince people by sharpening their arguments (possibly with the assistance of machine-learning systems).

But when you actually pay attention to the things that conspiracy-pushers say, there’s no evidence that they’re particularly convincing. Instead of ideological answers to the spread of conspiracies, we can look for material answers for the change in our public discourse.

Fake news, in this light, reveals important truths about what our material conditions have led us to fear (that the ship is sinking and there aren’t enough lifeboats for all of us) and hope (that we can get a seat in the lifeboat if we help the powerful and ruthless push other people out).

Ten years ago, if you came home from the doctor’s with a prescription for oxy, and advice that they were not to be feared for their addictive potential, and an admonition that pain was “the fourth vital sign,” and its under-treatment was a great societal cruelty, you might have met someone who said that this was all bullshit, that you were being set up to be murdered by a family of ruthless billionaires whose watchdog had switched sides.

You might have called that person an “opioid denier.”

Today, we worry that anti-vaxers represent the resurgence of long-dormant epidemics. Tomorrow, we may find that they presaged an epidemic of collapsed trust in our shared ability to determine the truth.

MP3

(Image: Todd Dailey, CC-BY-SA)

CryptogramGoogle Releases Basic Homomorphic Encryption Tool

Google has released an open-source cryptographic tool: Private Join and Compute. From a Wired article:

Private Join and Compute uses a 1970s methodology known as "commutative encryption" to allow data in the data sets to be encrypted with multiple keys, without it mattering which order the keys are used in. This is helpful for multiparty computation, where you need to apply and later peel away multiple layers of encryption without affecting the computations performed on the encrypted data. Crucially, Private Join and Compute also uses methods first developed in the '90s that enable a system to combine two encrypted data sets, determine what they have in common, and then perform mathematical computations directly on this encrypted, unreadable data through a technique called homomorphic encryption.
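Commutative encryption is easy to demonstrate with a toy scheme, Pohlig–Hellman-style exponentiation modulo a shared prime. To be clear, this sketch illustrates the property only; it is not the construction Google actually ships:

```python
# Toy commutative cipher: E_k(m) = m^k mod P, with gcd(k, P - 1) == 1.
# Since (m^a)^b == (m^b)^a (mod P), two parties can layer encryptions
# in either order and still compare doubly encrypted values.
P = (1 << 127) - 1          # a Mersenne prime

def encrypt(m, k):
    return pow(m, k, P)

def decrypt(c, k):
    # The modular inverse of the exponent undoes m^k (Python 3.8+).
    return pow(c, pow(k, -1, P - 1), P)

a, b = 65537, 257           # both coprime to P - 1
m = 123456789
layered = encrypt(encrypt(m, a), b)
assert layered == encrypt(encrypt(m, b), a)    # order doesn't matter
assert decrypt(decrypt(layered, b), a) == m    # peel off layers in any order
```

This order-independence is what lets two parties each apply their own key to both data sets and then compare the doubly encrypted identifiers without ever seeing each other's plaintexts.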

True homomorphic encryption isn't possible, and my guess is that it will never be feasible for most applications. But limited application tricks like this have been around for decades, and sometimes they're useful.

Boing Boing article.

Worse Than FailureCodeSOD: The Wizard of Speed and Time

Christopher started a new job as a “full-stack” developer. Unfortunately, most of the developers are still of the “why do you need anything other than jQuery” school of front-end development, despite efforts to transition their UIs to Vue.

This meant that Christopher didn’t do much backend, and ended up being the mid-level front-end dev, in practice if not in job title.

One of the Vue components was a “Wizard” style component, which was designed to be highly configurable. You supply a JSON description of the Wizard process, and it would generate the UI to navigate you screen-by-screen. Since Christopher was new at the organization, he wanted to understand how the Wizard worked, so he started poking at the code.

He got as far as the stepBack function before deciding he needed to rewrite it from scratch. Christopher assumed that stepBack could be as simple as popping the last element off the array of previous steps, and then update what’s currently displayed. That, however, isn’t what it did at all.

stepBack () {
  let pastItems = []
  for (var i in this.pastStepsIds) {
    if (pastItems.length === 0) {
      pastItems.push(this.pastStepsIds[i])
    }
    for (var j in pastItems) {
      if (pastItems[j] === this.pastStepsIds[i]) {
        continue
      }
      pastItems.push(this.pastStepsIds[i])
    }
  }
  if (pastItems.length) {
    this.showAnswer = false
    this.turnOffSteps(this.stepAtual)
    this.currentStep = this.changeTextStep(this.getStepById(pastItems[pastItems.length - 1]))
    this.turnOnStep(true)
  }
}

Step IDs were named things like "welcome", "more-info", and "collect-user-info". pastStepsIds would contain an array of all of those.

What this code does is take the pastStepsIds and copy it into a local pastItems array. Except it’s not a straightforward copy, because of the inner for loop, which examines each item in the local array; if the current item in the pastStepsIds array equals that item, continue skips ahead to the next iteration of the inner loop. Otherwise, the current item gets pushed yet again.

The result is that this doesn’t copy the array, but exponentiates it. If the pastStepsIds looked something like ["welcome", "more-info", "collect-user-info", "confirm"], the loops would create: ["welcome", "more-info", "collect-user-info", "collect-user-info", "confirm", "confirm", "confirm", "confirm"]. Each id gets pushed once for every entry already in the local array, so the duplicates double with each step.
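The blowup is easy to reproduce outside the component. Here is a standalone version of the same pair of loops (a plain function rather than a Vue method, with indexed loops so the growth of the local array is explicit; the behavior matches for distinct step ids):

```javascript
// Reproduction of the stepBack copy loops: each id gets pushed once for
// every entry already accumulated, so duplicates pile up exponentially.
function copyLikeTheWizard(pastStepsIds) {
  const pastItems = [];
  for (let i = 0; i < pastStepsIds.length; i++) {
    if (pastItems.length === 0) {
      pastItems.push(pastStepsIds[i]);
    }
    for (let j = 0; j < pastItems.length; j++) {
      if (pastItems[j] === pastStepsIds[i]) {
        continue; // skips only this j; the pushing resumes on the next one
      }
      pastItems.push(pastStepsIds[i]);
    }
  }
  return pastItems;
}

copyLikeTheWizard(["welcome", "more-info", "collect-user-info", "confirm"]);
// → ["welcome", "more-info", "collect-user-info", "collect-user-info",
//    "confirm", "confirm", "confirm", "confirm"]
```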

If that array has more than zero elements in it, we’ll return to the last step in the pastItems array.

Christopher spent a good bit of time thinking, “WHYYYYYYY?”. After some consideration, he decided that he didn’t deserve this code, and re-wrote it. In one day, he reimplemented the Wizard component, which probably saved him months of effort trying to fight with it in the future.
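For contrast, the pop-based logic Christopher originally expected is a few lines. A minimal standalone sketch (outside the Vue component, so the display bookkeeping from the original method is omitted; the function name is illustrative):

```javascript
// Hypothetical simple version: going back just means removing the most
// recent id from the history and navigating to it.
function previousStepId(pastStepsIds) {
  if (pastStepsIds.length === 0) return null; // nowhere to go back to
  return pastStepsIds.pop(); // removes and returns the last step id
}

const history = ["welcome", "more-info"];
previousStepId(history); // → "more-info", leaving history as ["welcome"]
```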

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

TED: Trailblazers: A night of talks in partnership with The Macallan

Curators David Biello and Chee Pearlman host TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater in New York City on June 27, 2019. (Photo: Ryan Lash / TED)

The event: TED Salon: Trailblazers, hosted by TED design and arts curator Chee Pearlman and TED science curator David Biello

When and where: Thursday, June 27, 2019, at the TED World Theater in New York City

The partner: The Macallan

Music: Sammy Rae & The Friends

The talks in brief:

Marcus Bullock, entrepreneur and justice reform advocate

  • Big idea: Over his eight-year prison sentence, Marcus Bullock was sustained by his mother’s love — and her photos of cheeseburgers. Years later, as an entrepreneur, he asked himself, “How can I help make it easier for other families to deliver love to their own incarcerated loved ones?”
    Communicating with prisoners is notoriously difficult and dominated by often-predatory telecommunications companies. By creating Flikshop — an app that allows inmates’ friends and families to send physical picture postcards into prison with the ease of texting — Marcus Bullock is bypassing the billion-dollar prison telecommunications industry and allowing hundreds of thousands of prisoners access to the same love and motivation that his mother gave him.
  • Quote of the talk: “I stand today with a felony, and just like millions of others around the country who also have that ‘F’ on their chest, just as my mom promised me many years ago, I wanted to show them that there was still life after prison.”

“It’s always better to collaborate with different communities rather than trying to speak for them,” says fashion designer Becca McCharen-Tran. She speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

Becca McCharen-Tran, founder and creative director of bodywear line CHROMAT

  • Big idea: Fashion designers have a responsibility to create inclusive designs suited for all gender expressions, ages, ability levels, ethnicities and races — and by doing so, they can shatter our limited definition of beauty.
    From day one in school, fashion designers are taught to create for a certain type of body, painting “thin, white, cisgender, able-bodied, young models as the ideal,” says fashion designer Becca McCharen-Tran. This has made body-shaming a norm for so many who strive to assimilate to the illusion of perfection in fashion imagery. McCharen-Tran believes creators are responsible for reimagining and expanding what a “bikini body” is. Her swimwear-focused clothing line CHROMAT celebrates beauty in all its forms, unapologetically countering the narrative with inclusive, explosive designs that welcome all of the uniqueness that comes with being human.
  • Quote of the talk: “Inclusivity means nothing if it’s only surface level … who is making the decisions behind the scenes is just as important. It’s imperative to include diverse decision-makers in the process, and it’s always better to collaborate with different communities rather than trying to speak for them.”

Amy Padnani, editor at the New York Times (or, as some of her friends call her, the “Angel of Death”)

  • Big idea: No one deserves to be overlooked in life, even in death.
    Padnani created “Overlooked,” a New York Times series that recognizes the stories of dismissed and marginalized people. Since 1851, the newspaper has published thousands of obituaries for individuals like heads of state and celebrities, but only a small number of those obits chronicled the lives of women and people of color. With “Overlooked,” Padnani forged a path for the publication to right the wrongs of the past while refocusing society’s lens on who’s considered important. Powerful in its ability to perspective-shift and honor those once ignored, “Overlooked” is also on track to become a Netflix series.
  • Fun fact: Prior to Padnani’s breakout project, the New York Times had yet to publish obituaries on notable individuals in history such as Ida B. Wells, Sylvia Plath, Ada Lovelace and Alan Turing.

Sam Van Aken shares the work behind the “Tree of 40 Fruit,” an ongoing series of hybridized fruit trees that grow multiple varieties of stone fruit. He speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

Sam Van Aken, multimedia contemporary artist, art professor at Syracuse University in New York and creator of the Tree of 40 Fruit

  • Big idea: Many of the fruits that have been grown in the US were originally brought there by immigrants. But due to industrialization, disease and climate change, American farmers produce just a fraction of the types available a century ago. Sam Van Aken has hand-grafted heirloom varieties of stone fruit — peaches, plums, apricots, nectarines and cherries — to make the “Tree of 40 Fruit.” What began as an art project to showcase their multi-hued blossoms has become a living archive of rare specimens and their histories; a hands-on (and delicious!) way to teach people about conservation and cultivation; and a vivid symbol of the need for biodiversity in order to ensure food security. Van Aken has created and planted his trees at museums and at people’s homes, and his largest project to date is the 50-tree Open Orchard — which, in total, will possess 200 varieties originated or historically grown in the region — on Governors Island in New York City.
  • Fun fact: One hundred years ago, there were over 2,000 varieties of peaches, nearly 2,000 varieties of plums, and nearly 800 named apple varieties grown in the United States.
  • Quote of the talk: “More than just food, embedded in these fruit is our culture. It’s the people who cared for and cultivated them, who valued them so much that they brought them here with them as a connection to their homes, and it’s the way they passed them on and shared them. In many ways, these fruit are our story.”

Removing his primetime-ready makeup, Lee Thomas shares his personal story of living with vitiligo. He speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

Lee Thomas, broadcast journalist

  • Big idea: Despite having a disease that left him vulnerable to stares in public, Lee Thomas discovered he could respond to ignorance and fear with engagement and dialogue.
    As a news anchor, Lee Thomas used makeup to hide the effects of vitiligo, an autoimmune disorder that left large patches of his skin without pigmentation. But without makeup, he was vulnerable to derision — until he decided to counter misunderstanding with eye contact and conversation. Ultimately, an on-camera story on his condition led him to start a support group and join others in celebrating World Vitiligo Day.
  • Quote of the talk: “Positivity is something worth fighting for — and the fight is not with others, it’s internal. If you want to make positive changes in your life, you have to consistently be positive.”

TED: Rethink: A night of talks in partnership with Brightline Initiative

If we want to do things differently, where do we begin? Curators Corey Hajim and Alex Moura host TED Salon: “Rethink,” in partnership with Brightline Initiative at the TED World Theater in New York City on June 6, 2019. (Photo: Dian Lofton / TED)

The event: TED Salon: “Rethink,” hosted by TED business curator Corey Hajim and TED tech curator Alex Moura

When and where: Thursday, June 6, 2019, at the TED World Theater in New York City

The partner: Brightline Initiative, with Brightline executive director Ricardo Vargas warming up the audience with opening remarks

Music: Dark pop bangers from the Bloom Twins

The Bloom Twins, sisters Anna and Sofia Kuprienko, perform their special brand of “dark pop” at TED Salon: “Rethink,” in partnership with Brightline Initiative. (Photo: Jasmina Tomic / TED)

The talks in brief:

Heidi Grant, social psychologist, chief science officer of the Neuroleadership Institute and associate director of Columbia University’s Motivation Science Center  

  • Big idea: Asking for help can be awkward and embarrassing, but we all need to get comfortable with doing it.
    The most important thing about asking for help is to do it — out loud, explicitly, directly. Grant provides four tips to ensure that your ask will get a yes. First, be clear about what kind of help you need. No one wants to give “bad” help, so if they don’t understand what you’re looking for, they probably won’t respond. Next, avoid disclaimers, apologies and bribes — no prefacing your ask with, “I really hate to do this” or offering to pay for assistance, which makes others feel uneasy and self-conscious. Third, don’t ask for help over email or text, because it’s too easy for someone to say “no” electronically; do it face-to-face or in a phone call. And last, follow up after and tell the other person exactly how their help benefited you.
  • Quote of the talk: “The reality of modern work and modern life is that nobody does it alone. Nobody succeeds in a vacuum. More than ever, we actually do have to rely on other people, on their support and their collaboration, in order to be successful.”

Stuart Oda, urban farm innovator, cofounder and CEO of Alesca Life

  • Big idea: The future of farming is looking up — literally.
    Recent innovations in food production technology allow us to grow up — 40 stories, even — rather than across, as in traditional farming. The efficiency of this vertical method lessens the amount of soil, water, physical space and chemical pesticides used to generate year-round yields of quality vegetables, for less money and more peace of mind. Oda shares a vision for a not-too-distant future where indoor farms are integrated seamlessly into cityscapes, food deserts no longer exist, and nutrition for all reigns supreme.
  • Fun fact: In 2050, our global population is projected to reach 9.8 billion. We’ll need to grow more food in the next 30 to 40 years than in the previous 10,000 years combined to compensate.

Efosa Ojomo researches global prosperity, analyzing why and how corruption arises. He discusses how we could potentially eliminate it by investing in businesses focused on wiping out scarcity. (Photo: Jasmina Tomic / TED)

Efosa Ojomo, global prosperity researcher and senior fellow at Christensen Institute for Disruptive Innovation

  • Big idea: We can eliminate corruption by investing in innovative businesses that target scarce products.
    Conventional thinking about reducing corruption goes like this: in order to eliminate it, you put laws in place, development inspires investment, and the economy booms. Prosperity researcher Efosa Ojomo thinks we have this equation backwards. Through years of researching what makes societies prosperous, he’s found that the best way to stem corruption is to encourage investment in businesses that can wipe out the scarcity that spurs coercion, extortion and fraud. “Corruption, especially for most people in poor countries, is a workaround. It’s a utility in a place where there are fewer options to solve a problem. It’s their best solution to the problem of scarcity,” Ojomo says. Entrepreneurs who address scarcity in corruption-ridden regions could potentially eliminate it across entire sectors of markets, he explains. Take, for example, Mo Ibrahim, the founder of mobile telecommunications company Celtel. His highly criticized idea to create an African cellular carrier put affordable cell phones in several sub-Saharan African countries for the first time, and today nearly every country there has its own carrier. It’s “market-creating innovations” like these that ignite major economic progress — and make corruption obsolete.
  • Quote of the talk: “Societies don’t develop because they’ve reduced corruption; they’re able to reduce corruption because they’ve developed.”

Shannon Lee, podcaster and actress

  • Big idea: Shannon Lee’s famous father Bruce Lee died when she was only four years old, yet she still treasures his philosophy of self-actualization: how to be yourself in the best way possible.
    Our lives benefit when we can connect our “why” (our passions and purpose) to our “what” (our jobs, homes and hobbies). But how to do it? Like a martial artist, Lee says: by finding the connecting “how” that consistently and confidently expresses our values. If we show kindness and love in one part of our life yet behave harshly in another, then we are fragmented — and we cannot progress gracefully from our “why” to our “what.” To illustrate this philosophy, Lee asks the audience to consider the question, “How are you?” Or rather, “How can I fully be me?”
  • Quote of the talk: “There were not multiple Bruce Lees: there was not private and public Bruce Lee, or teacher Bruce Lee and actor Bruce Lee and family-man Bruce Lee. There was just one, unified, total Bruce Lee.”

When’s the last time you ate more, and exercised less, than you should? Dan Ariely explores why we make certain decisions — and how we can change our behavior for the better. (Photo: Dian Lofton / TED)

Dan Ariely, behavioral economist and author of Payoff: The Hidden Logic That Shapes Our Motivations

  • Big idea: To change people’s behavior, you can’t just give them information on what they should do. You have to actually change the environment in which they’re making decisions.
    To bridge the gap between a current behavior and a desired behavior, you must first reduce the friction, or remove the little obstacles and annoyances between those two endpoints. Then you need to think broadly about what new motivations you could bring into that person’s life. Financial literacy is great, for instance, but the positive impact of such information wears off after a few days. What else could be done to help people put more away for a rainy day? You could ask their kids to send a weekly text reminding them to save money, or you could give them some kind of visual reminder — perhaps a coin — to help even more. There’s a lot we can do to spark behavioral change, Ariely says. The key is to get creative and experiment with the ways we do it.
  • Quote of the talk: “Social science has made lots of strides, and the basic insight is … the right way is not to change people — it’s to change the environment.”

Cory Doctorow: “Fake News is an Oracle”: how the falsehoods we believe reveal the truth about our fears and aspirations

For many years, I’ve been arguing that while science fiction can’t predict the future, it can reveal important truths about the present: the stories writers tell reveal their hopes and fears about technology, while the stories that gain currency in our discourse and our media markets tell us about our latent societal aspirations and anxieties. In Fake News is an Oracle, my latest Locus Magazine column, I use this tool to think about the rise of conspiratorial thinking and ask what it says about our world.

Fake news is another important barometer of our societal pressure: when we talk about conspiratorial thinking, we tend to do so ideologically, asking ourselves how it is that the same old conspiracy theories have become so much more convincing in recent years (anti-vax is as old as vaccination, after all), and treating the proponents of conspiracies as though they had acquired the ability to convince people by sharpening their arguments (possibly with the assistance of machine-learning systems).

But when you actually pay attention to the things that conspiracy-pushers say, there’s no evidence that they’re particularly convincing. Instead of ideological answers to the spread of conspiracies, we can look for material answers for the change in our public discourse.

Fake news, in this light, reveals important truths about what our material conditions have led us to fear (that the ship is sinking and there aren’t enough lifeboats for all of us) and hope (that we can get a seat in the lifeboat if we help the powerful and ruthless push other people out).

Ten years ago, if you came home from the doctor’s with a prescription for oxy, and advice that they were not to be feared for their addictive potential, and an admonition that pain was “the fourth vital sign,” and its under-treatment was a great societal cruelty, you might have met someone who said that this was all bullshit, that you were being set up to be murdered by a family of ruthless billionaires whose watchdog had switched sides.

You might have called that person an “opioid denier.”

Today, we worry that anti-vaxers represent the resurgence of long-dormant epidemics. Tomorrow, we may find that they presaged an epidemic of collapsed trust in our shared ability to determine the truth.

Fake News Is an Oracle [Cory Doctorow/Locus]

(Image: Todd Dailey, CC-BY-SA)

Cryptogram: Yubico Security Keys with a Crypto Flaw

Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys it generates. The security keys are used by thousands of federal employees on a daily basis, letting them securely log-on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps "some predictable content" inside the device's data buffer that could impact the randomness of the keys generated. Security keys with ECDSA signatures are in particular danger. A total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.
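To see why biased nonces are so dangerous, consider the degenerate case: two ECDSA signatures that share one nonce leak the private key with simple modular arithmetic (the partially-static 80 bits in this flaw give an attacker a head start toward the same outcome via lattice techniques). A toy sketch with made-up small numbers, where n stands in for the group order and r is a placeholder for the x-coordinate of k·G:

```javascript
// Toy demonstration of ECDSA nonce reuse. All values are tiny,
// illustrative BigInts; s = k^-1 (z + r*d) mod n as in real ECDSA.
const n = 7919n;                              // toy group order (prime)
const mod = (a, m) => ((a % m) + m) % m;      // always-positive modulo

// Square-and-multiply modular exponentiation.
function modPow(base, exp, m) {
  let result = 1n;
  base = mod(base, m);
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}
const inv = (a, m) => modPow(a, m - 2n, m);   // Fermat inverse (m prime)

const d = 1234n;            // the private key an attacker wants
const k = 77n;              // the nonce, wrongly reused for two messages
const r = 999n;             // placeholder for the x-coordinate of k*G
const z1 = 111n, z2 = 222n; // hashes of the two signed messages

const s1 = mod(inv(k, n) * (z1 + r * d), n);
const s2 = mod(inv(k, n) * (z2 + r * d), n);

// The attacker sees (r, s1, z1) and (r, s2, z2) with identical r:
const kRecovered = mod((z1 - z2) * inv(mod(s1 - s2, n), n), n);
const dRecovered = mod((s1 * kRecovered - z1) * inv(r, n), n);
// kRecovered === k and dRecovered === d: the key is gone.
```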

Boing Boing post.

Worse Than Failure: Tales from the Interview: A Passion for Testing

Absolute Value

The interview was going well—as well as one could possibly expect. Alarik, the candidate, had a no-nonsense attitude, a high degree of precision to his speech, and a heavy German accent. He was applying for a job with Erik's company as a C# developer working on an inherited legacy codebase, and he'd almost earned himself the job. There was just one routine question left in the interview:

"So, why did you leave your previous job?"

"Ach," he said, his face twisting with disgust. "It was my superiors. They did not wish to improve the codebase."

This was the first red flag. Erik's company had a terrible codebase, littered with technical debt. He was all for improving it, but it had to be done carefully to avoid breaking their bread-and-butter application while they struggled to keep the company solvent. He'd been leading careful efforts to tackle the worst bits while still delivering new functionality. "Can you tell me more about this?" he asked, frowning just a touch.

"I was all set to develop unit tests for every unit," Alarik replied. "It would have saved effort in the long run, but the bosses did not see return on investment. It is always about money with them, never craftsmanship."

Erik chuckled weakly. "Well, we're a small company, run by developers and former developers. We don't have MBAs here running the show, so we definitely appreciate craftsmanship. But I'll warn you off the bat, the product you'd be working on has no unit tests. We're working towards that, but we're not there yet."

Another disgusted noise. "Tch. If you hire me, I will need time to improve this. Perhaps not the entire codebase, but at least the core functionality."

"I appreciate your candor." And Erik did—it meant this leading candidate was pushed back to second place. After all, he might be a great developer, but dictating demands in the interview wasn't exactly a strong recommendation.

They proceeded with the interview process, but their first choice fell through, so they ended up inviting Alarik back for a second interview. After all, not everyone who appreciates craftsmanship and prefers unit tests is necessarily inflexible.

"Do you have code samples we could look at?" Erik asked, hoping to convince himself the value of a good developer was worth some butting heads about priorities.

"Yes, yes, of course. Here, let me show you. I have brought you a unit test I wrote."

"Great," said Erik, feigning some degree of enthusiasm. This again ...

"This is a unit test for the Absolute Value function in the Math library," Alarik continued, pulling up the code with a few deft keystrokes.

Please tell me it's just a sample, Erik thought. Does he think I don't know what a unit test is? Or does he really not trust the built-in absolute value function? What kind of person has a unit test for Math.Abs() pre-written on their computer? Will he tell me some war story about some platform where Math.Abs() failed to work on some edge case?!

Erik mustered an insincere smile, sitting through Alarik’s walkthrough. It was a well-written test, for certain, but he just couldn’t shake the questions about why it was being shown in the first place. “This is a little simple. Do you have an example of a test you wrote for some actual business functionality?”

"Ah, yes. Here is my test for `String.split`."

Changing the subject, Erik asked about another sticking point: "You mentioned in our first interview that you wanted to break up class files that are too large."

"Yes. No file should be over 400 lines."

Erik thought back to one code file in their codebase that was over 11,000 lines, wincing a little. We're overdue to break this up, but I'd hate for this guy to come in and rearrange every one of our hundreds of files. "How strictly would you apply that rule to a legacy codebase?" he asked.

"It is good to be flexible," Alarik replied. "It's not a strict rule."

Feeling relieved, Erik drilled further: "What if you found a class file that's a thousand lines long?"

"That I would break up, of course."

"How about 500?"

"I would break it up."

"What if there's no natural way to break it up?"

"There is always a natural way to break up complexity."

“Well, how do you find that? Let’s say you found an 11,000-line file. What steps would you take to break it up?”

"I would move blocks of code into other files."

That did it. Erik thanked Alarik for his time, deciding to wait 24 hours and then give him the bad news. In rejecting him, Erik still felt guilty—he felt like the team was sloppy for not using unit tests and having a huge range of source code file sizes.

No doubt we'll falter and fail, he thought sarcastically. It would serve us right for not employing the only person who could have taught us proper software development processes.

And yet, three years on, in submitting this story, Erik wanted us to know that the company was doing better than ever, still with 0 unit tests in a codebase that's loaded with WTFs, and still feeling guilty enough about this interview to write in.


,

Sam Varghese: The Rise and Fall of the Tamil Tigers is a third-rate book. Don’t waste your money buying it

How do you evaluate a book before buying it? In a traditional bookshop, one can at least scan some pages. The art master at my secondary school told his students of a method he had: read page 15 or 16, then flip to page 150 and read that. If the book still interests you, buy it.

But when it’s online buying, what happens? Not every book you buy is from a known author and many online booksellers do not offer the chance to flip through even a few pages. At times, this ends with the buyer getting a dud.

One book I bought recently proved to be a dud. I am interested in the outcome of the civil war in Sri Lanka where I grew up. Given that, I picked up the first book about the ending of the war, written in 2011 by Australian Gordon Weiss, a former UN official. This is an excellent account of the whole conflict, one that also gives a considerable portion of the history of the island and the events that led to the rise of tensions between the Sinhalese and the Tamils.

Prior to that, I had picked up a number of other books, including the only biography of the Tamil leader, Velupillai Pirapaharan. Many of these books are written by Indians, and the standard of English is not as good as that in Weiss’s book; the material in all of them, however, is of a uniformly high standard.

Recently, I bought a book titled The Rise and Fall of the Tamil Tigers that gave its publication date as 2018 and claimed to tell the story of the war in its entirety. The reason I bought it was to see if it bridged the gap between 2011, when Weiss’s book was published, and 2018, when the current book came out.

But it turned out to be a scam. I am not sure why the bookseller, The Book Depository, stocks this volume, given its shocking quality.

The blurb about the book runs thus: “This book offers an accurate and easy to follow explanation of how the Tamil Tigers, who are officially known as the Liberation Tigers of Tamil Eelam (LTTE), was defeated. Who were the major players in this conflict? What were the critical strategic decisions that worked? What were the strategic mistakes and their consequences? What actually happened on the battlefield? How did Sri Lanka become the only nation in modern history to completely defeat a terrorist organisation? The mind-blowing events of the Sri Lankan civil war are documented in this book to show the truth of how the LTTE terrorist organisation was defeated. The defeat of a terrorist organisation on the battlefield was so unprecedented that it has rewritten the narrative in the fight against terrorism.”

Nothing could be further from the truth.

The book is published by the author himself, an Australian named Damian Tangram, who appears to have no connection to Sri Lanka apart from the fact that he is married to a Sri Lankan woman.

It is extremely badly written, has obviously not been edited, and does not appear to have even been run through a spell-checker before printing. This can be gauged from the fact that the same word is spelt in different ways on the same page.

Capital letters are spewed all over the pages and even an eighth-grade student would not write rubbish of this kind.

In many parts of the book, government propaganda is republished verbatim and it all looks like a cheap attempt to make some money by taking up a subject that would be of interest, and then producing a low-grade tome.

Some of the sources it quotes are highly dubious, one of them being the Singapore-based so-called terrorism expert Rohan Gunaratna, who has been unmasked as a fraud on many occasions.

The reactions of the booksellers — I bought it through AbeBooks, which groups together a number of sellers from whom one can choose; I chose The Book Depository — were quite disconcerting. When the abysmal quality of the book was brought to their notice, both assumed I wanted my money back. What I actually wanted was for them to remove it from sale so that nobody else would get cheated the way I was.

After some back and forth, and both companies refusing to understand that the book is a fraud, I gave up.

ME: Long-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs, the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. It is still working very well for me today; apart from only having 32G of internal storage and an OS update that prevents Android apps from writing to the micro SD card (so I have to use USB to copy TV shows onto it), there’s nothing more I need from a tablet. Strangely, I even get good battery life out of it: I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as an SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for the laptop, 30 cents for the tablet, and 50 cents for the phone. Adding the $20 per month pre-paid plan I’m on with Aldi Mobile brings the overall expense to about $1.66 per day.
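The amortised figures above are easy to reproduce; here is a small sketch using the prices quoted in this post (small differences from the quoted per-day numbers are just rounding):

```python
# Rough cost-per-day arithmetic for the devices discussed above.
def cost_per_day(price_aud, years):
    """Amortised daily cost of a device over its expected lifetime."""
    return price_aud / (years * 365)

phone  = cost_per_day(400, 2)   # refurbished phone replaced every 2 years
laptop = cost_per_day(306, 4)   # e.g. the Thinkpad T420 over ~4 years
tablet = cost_per_day(579, 5)   # Galaxy Note 10.1 over ~5 years so far
plan   = 20 * 12 / 365          # $20/month pre-paid plan

total = phone + laptop + tablet + plan
print(f"phone ${phone:.2f}, laptop ${laptop:.2f}, "
      f"tablet ${tablet:.2f}, plan ${plan:.2f}, total ${total:.2f}/day")
```

Running this gives roughly $0.55 for the phone, $0.21 for the laptop, $0.32 for the tablet, and $0.66 for the plan, consistent with the ballpark figures above.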

Saving Money

A laptop is very important to me, but the amounts of money I’m spending don’t reflect that. It seems, though, that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real way to get more utility by spending more). I also don’t have any option to spend less on a tablet: 5 years is a great lifetime for a device that is practically impossible to repair (a repair would cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years, which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years, that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

,

CryptogramFriday Squid Blogging: Fantastic Video of a Juvenile Giant Squid

It's amazing:

Then, about 20 hours into the recording from the Medusa's fifth deployment, Dr. Robinson saw the sharp points of tentacles sneaking into the camera's view. "My heart felt like exploding," he said on Thursday, over a shaky phone connection from the ship's bridge.

At first, the animal stayed on the edge of the screen, suggesting that a squid was stalking the LED bait, pacing alongside it.

And then, through the drifting marine snow, the entire creature emerged from the center of the dark screen: a long, undulating animal that suddenly opened into a mass of twisting arms and tentacles. Two reached out and made a grab for the lure.

For a long moment, the squid seemed to explore the strange non-jellyfish in puzzlement. And then it was gone, shooting back into the dark.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramI'm Leaving IBM

Today is my last day at IBM.

If you've been following along, IBM bought my startup Resilient Systems in Spring 2016. Since then, I have been with IBM, holding the nicely ambiguous title of "Special Advisor." As of the end of the month, I will be back on my own.

I will continue to write and speak, and do the occasional consulting job. I will continue to teach at the Harvard Kennedy School. I will continue to serve on boards for organizations I believe in: EFF, Access Now, Tor, EPIC, Verified Voting. And I will increasingly be an advocate for public-interest technology.

Krebs on SecurityMicrosoft to Require Multi-Factor Authentication for Cloud Solution Providers

It might be difficult to fathom how this isn’t already mandatory, but Microsoft Corp. says it will soon force all Cloud Solution Providers (CSPs) that help companies manage their Office365 accounts to use multi-factor authentication. The move comes amid a noticeable uptick in phishing and malware attacks targeting CSP employees and contractors.

When an organization buys Office365 licenses from a reseller partner, the partner is granted administrative privileges in order to help the organization set up the tenant and establish the initial administrator account. Microsoft says customers can remove that administrative access if they don’t want or need the partner to have access after the initial setup.

But many companies partner with a CSP simply to gain more favorable pricing on software licenses — not necessarily to have someone help manage their Azure/O365 systems. And those entities are more likely to be unaware that just by virtue of that partnership they are giving someone at their CSP (or perhaps even outside contractors working for the CSP) full access to all of their organization’s email and files stored in the cloud.

This is exactly what happened with a company whose email systems were rifled through by intruders who broke into PCM Inc., the world’s sixth-largest CSP. The firm had partnered with PCM because doing so was far cheaper than simply purchasing licenses directly from Microsoft, but its security team was unaware that a PCM employee or contractor maintained full access to all of their employees’ email and documents in Office365.

As it happened, the PCM employee was not using multi-factor authentication. And when that PCM employee’s account got hacked, so too did many other PCM customers.

KrebsOnSecurity pinged Microsoft this week to inquire whether there was anything the company could be doing to better explain this risk to customers and CSP partners. In response, Microsoft said while its guidance has always been for partners to enable and require multi-factor authentication for all administrators or agent users in the partner tenants, it would soon be making it mandatory.

“To help safeguard customers and partners, we are introducing new mandatory security requirements for the partners participating in the Cloud Solution Provider (CSP) program, Control Panel Vendors, and Advisor partners,” Microsoft said in a statement provided to KrebsOnSecurity.

“This includes enforcing multi-factor authentication for all users in the partner tenants and adopting secure application model for their API integration with Microsoft,” the statement continues. “We have notified partners of these changes and enforcement will roll out over the next several months.”

Microsoft said customers can check or remove a partner’s delegated administration privileges from their tenants at any time, and that guidance on how to do this is available here and here.

This is a welcome — if long overdue — change. Countless data breaches are tied to weak or default settings. Whether we’re talking about unnecessary software features turned on, hard-coded passwords, or key security settings that are optional, defaults matter tremendously because far too many people never change them — or they simply aren’t aware that they exist.

CryptogramCellebrite Claims It Can Unlock Any iPhone

The digital forensics company Cellebrite now claims it can unlock any iPhone.

I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It's all of us that need to know.

Worse Than FailureCoded Smorgasbord: Classic WTF: Code Comedians

Everybody's a comedian, until the joke's on them. We close out this week of classics with some code and comments by people who were being funny. We'll return to our regularly scheduled programming next week! Original. --Remy

When it comes to bad code, everybody thinks they’re a comedian. Heck, look at us! Stupid programmer jokes are a game everyone can play, though, so let’s enjoy an evening at the Improv with some code comedians.

Brian sends us this enum, which I’m sure was very funny back in 2007.

Public Enum TouchMessageBoxResponse
       [Ok]
       [Cancel]
       [Yes]
       [No]
       [DontTazeMeBro]
End Enum

Brian claims that [DontTazeMeBro] is never returned, but if it were, I’m certain it would cause the application to Rickroll the user. Or taze them, I suppose.

Sometimes, a developer just has to puzzle over the application logic. Jeremy’s predecessor had some questions about this block of code:

public function Sanitize($string){
    return null;
    #wtf is this doing here. That's a hell of a sanitization lol.
}

Not all jokes in code are intentional. Tony sends us this code block, from Polish developers. He assures us that this is an innocent misuse of English.

//Don't expect much work to get done after this event is triggered
CometWorker.OnFirstJoint += new FirstJointObjects(CometWorker_OnFirstJoint);

Hopefully that’s not the same Tony that worked in Gavin’s office, writing Excel macros.

On Error GoTo holysheet
'… snip
holysheet:
  MsgBox "Enter a sheet name Tony", vbOkOnly, "You sheethead, you bungled."

And finally, some developers make sure the joke is on those that come after them. David found this.

'Saw a lot of instances where the same box_num would get used elsewhere
'which caused this screen to show more units than were actually in this batch
'This is probably a serious issue
'I'll let you guys discover it after I'm gone. ;D
'Love,
'Martin
strSql = "SELECT …"
[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Worse Than FailureError'd: Not That Kind of Chicken

"I probably wouldn't pick that picture, but technically, Google isn't wrong," wrote Dan K.

 

Daniel writes, "Yeah Amazon, that's right...I keep all the good stuff in my 'null' album."

 

"I was a little worried that Microsoft's new Edge browser flagged our company website as potentially malicious, but then I saw this," wrote Nigel.

 

"SO... Burger King wants me to wait ~10 days between orders? Maybe it's for the best?" writes Pascal.

 

"Oh, thank goodness...I thought that I was going to have to make a decision sometime this year," writes Geoff O.

 

Daniel writes, "My Dell laptop sure has a positive attitude when trying to install updates using its own management software."

 


,

Cory DoctorowHouston! Come see Hank Green and me on July 31

I’m coming to Houston on July 31 to appear with Hank Green at an event for the paperback launch of his outstanding debut novel An Absolutely Remarkable Thing: we’re on at 7PM at Spring Forest Middle School (14240 Memorial Drive, Houston, TX 77079); it’s a ticketed event and the ticket price includes a copy of Hank’s book. Hope to see you there! (Images: Vlogbrothers, Jonathan Worth, CC-BY)

Krebs on SecurityBreach at Cloud Solution Provider PCM Inc.

A digital intrusion at PCM Inc., a major U.S.-based cloud solution provider, allowed hackers to access email and file sharing systems for some of the company’s clients, KrebsOnSecurity has learned.

El Segundo, Calif. based PCM [NASDAQ:PCMI] is a provider of technology products, services and solutions to businesses as well as state and federal governments. PCM has nearly 4,000 employees, more than 2,000 customers, and generated approximately $2.2 billion in revenue in 2018.

Sources say PCM discovered the intrusion in mid-May 2019. Those sources say the attackers stole administrative credentials that PCM uses to manage client accounts within Office 365, a cloud-based file and email sharing service run by Microsoft Corp.

One security expert at a PCM customer who was recently notified about the incident said the intruders appeared primarily interested in stealing information that could be used to conduct gift card fraud at various retailers and financial institutions.

In that respect, the motivations of the attackers seem similar to the goals of intruders who breached Indian IT outsourcing giant Wipro Ltd. earlier this year. In April, KrebsOnSecurity broke the news that the Wipro intruders appeared to be after anything they could quickly turn into cash, and used their access to harvest gift card information from a number of the company’s customers.

It’s unclear whether PCM was a follow-on victim from the Wipro breach, or if it was attacked separately. As noted in that April story, PCM was one of the companies targeted by the same hacking group that compromised Wipro.

The intruders who hacked into Wipro set up a number of domains that appeared visually similar to that of Wipro customers, and many of those customers responded to the April Wipro breach story with additional information about those attacks.

PCM never did respond to requests for comment on that story. But in a statement shared with KrebsOnSecurity today, PCM said the company “recently experienced a cyber incident that impacted certain of its systems.”

“From its investigation, impact to its systems was limited and the matter has been remediated,” the statement reads. “The incident did not impact all of PCM customers; in fact, investigation has revealed minimal-to-no impact to PCM customers. To the extent any PCM customers were potentially impacted by the incident, those PCM customers have been made aware of the incident and PCM worked with them to address any concerns they had.”

On June 24, PCM announced it was in the process of being acquired by global IT provider Insight Enterprises [NASDAQ:NSIT]. Insight has not yet responded to requests for comment.

Earlier this week, cyber intelligence firm RiskIQ published a lengthy analysis of the hacking group that targeted Wipro, among many other companies. RiskIQ says this group has been active since at least 2016, and posits that the hackers may be targeting gift card providers because they provide access to liquid assets outside of the traditional western financial system.

The breach at PCM is just the latest example of how cybercriminals increasingly are targeting employees who work at cloud data providers and technology consultancies that manage vast IT resources for many clients. On Wednesday, Reuters published a lengthy story on “Cloud Hopper,” the nickname given to a network of Chinese cyber spies that hacked into eight of the world’s biggest IT suppliers between 2014 and 2017.

LongNowLong-Lived Institutions

The Sagrada Familia Catholic Church in Barcelona, Spain. The Catholic Church is one of the longest-lived institutions in human history.

The Long Now Foundation was founded in 01996 with the idea to build a 10,000 year clock — an icon to long-term thinking that might inspire people to engage more deeply with the future. We have been prototyping and designing the Clock since then, and are now in the installation phase of the monument scale version in West Texas.

It was acknowledged at the inception of Long Now that while designing and building a mechanical device to last 10,000 years would involve difficult engineering, designing the institution to last alongside it was the truly challenging project. We have human-made artifacts from tens of thousands to even millions of years ago, but there are vanishingly few human organizations and institutions that have lasted longer than even a single millennium. Even more concerning is the trend that organizations are lasting for shorter and shorter lengths of time, even though the challenges such organizations must address — whether they be climate change, education, or ecosystem viability — are only growing in temporal scale.

As we stand at the edge of a generational shift as an organization, it became clear that we needed a generational unit of time — 25 years — to use for our planning. There are four hundred 25-year generations in 10,000 years, and we are just closing the very first one. So if the “Long Now Quarter” is 25 years, and we spent our first quarter building the Clock and our community, it is now time to start planning our second quarter, in which we are thinking seriously about how to create a very long-lived organization.

In February, we held a small charrette at The Interval to broaden our thinking about building long-lived institutions. The group consisted of Long Now staff, board members, and outside experts.

Scenario Planning for the Long-term

The event opened with Peter Schwartz, a founding board member of Long Now and author of The Art of the Long View, giving an overview of his work in scenario planning for organizations.

A map from 01701 depicting California as an island that Schwartz used to exemplify the limited nature of our mental maps.

Schwartz discussed how organizations are often limited by mental maps in the choices they make about the future. Effective scenario planning helps organizations build a broader set of mental maps so that they are better prepared for a changing future. The test of a good scenario, Schwartz argued, is not whether or not it is accurate. Rather, it’s about whether or not the leadership of an organization is influenced to make better decisions in light of that scenario.

Intellectual Dark Matter

Following Schwartz, Samo Burja, a researcher specializing in how civilizations function, introduced the idea of intellectual dark matter, which he defines as “knowledge we cannot see publicly, but whose existence we can infer because our institutions would fly apart if the knowledge we see were all there was.”

Burja used the Bessemer steel process to illustrate his idea of tacit knowledge, “knowledge that is not transmitted in written form.”

Burja explored how civilizations have historically lost knowledge, information, and in some cases even the foundations of their entire societies. In so doing, he was able to highlight many areas that any organization aspiring for long-term survival must avoid.

The Data of Long-Lived Institutions

Following Burja, I gave an overview of some of my research into what types of organizations last, what they have in common, and which elements and strategies they developed that have contributed to their survival. Unfortunately, some of the longest-lived organizations, such as the Catholic Church or a temple-building company in Japan, have some of the least portable lessons for us. Their success is too singular, the result of highly specific circumstances that will never apply to another organization.

Kees Van Der Heijden’s “strangely prophetic” business idea for Long Now from 01997.

As we delve more deeply into the stories of the world’s longest-lived organizations, we are looking for mechanisms that we can borrow and put into practice as cultural institutions, companies, and governments. In so doing, we hope to extend the lifespan of many more organizations so that they are better able to address issues facing humanity that need more than a few years to solve. Imagine how valuable a resource the World Health Organization would be if it had been around for a millennium or two, with all its health data intact. It seems it is our duty to make sure the resources are available so that some version of the WHO and its data can last at least that long into the future.

The Longevity of Nature

To broaden our thinking about the mechanisms of long-term systemic success, we heard from complexity researcher Dr. Eric Berlow, who has been mapping ecological systems to see how robust or fragile they may be. Berlow’s detailed studies of these complex systems are used to create beautiful and informative visuals that clearly map how dependencies in a system combine to create the whole.

A visualization by Berlow showing the shapes of dependency chains in certain ecosystems.

The recurring theme in nature, Berlow said, is that everything is connected, but not equally connected or randomly connected. “The non-randomness in the patterning of the architecture of nature,” Berlow concluded, “is where the library of information about longevity exists.”

Language, Meaning, and Culture

In the afternoon, the director of our own Rosetta Project, Dr. Laura Welcher, a PhD linguist, discussed one of the longest-lived human systems — language itself. While language has been with us for a very long time, it is an evolving and changing system. Welcher pointed out the drastic difference between preserving information and preserving meaning.

Generational Transitions and Governing Systems

The last talk of the day came from one of Long Now’s board members, Katherine Fulton. Fulton emerged as a leading thinker on impact investing and the future of philanthropy, and has worked for many companies and foundations as they transition from one generation to the next.

Fulton argued that the key lesson for a company surviving from the first generation to the next is to have enough structure, but not too much.

Fulton pointed out that generational change, and how it is handled, is often one of the most critical moments in any organization that hopes to last more than a decade or two. In times of generational change, much of the ability for a successful transition comes from the founding DNA and governing systems that were set up at the organization’s inception.


This is just the beginning of our inquiry into understanding how organizations last. Key questions remain: How do we create best practices for governing systems? How do we develop a culture that is attuned to the nuances of systemic robustness? Fundamentally, this is a systems design question. For a system to last, it needs to have ways of adapting to a changing world, and be able to fail in small ways, and recover from those failures without having the whole system crash. I suspect that as we learn more about specific examples of long-lived organizations, we will hear about several instances in which they almost failed, only to recover and become stronger as a result. As we continue our learning, we hope to create the beginning of a knowledge base that is applicable not only to Long Now, but to any organization aiming to thrive across generations.


We would like to thank all those who participated in the workshop:

  • Stewart Brand — President of The Long Now Foundation and founder of the Whole Earth Catalog
  • Peter Schwartz — Board Member at Long Now and Senior VP of Strategic Planning at Salesforce
  • Katherine Fulton — Board Member at Long Now and former President of the Monitor Institute
  • Samo Burja — Researcher at Bismarck Analysis
  • Alexander Rose — Executive Director of The Long Now Foundation
  • Laura Welcher — Director of Operations and the Long Now Library
  • Danica Remy — President of B612 Foundation
  • Marty Krasney — Executive Director of the Dalai Lama Fellows
  • Eric Berlow — Ecological Network Scientist, Co-founder of Vibrant Data Inc.
  • Lawrence Wilkinson — Chairman of Heminge & Condell and Vice Chairman of Common Sense Media
  • Abby Smith Rumsey — Writer and Historian
  • Jim O’Neill — Managing Director Mithril Capital Management
  • Nicholas Paul Brysiewicz — Director of Development at The Long Now Foundation
  • Daniel Claussen — Research Fellow at The Long Now Foundation
  • James Cham — Partner at Bloomberg Beta
  • Frances Winter — Chair of English Department at Massbay Community College
  • David Lei — Co-founder of the Chinese American Community Foundation
  • Zhan Li — Researcher at the Gottlieb Duttweiler Institute
  • Lulie Tanett — Writer
  • Frederick Leist — Information Technology Consultant, Medieval Linguistics
  • David McConville — President of the Buckminster Fuller Institute
  • Erik Davis — Author and Journalist
  • Gary Yost — Founder of Wisdom VR, Filmmaker and Software Designer
  • Hannu Rajaniemi — Science Fiction Author
  • Tim Chang — Partner at Mayfield Fund

CryptogramSpanish Soccer League App Spies on Fans

The Spanish Soccer League's smartphone app spies on fans in order to find bars that are illegally streaming its games. The app listens with the microphone for the broadcasts, and then uses geolocation to figure out where the phone is.

The Spanish data protection agency has ordered the league to stop doing this. Not because it's creepy spying, but because the terms of service -- which no one reads anyway -- weren't clear.

Worse Than FailureCodeSOD: Classic WTF: The not-so-efficient StringBuilder

As our short summer break continues, this one is from waaaaay back, but it's yet another example of how seemingly simple tasks and seemingly simple data-types can be too complicated sometimes. Original--Remy

The .NET developers out there have likely heard that using a StringBuilder is a much better practice than string concatenation. Something about strings being immutable and creating new strings in memory for every concatenation. But, I'm not sure that this (as found by Andrey Shchekin) is what they had in mind ...


public override string getClassVersion() {
return
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append("V0.01")
.append(", native: ibfs32.dll(").ToString())
.append(DotNetAdapter.getToken(this.mainVersionBuffer.ToString(), 2)).ToString())
.append(") [type").ToString())
.append(this.portType).ToString())
.append(":").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 0xff)).ToString())
.append("](").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 2)).ToString())
.append(")").ToString();
}

Note that since this is J#, StringBuffer and StringBuilder are the same thing.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Cory DoctorowPodcast number 300: “Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today’s Monopolies”

I just published the 300th installment of my podcast, which has been going since 2006 (!); I present a reading of my EFF Deeplinks essay Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today’s Monopolies, where I introduce the idea of “Adversarial Interoperability,” which allows users and toolsmiths to push back against monopolists.

Facebook’s advantage is in “network effects”: the idea that Facebook increases in value with every user who joins it (because more users increase the likelihood that the person you’re looking for is on Facebook). But adversarial interoperability could allow new market entrants to arrogate those network effects to themselves, by allowing their users to remain in contact with Facebook friends even after they’ve left Facebook.

This kind of adversarial interoperability goes beyond the sort of thing envisioned by “data portability,” which usually refers to tools that allow users to make a one-off export of all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you’re in the process of migrating away from.

Big Tech platforms leverage both their users’ behavioral data and the ability to lock their users into “walled gardens” to drive incredible growth and profits. The customers for these systems are treated as though they have entered into a negotiated contract with the companies, trading privacy for service, or vendor lock-in for some kind of subsidy or convenience. And when Big Tech lobbies against privacy regulations and anti-walled-garden measures like Right to Repair legislation, they say that their customers negotiated a deal in which they surrendered their personal information to be plundered and sold, or their freedom to buy service and parts on the open market.

But it’s obvious that no such negotiation has taken place. Your browser invisibly and silently hemorrhages your personal information as you move about the web; you paid for your phone or printer and should have the right to decide whose ink or apps go into them.

Adversarial interoperability is the consumer’s bargaining chip in these coercive “negotiations.” More than a quarter of Internet users have installed ad-blockers, making it the biggest consumer revolt in human history. These users are making counteroffers: the platforms say, “We want all of your data in exchange for this service,” and their users say, “How about none?” Now we have a negotiation!

Or think of the iPhone owners who patronize independent service centers instead of using Apple’s service: Apple’s opening bid is “You only ever get your stuff fixed from us, at a price we set,” and the owners of Apple devices say, “Hard pass.” Now it’s up to Apple to make a counteroffer. We’ll know it’s a fair one if iPhone owners decide to patronize Apple’s service centers.

MP3

CryptogramMongoDB Offers Field Level Encryption

MongoDB now has the ability to encrypt data by field:

MongoDB calls the new feature Field Level Encryption. It works kind of like end-to-end encrypted messaging, which scrambles data as it moves across the internet, revealing it only to the sender and the recipient. In such a "client-side" encryption scheme, databases utilizing Field Level Encryption will not only require a system login, but will additionally require specific keys to process and decrypt specific chunks of data locally on a user's device as needed. That means MongoDB itself and cloud providers won't be able to access customer data, and a database's administrators or remote managers don't need to have access to everything either.

For regular users, not much will be visibly different. If their credentials are stolen and they aren't using multifactor authentication, an attacker will still be able to access everything the victim could. But the new feature is meant to eliminate single points of failure. With Field Level Encryption in place, a hacker who steals an administrative username and password, or finds a software vulnerability that gives them system access, still won't be able to use these holes to access readable data.
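The core idea — fields are encrypted on the client before they ever reach the database, so a compromised server account yields only ciphertext — can be illustrated with a toy sketch. This is not MongoDB’s actual API (the real feature lives in MongoDB’s drivers and uses vetted cryptography); the XOR keystream below is a deliberately simple stand-in used only to show the data flow:

```python
import hashlib
import secrets

# Toy illustration of client-side field-level encryption: only designated
# fields are scrambled before the document leaves the client. The cipher
# here (SHA-256-derived XOR keystream) is for demonstration only and must
# not be used for real data; a production system uses a vetted library.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, plaintext: str) -> dict:
    """Encrypt one field client-side; the server only ever sees this blob."""
    nonce = secrets.token_bytes(16)
    data = plaintext.encode()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}

def decrypt_field(key: bytes, blob: dict) -> str:
    """Recover the field locally, using the key the server never holds."""
    nonce = bytes.fromhex(blob["nonce"])
    ct = bytes.fromhex(blob["ciphertext"])
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct)))).decode()

key = secrets.token_bytes(32)  # held client-side, never sent to the database
doc = {"name": "Alice", "ssn": encrypt_field(key, "123-45-6789")}
# The stored document exposes "name" in the clear but only ciphertext for
# "ssn" -- a stolen admin login sees the blob, not the value.
assert decrypt_field(key, doc["ssn"]) == "123-45-6789"
```

The point of the sketch is the asymmetry: the database can store, index, and replicate `doc` without ever being able to read the protected field, which is exactly the single-point-of-failure elimination described above.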

LongNowRevive & Restore Releases Ocean Genomics Horizon Scan

Revive & Restore has released a 200-page report providing the first-of-its-kind assessment of genomic and biotech innovations to complement, enhance, and accelerate today’s marine conservation strategies.

Revive & Restore’s mission is to enhance biodiversity through the genetic rescue of endangered and extinct species. In pursuit of this, and in response to global threats to marine ecosystems, the organization conducted an Ocean Genomics Horizon Scan — interviewing almost 100 marine biologists, conservationists, and technologists representing over 60 institutions. Each was challenged to identify ways that rapid advances in genomics could be applied to address marine conservation needs. The resulting report is a first-of-its-kind assessment highlighting the opportunities to bring genomic insight and biotechnology innovations to complement current and future marine conservation.

Our research has shown that we now have the opportunity to apply biotechnology tools to help solve some of the most intractable problems in ocean conservation resulting from: overfishing, invasive species, biodiversity loss, habitat destruction, and climate change. This report presents the most current genomic solutions to these threats and develops 10 “Big Ideas” – which, if funded, can help build transformative change and be catalytic for marine health.

Read the full report here.

CryptogramPerson in Latex Mask Impersonated French Minister

Forget deep fakes. Someone wearing a latex mask fooled people on video calls for a period of two years, successfully scamming 80 million euros from rich French citizens.

Sam VargheseMethinks Israel Folau is acting like a hypocrite

The case of Israel Folau has been a polarising one in Australia with some supporting the rugby union player’s airing of his Christian beliefs and others loudly opposed. In the end, it turns out that Folau may be guilty of one of the sins of which he accuses others: hypocrisy.

Last year, Folau made a post on Instagram saying adulterers, drunkards, fornicators, homosexuals and the like would all go to hell if they did not repent and come to Jesus. In this, he was merely stating what the Bible says about these kinds of people. He was cautioned about such posts by his employer, Rugby Australia. Whether he signed any agreement about not putting up similar posts in the future is unknown.

A second similar post this year resulted in a fairly big outcry among the media and those who champion the gay cause. Folau had a number of meetings with his employers and was finally shown the door. He was on a four-year $4 million contract so he has lost a considerable amount of cash. The Australian team has lost a lot too, as he was by far the best player and the World Cup rugby tournament is in September this year. The main sponsor of the team is Qantas and the chief executive, Alan Joyce, is gay. There have been accusations that Joyce has been a pivotal force in pushing for Folau’s sacking.

Soon after this, Folau announced that he was suing Rugby Australia and sought to raise $3 million for defending himself. His campaign on GoFundMe had reached about $750,000 when it was pulled down by the site. But the Christian lobby started another fund for Folau and it has now raised well beyond a million dollars.

Now Folau has the right to hold his own religious beliefs. He is also free to state them openly. But in this he goes against the very Bible he quotes, for Christians are told to practise their faith quietly, and not in the manner of the scribes and Pharisees of Jesus’ time, people who took great care to show outwardly that they were religious – though in private they were as worldly and non-religious as anyone else. In short, they were hypocrites.

Christians were told to behave in this manner and promised that their God would reward them openly. In other words, a Christian is expected to influence others not by talking loudly and flaunting his or her faith, but by impressing others through behaviour and attitude. Folau’s flaunting of his faith appears to go against this admonishment.

Then again, Folau’s seeking of money to fund his court case is a very worldly act. First of all, Christians are told not to go to court against their neighbours but rather to settle things peacefully. Even if Folau decided to violate this teaching, he has plenty of properties and did not need to take money from others. If he wanted a court battle, then he could have used his own money. This is un-Christian in the extreme.

Folau’s supporters cite the admonishment by Jesus that his followers should go to all corners of the world and preach the gospel. That is an admonishment given to pastors and leaders of the flock. The rest of us are told to influence others by the way we live.

Folau set himself up as one who acts according to the Christian faith, and thus left himself open to be judged by that same creed. If all his actions had been in keeping with the faith, then one would have no quarrel with him. But when one chooses Christianity when it is convenient, and goes the way of the world when it suits, then there is only one word to describe it: hypocrisy.

Hypocrites were one category of people who attracted a huge amount of criticism from Jesus Christ during his earthly sojourn. Israel Folau should muse on this.

Worse Than FailureCodeSOD: Classic WTF: Uncovering Nothing

As our little summer break continues, we have a tale about Remi (no relation) and a missing stored procedure. Original --Remy

Remi works at one of his country's largest Internet Service Providers, and has the fortune to be on an elite team that focuses on agile development. Or misfortune, depending on how you look at it: at his company, "agile development" actually means "we need that in two weeks".

One of Remi's first assignments was to fix an "emergency" on one of the ATM Addressing systems. Apparently, the application was coming up with incorrect routing data. After a solid day-and-a-half of digging through Visual Basic code that called SQL Server stored procedures which called VB-based COM objects which called more stored procs, Remi found a weird table ("Cal_ATM") that was referenced from an externally-linked database, and the data in that table was completely out of date.

Wondering how that table was populated, Remi searched high-and-low through the VB and stored proc code, but found nothing. Thinking it might be directly updated by some batch job, he looked through the job lists, and that's where he found it: a step buried in a job that called "EXECUTE PUpd_Cal_ATM". As it turned out, the step had been erroring-out for nearly a month-and-a-half with "Could not find stored procedure PUpd_Cal_ATM". No one knew about it, of course, because the failure notification was also not working.
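Failures like this are easy to surface after the fact. A query along these lines (a sketch, assuming SQL Server and its standard msdb job-history tables; the table and column names are the documented ones, not anything from Remi's system) would have flagged the broken step weeks earlier:

```sql
-- Sketch (assumes SQL Server): list recently failed SQL Agent job steps,
-- the kind of silent failure that hid "Could not find stored procedure".
SELECT j.name       AS job_name,
       h.step_name,
       h.message,
       h.run_date
FROM   msdb.dbo.sysjobhistory AS h
       JOIN msdb.dbo.sysjobs  AS j ON j.job_id = h.job_id
WHERE  h.run_status = 0   -- 0 = failed
  AND  h.step_id    > 0   -- skip the job-outcome summary row
ORDER  BY h.run_date DESC, h.run_time DESC;
```

Of course, a query only helps if someone actually runs it, or if the failure notifications that should have run it were themselves working.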

The timing of the failures seemed to coincide with a recent "emergency" clean-up effort that another developer on the agile development team performed, and Remi was worried that this procedure was accidentally deleted during the clean-up. He searched for a backup that was made before the clean-up project, and was able to restore the missing procedure. Its entire code was as follows.

CREATE PROCEDURE [dbo].[PUpd_Cal_ATM]
AS
-- +---------------------------------------------------------------------------+
-- | Description : Feeds the tables ATM                                        |
-- | Entry : none                                                              |
-- | Output : Return (0 = OK)                                                  |
-- +--------+------+-------+---------------------------------------------------+
-- | Date   |Author|Version|Description                                        |
-- +--------+------+-------+---------------------------------------------------+
-- |06/12/07| XXX  | 1.0   | Starting version                                  |
-- +--------+------+-------+---------------------------------------------------+

-- For now, this stored procedure is empty but it may be 
-- called by UpdateDatabase
-- ==========================================================================
-- return code
-- ==========================================================================

It made little sense; this procedure had clearly not been feeding the database for three years, yet the data was only a week old. It took him three more full days to figure out the source: a hotfix to Microsoft Office caused a certain type of macro code in Excel to be considered insecure, and a certain type of macro was being used to feed the ATM Addressing system from Excel.

Oh, the joys of agile development.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

TEDOne great way to celebrate Pride Month: Document LGBTQ history before it’s lost

June 28, 2019, marks the 50th anniversary of the Stonewall uprising, when LGBTQ people and allies fought back in a six-night riot against a police raid on The Stonewall Inn, a gay bar in New York City. Stonewall was not the first time that LGBTQ people took a stand against oppression or police harassment, but it was a major turning point in the global fight for queer liberation and civil rights. 

As a twenty-something gay person living in New York — where, as the sign in Stonewall claims, “Pride began” — I’ve been thinking about how to properly mark the occasion and what exactly Pride celebrations mean to me. What I know for sure, especially after my conversation with Dave Isay, StoryCorps founder and 2015 TED Prize winner, is that one of the most important things we can do this Pride Month is listen to the older LGBTQ people in our lives and document their stories. 

“It has been 50 years since Stonewall, and the people who were living that history are now in their 70s, 80s and 90s,” Isay told me. “Recording interviews takes emotional energy; it takes time. We’re asking people to record these LGBTQ stories now as an act of public service, because the totality of these stories is American history. We must collect them before they are lost forever.”

Since 2003, StoryCorps has invited people to interview each other and record their exchanges. The organization’s mission is “to preserve and share humanity’s stories in order to build connections between people and create a more just and compassionate world.” These stories are then shared by StoryCorps and preserved in the Library of Congress for future generations to learn from. 

Stonewall OutLoud is a new initiative from StoryCorps that is focused on collecting LGBTQ stories from Americans in order to capture this important but sometimes overlooked aspect of our country’s history. These stories can help inform the next generation of LGBTQ-affirming relatives, mentors, activists and community leaders. 

Historically, Pride has been a time to be loud. It’s a time for queer people to be visible and for all people to advocate for equality and justice. As we commemorate this landmark anniversary of Stonewall, it’s also become clear to me that it’s a time to listen to LGBTQ experiences from the past as well. That way, we’ll all know exactly why we’re shouting in the streets and what kind of future we’re marching for. 

Here’s how to get involved: 

  • Make a pledge to record an LGBTQ story.
  • Record it using the StoryCorps App, which provides start-to-finish tools for the process.
  • Use the app to listen to Stonewall OutLoud stories.
  • Spread the word about Stonewall OutLoud on your networks using the hashtag #StonewallOutLoud.

CryptogramFlorida City Pays Ransomware

Learning from the huge expenses Atlanta and Baltimore incurred by refusing to pay ransomware demands, the Florida city of Riviera Beach decided to pay up. The ransom amount of almost $600,000 is a lot, but much cheaper than the alternative.

Cory DoctorowJoin me today at 12PM Pacific for a New York Times/Periscope livestream about my “op-ed from the future”

Yesterday, the New York Times published my “op-ed from the future,” an essay entitled “I Shouldn’t Have to Publish This in The New York Times,” which tried to imagine what would happen to public discourse if the Big Tech platforms were forced to use algorithms to police their users’ speech in order to fight extremism, trolling, copyright infringement, harassment, and so on.

In just a couple hours — 12PM Pacific, 3PM Eastern — I’ll be doing a Periscope livestream for the Times to discuss the piece and answer questions. I hope you can make it! It’ll be pinned to the top of my Twitter profile.

Krebs on SecurityTracing the Supply Chain Attack on Android

Earlier this month, Google disclosed that a supply chain attack by one of its vendors resulted in malicious software being pre-installed on millions of new budget Android devices. Google didn’t exactly name those responsible, but said it believes the offending vendor uses the nicknames “Yehuo” or “Blazefire.” What follows is a deep dive into the identity of that Chinese vendor, which appears to have a long and storied history of pushing the envelope on mobile malware.

“Yehuo” (野火) is Mandarin for “wildfire,” so one might be forgiven for concluding that Google was perhaps using a different dictionary than most Mandarin speakers. But Google was probably just being coy: The vendor in question appears to have used both “blazefire” and “wildfire” in two of many corporate names adopted for the same entity.

An online search for the term “yehuo” reveals an account on the Chinese Software Developer Network which uses that same nickname and references the domain blazefire[.]com. More searching points to a Yehuo user on gamerbbs[.]cn who advertises a mobile game called “Xiaojun Junji,” and says the game is available at blazefire[.]com.

Research on blazefire[.]com via Domaintools.com shows the domain was assigned in 2015 to a company called “Shanghai Blazefire Network Technology Co. Ltd.” just a short time after it was registered by someone using the email address “tosaka1027@gmail.com“.

The Shanghai Blazefire Network is part of a group of similarly-named Chinese entities in the “mobile phone pre-installation business and in marketing for advertisers’ products to install services through mobile phone installed software.”

“At present, pre-installed partners cover the entire mobile phone industry chain, including mobile phone chip manufacturers, mobile phone design companies, mobile phone brand manufacturers, mobile phone agents, mobile terminal stores and major e-commerce platforms,” reads a descriptive blurb about the company.

A historic records search at Domaintools on that tosaka1027@gmail.com address says it was used to register 24 Internet domain names, including at least seven that have been conclusively tied to the spread of powerful Android mobile malware.

Two of those domains registered to tosaka1027@gmail.com — elsyzsmc[.]com and rurimeter[.]com — were implicated in propagating the Triada malware. Triada is the very same malicious software Google said was found pre-installed on many of its devices and being used to install spam apps that display ads.

In July 2017, Russian antivirus vendor Dr.Web published research showing that Triada had been installed by default on at least four low-cost Android models. In 2018, Dr.Web expanded its research when it discovered the Triada malware installed on 40 different models of Android devices.

At least another five of the domains registered to tosaka1027@gmail.com — 99youx[.]com, buydudu[.]com, kelisrim[.]com, opnixi[.]com and sonyba[.]com — were seen as early as 2016 as distribution points for the Hummer Trojan, a potent strain of Android malware often bundled with games that completely compromises the infected device.

A records search at Domaintools for “Shanghai Blazefire Network Technology Co” returns 11 domains, including blazefire[.]net, which is registered to yehuo@blazefire.net. For the remainder of this post, we’ll focus on the bolded domain names below:

Domain Name          Create Date   Registrar
2333youxi[.]com      2016-02-18    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
52gzone[.]com        2012-11-26    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
91gzonep[.]com       2012-11-26    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
blazefire[.]com      2000-08-24    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
blazefire[.]net      2010-11-22    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
hsuheng[.]com        2015-03-09    GODADDY.COM, LLC
jyhxz[.]net          2013-07-02    —
longmen[.]com        1998-06-19    GODADDY.COM, LLC
longmenbiaoju[.]com  2012-12-09    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
oppayment[.]com      2013-10-09    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD
tongjue[.]net        2014-01-20    ALIBABA CLOUD COMPUTING (BEIJING) CO., LTD

Following the breadcrumbs from some of the above domains we can see that “Blazefire” is a sprawling entity with multiple business units and names. For example, 2333youxi[.]com is the domain name for Shanghai Qianyou Network Technology Co., Ltd., a firm that says it is “dedicated to the development and operation of Internet mobile games.”

Like the domain blazefire[.]com, 2333youxi[.]com also was initially registered to tosaka1027@gmail.com and soon changed to Shanghai Blazefire as the owner.

The offices of Shanghai Qianyou Network — at Room 344, 6th Floor, Building 10, No. 196, Ouyang Rd, Shanghai, China — are just down the hall from Shanghai Wildfire Network Technology Co., Ltd., reportedly at Room 35, 6th Floor, Building 10, No. 196, Ouyang Rd, Shanghai.

The domain tongjue[.]net is the Web site for Shanghai Bronze Network Technology Co., Ltd., which appears to be either another name for or a sister company to Shanghai Tongjue Network Technology Co., Ltd. According to its marketing literature, Shanghai Tongjue is situated one door down from the above-mentioned Shanghai Qianyou Network — at Room 36, 6th Floor, Building 10, No. 196, Ouyang Road.

“It has developed into a large domestic wireless Internet network application,” reads a help wanted ad published by Tongjue in 2016.  “The company is mainly engaged in mobile phone pre-installation business.”

That particular help wanted ad was for a “client software development” role at Tongjue. The ad said the ideal candidate for the position would have experience with “Windows Trojan, Virus or Game Plug-ins.” Among the responsibilities for this position were:

-Crack the restrictions imposed by the manufacturer on the mobile phone.
-Research and master the android [operating] system
-Reverse the root software to study the root of the android mobile phone
-Research the anti-brushing and provide anti-reverse brushing scheme

WHO IS BLAZEFIRE/YEHUO?

Many of the domains mentioned above have somewhere in their registration history the name “Hsu Heng” and the email address yehuo@blazefire.net. Based on an analysis via cyber intelligence firm 4iq.com of passwords and email addresses exposed in multiple data breaches in years past, the head of Blazefire goes by the nickname “Hagen” or “Haagen” and uses the email “chuda@blazefire.net“.

Searching on the phrase “chuda” in Mandarin turns up a 2016 story at the Chinese gaming industry news site Youxiguancha.com that features numerous photos of Blazefire employees and their offices. That story also refers to the co-founder and CEO of Blazefire variously as “Chuda” and “Chu da”.

“Wildfire CEO Chuda is a tear-resistant boss with both sports (Barcelona hardcore fans) and literary genre (playing a good guitar),” the story gushes. “With the performance of leading the wildfire team and the wildfire product line in 2015, Chu has won the top ten new CEO awards from the first Black Rock Award of the Hardcore Alliance.”

Interestingly, the registrant name “Chu Da” shows up in the historical domain name records for longmen[.]com, perhaps Shanghai Wildfire’s oldest and most successful mobile game ever. That record, from April 2015, lists Chu Da’s email address as yehuo@blazefire.com.

The CEO of Wildfire/Blazefire, referred to only as “Chuda” or “Hagen.”

It’s not clear if Chuda is all or part of the CEO’s real name, or just a nickname; the vice president of the company lists their name simply as “Hua Wei,” which could be a real name or a pseudonymous nod to the embattled Chinese telecom giant by the same name.

According to this cached document from Chinese business lookup service TianYanCha.com, Chuda also is a senior executive at six other companies.

Google declined to elaborate on its blog post. Shanghai Wildfire did not respond to multiple requests for comment.

It’s perhaps worth noting that while Google may be wise to what’s cooking over at Shanghai Blazefire/Wildfire Network Technology Co., Apple still has several of the company’s apps available for download from the iTunes store, as well as others from Shanghai Qianyou Network Technology.

CryptogramiPhone Apps Surreptitiously Communicated with Unknown Servers

Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you're probably not otherwise using your phone:

iPhone apps I discovered tracking me by passing information to third parties, just while I was asleep, include Microsoft OneDrive, Intuit's Mint, Nike, Spotify, The Washington Post and IBM's the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn't only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.