Planet Russell


Cryptogram: On Chinese "Spy Trains"

The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world's largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.

Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about "spy trains," and the possibility that the train cars might surreptitiously monitor their passengers' faces, movements, conversations or phone calls.

This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don't trust. That's why there is so much worry about Chinese-made equipment for the new 5G wireless networks.

It's also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China's technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.

The reason these threats are so real is that it's not difficult to hide surveillance or control infrastructure in computer components, and if they're not turned on, they're very difficult to find.

Like every other piece of modern machinery, modern train cars are filled with computers, and while it's certainly possible to produce a subway car with enough surveillance apparatus to turn it into a "spy train," in practice it doesn't make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.

While it's unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for their own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That's an easier, and more fruitful, attack path.

We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok's parent company has told the Washington Post that the app doesn't send American users' info back to Beijing, and that the Chinese government does not influence the app's use in the United States.)

Even so, these examples illustrate an important point: there's no escaping the technology of inevitable surveillance. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.

Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization's new headquarters as a foreign aid gift, reportedly secretly configuring the network to send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.

If there's any lesson from all of this, it's that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.

China dominates the subway car manufacturing industry because of its low prices -- the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they're being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.

Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.

We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But we need to make these decisions to protect ourselves deliberately and rationally, recognizing both the risks and the costs. And while I'm worried about our 5G infrastructure built using Chinese hardware, I'm not worried about our subway cars.

This essay originally appeared on CNN.com.

EDITED TO ADD: I had a lot of trouble with CNN's legal department with this essay. They were very reluctant to call out the US and its allies for similar behavior, and spent a lot more time adding caveats to statements that I didn't think needed them. They wouldn't let me link to this Intercept article talking about US, French, and German infiltration of supply chains, or even the NSA document from the Snowden archives that proved the statements.

Planet Debian: Thomas Lange: Read-only nfsroot with NFS v4 and overlayfs

The Fully Automatic Installation (FAI) has used a read-only nfsroot since its very beginning. The same technique is used in diskless client environments and in LTSP (the Linux Terminal Server Project).

During a network installation the clients run as diskless clients, so the installer has full access to the local hard disk, which is not otherwise in use. But some files on the read-only nfsroot need to be writable. In the past we created symlinks to a ram disk. Later we used aufs (another union fs), a kernel module for doing union mounts of several file systems. Putting a ram disk on top of the read-only nfsroot with aufs makes the nfsroot writable. But aufs is not available in 4.x kernels any more; it was replaced by overlayfs.

The initrd of FAI mounts the nfsroot read-only and then puts a tmpfs ram disk on top of it using overlayfs. The result is a new merged file system which is writable. This has worked nicely for several years with NFSv3. But with NFSv4 we could read from a file, while writing always reported:

openat(AT_FDCWD,....) = -1 EOPNOTSUPP (Operation not supported)

After some days of debugging overlayfs and NFS v4, I found that the cause is a complicated mixture of NFS ACL support (POSIX and NFSv4 ACLs) and what overlayfs expects from the file systems with respect to certain xattrs. Overlayfs uses calls like

setxattr(work/work, "trusted.overlay.opaque", "0", 1, 0x0) = 0

and writing to a file used

getxattr("/b/lower/etc/test1", "system.nfs4_acl", ....) = 80

without any errors. When I talked to some of the overlayfs developers, they asked me to disable ACLs for the exported NFS file system. There is a noacl option listed in nfs(5), but it applies to NFS versions 2 and 3 only, not to NFS v4. You cannot disable ACLs on an NFS v4 mount.

In the end the solution was to disable ACLs on the whole file system that the NFS server exports to the clients. On an ext4 file system this is done on the NFS server with:

# mount -oremount,noacl $EXPORTED_FS

After that, overlayfs detects that ACLs are not supported on the NFS mount and behaves as expected, allowing writes to files.
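If the export should stay ACL-free across reboots, the noacl option can also go into the server's /etc/fstab. A minimal sketch, assuming the exported file system holds /srv/fai/nfsroot and lives on a hypothetical /dev/sdb1:

# /etc/fstab on the NFS server (device name is hypothetical)
/dev/sdb1  /srv/fai/nfsroot  ext4  defaults,noacl  0  2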

You will need to use dracut instead of initramfs-tools for creating the initrd. The latter uses busybox or klibc tools inside the initrd, and neither supports NFS v4 mounts (https://bugs.debian.org/409271).

Dracut uses the normal libc-based executables. The Debian package of dracut supports the kernel command line option rootovl. This is an example of the kernel command line options:

rootovl ip=dhcp root=11.22.33.44:/srv/fai/nfsroot

This mounts a read-only nfsroot and puts a tmpfs on top to make it writable.
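For illustration, this is roughly what the initrd ends up doing, written out as plain shell commands; the mount points here are made up, and dracut does the real work:

# mount the NFS v4 root read-only
mount -t nfs4 -o ro 11.22.33.44:/srv/fai/nfsroot /lower
# a tmpfs provides the writable upper layer and the overlayfs work directory
mount -t tmpfs tmpfs /rw
mkdir -p /rw/upper /rw/work
# merge both layers into a single writable root file system
mount -t overlay overlay -o lowerdir=/lower,upperdir=/rw/upper,workdir=/rw/work /newroot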

NFSv4 nfsroot

Worse Than Failure: CodeSOD: Trim Off a Few Miles

I don’t know the length of Russell F’s commute. Presumably, the distance is measured in miles. Miles and miles. I say that, because of this block, which is written… with care.

  string Miles_w_Care = InvItem.MilesGuaranteeFlag == true && InvItem.Miles_w_Care.HasValue ? (((int)InvItem.Miles_w_Care / 1000).ToString().Length > 2 ? ((int)InvItem.Miles_w_Care / 1000).ToString().Trim().Substring(0, 2) : ((int)InvItem.Miles_w_Care / 1000).ToString().Trim()) : "  ";
  string Miles_wo_Care = InvItem.MilesGuaranteeFlag == true && InvItem.Miles_wo_Care.HasValue ? (((int)InvItem.Miles_wo_Care / 1000).ToString().Length > 2 ? ((int)InvItem.Miles_wo_Care / 1000).ToString().Trim().Substring(0, 2) : ((int)InvItem.Miles_wo_Care / 1000).ToString().Trim()) : "  ";

Two lines, so many nested ternaries. Need to round off to the nearest thousand? Just divide and then ToString the result, selecting the substring as needed. Be sure to Trim the string which couldn’t possibly contain whitespace, you never know.

Ironically, the only expression in this block which isn’t a WTF is InvItem.MilesGuaranteeFlag == true, because while we’re comparing against true, MilesGuaranteeFlag is a Nullable<bool>, so this confirms that it has a value and that the value is true.
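For contrast, the same logic can be written readably. A minimal sketch, assuming the intent really is "mileage in thousands, truncated to at most two digits, blanks when unavailable" (the helper name and the int? parameter type are my inventions):

// Hypothetical helper; truncates the thousands-of-miles value to two digits.
static string FormatThousands(bool? guaranteeFlag, int? miles)
{
    if (guaranteeFlag != true || !miles.HasValue)
        return "  ";
    string thousands = (miles.Value / 1000).ToString();
    return thousands.Length > 2 ? thousands.Substring(0, 2) : thousands;
}

// Usage:
string Miles_w_Care = FormatThousands(InvItem.MilesGuaranteeFlag, (int?)InvItem.Miles_w_Care);
string Miles_wo_Care = FormatThousands(InvItem.MilesGuaranteeFlag, (int?)InvItem.Miles_wo_Care);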

So many miles.

And I would write five hundred lines
and I would write five hundred more
just to be the man who wrote a thousand lines
Uncaught Exception at line 24


Krebs on Security: Interview With the Guy Who Tried to Frame Me for Heroin Possession

In April 2013, I received via U.S. mail more than a gram of pure heroin as part of a scheme to get me arrested for drug possession. But the plan failed and the Ukrainian mastermind behind it soon after was imprisoned for unrelated cybercrime offenses. That individual recently gave his first interview since finishing his jail time here in the states, and he’s shared some select (if often abrasive and coarse) details on how he got into cybercrime and why. Below are a few translated excerpts.

When I first encountered now-31-year-old Sergei “Fly,” “Flycracker,” “MUXACC” Vovnenko in 2013, he was the administrator of the fraud forum “thecc[dot]bz,” an exclusive and closely guarded Russian language board dedicated to financial fraud and identity theft.

Many of the heavy-hitters from other fraud forums had a presence on Fly’s forum, and collectively the group financed and ran a soup-to-nuts network for turning hacked credit card data into mounds of cash.

Vovnenko first came onto my radar after his alter ego Fly published a blog entry that led with an image of my bloodied, severed head and included my credit report, copies of identification documents, pictures of our front door, information about family members, and so on. Fly had invited all of his cybercriminal friends to ruin my financial identity and that of my family.

Somewhat curious about what might have precipitated this outburst, I was secretly given access to Fly’s cybercrime forum and learned he’d freshly hatched a plot to have heroin sent to my home. The plan was to have one of his forum lackeys spoof a call from one of my neighbors to the police when the drugs arrived, complaining that drugs were being delivered to our house and being sold out of our home by Yours Truly.

Thankfully, someone on Fly’s forum also posted a link to the tracking number for the drug shipment. Before the smack arrived, I had a police officer come out and take a report. After the heroin showed up, I gave the drugs to the local police and wrote about the experience in Mail From the Velvet Cybercrime Underground.

Angry that I’d foiled the plan to have me arrested for being a smack dealer, Fly or someone on his forum had a local florist send a gaudy floral arrangement in the shape of a giant cross to my home, complete with a menacing message that addressed my wife and was signed, “Velvet Crabs.”

The floral arrangement that Fly or one of his forum lackeys had delivered to my home in Virginia.

Vovnenko was arrested in Italy in the summer of 2014 on identity theft and botnet charges, and spent some 15 months in arguably Italy’s worst prison contesting his extradition to the United States. Those efforts failed, and he soon pleaded guilty to aggravated identity theft and wire fraud, and spent several years bouncing around America’s prison system.

Although Vovnenko sent me a total of three letters from prison in Naples (a hand-written apology letter and two friendly postcards), he never responded to my requests to meet him following his trial and conviction on cybercrime charges in the United States. I suppose that is fair: To my everlasting dismay, I never responded to his Italian dispatches (the first I asked to be professionally analyzed and translated before I would touch it).

Seasons greetings from my pen pal, Flycracker.

After serving his 41-month sentence in the U.S., Vovnenko was deported, although it's unclear where he currently resides (the interview excerpted here suggests he's back in Italy, but Fly doesn't exactly confirm that).

In an interview published on the Russian-language security blog Krober[.]biz, Vovnenko said he began stealing early in life, and by 13 was already getting picked up for petty robberies and thefts.

A translated English version of the interview was produced and shared with KrebsOnSecurity by analysts at New York City-based cyber intelligence firm Flashpoint.

Sometime in the mid-aughts, Vovnenko settled with his mother in Naples, Italy, but he had trouble keeping a job for more than a few days. Until a chance encounter led to a front job at a den of thieves.

“When I came to my Mom in Naples, I could not find a permanent job. Having settled down somewhere at a new job, I would either get kicked out or leave in the first two days. I somehow didn’t succeed with employment until I was invited to work in a wine shop in the historical center of Naples, where I kinda had to wipe the dust from the bottles. But in fact, the wine shop turned out to be a real den and a sales outlet of hashish and crack. So my job was to be on the lookout and whenever the cops showed up, take a bag of goods and leave under the guise of a tourist.”

Cocaine and hash were plentiful at his employer’s place of work, and Vovnenko said he availed himself of both abundantly. After he’d saved enough to buy a computer, Fly started teaching himself how to write programs and hack stuff. He quickly became enthralled with the romanticized side of cybercrime — the allure of instant cash — and decided this was his true vocation.

“After watching movies and reading books about hackers, I really wanted to become a sort of virtual bandit who robs banks without leaving home,” Vovnenko recalled. “Once, out of curiosity, I wrote an SMS bomber that used a registration form on a dating site, bypassing the captcha through some kind of rookie mistake in the shitty code. The bomber would launch from the terminal and was written in Perl, and upon completion of its work, it gave out my phone number and email. I shared the bomber somewhere on one of my many awkward sites.”

“And a couple of weeks later they called me. Nah, not the cops, but some guy who comes from Sri Lanka who called himself Enrico. He told me that he used my program and earned a lot of money, and now he wants to share some of it with me and hire me. By a happy coincidence, the guy also lived in Naples.”

“When we met in person, he told me that he used my bomber to fuck with a telephone company called Wind. This telephone company had such a bonus service: for each incoming SMS you received two cents on the balance. Well, of course, this guy bought a bunch of SIM cards and began to bomb them, getting credits and loading them into his paid lines, similar to how phone sex works.”

But his job soon interfered with his drug habit, and he was let go.

“At the meeting, Enrico gave me 2K euros, and this was the first money I’ve earned, as it is fashionable to say these days, on ‘cybercrime’. I left my previous job and began to work closely with Enrico. But always stoned out of my mind, I didn’t do a good job and struggled with drug addiction at that time. I was addicted to cocaine, as a result, I was pulling a lot more money out of Enrico than my work brought him. And he kicked me out.”

After striking out on his own, Vovnenko says he began getting into carding big time, and was introduced to several other big players on the scene. One of those was a cigarette smuggler who used the nickname Ponchik (“Doughnut”).

I wonder if this is the same Ponchik who was arrested in 2013 as being the mastermind behind the Blackhole exploit kit, a crimeware package that fueled an overnight explosion in malware attacks via Web browser vulnerabilities.

In any case, Vovnenko had settled on some schemes that were generating reliably large amounts of cash.

“I’ve never stood still and was not focusing on carding only, with the money I earned, I started buying dumps and testing them at friends’ stores,” Vovnenko said. “Mules, to whom I signed the hotlines, were also signed up for cashing out the loads, giving them a mere 10 percent for their work. Things seemed to be going well.”

FAN MAIL

There is a large chronological gap in Vovnenko’s account of his cybercrime life story from that point on until the time he and his forum friends started sending heroin, large bags of feces and other nasty stuff to our Northern Virginia home in 2013.

Vovnenko claims he never sent anything and that it was all done by members of his forum.

-Tell me about the packages to Krebs.
“That ain’t me. Suitcase filled with sketchy money, dildoes, and a bouquet of coffin wildflowers. They sent all sorts of crazy shit. Forty or so guys would send. When I was already doing time, one of the dudes sent it. By the way, Krebs wanted to see me. But the lawyer suggested this was a bad idea. Maybe he wanted to look into my eyes.”

In one part of the interview, Fly is asked about but only briefly touches on how he was caught. I wanted to add some context here because this part of the story is richly ironic, and perhaps a tad cathartic.

Around the same time Fly was taking bitcoin donations for a fund to purchase heroin on my behalf, he was also engaged to be married to a nice young woman. But Fly apparently did not fully trust his bride-to-be, so he had malware installed on her system that forwarded him copies of all email that she sent and received.

Fly/Flycracker discussing the purchase of a gram of heroin from Silk Road seller "10toes."

But Fly would make at least two big operational security mistakes in this spying effort: First, he had his fiancée’s messages forwarded to an email account he’d used for plenty of cybercriminal stuff related to his various “Fly” identities.

Mistake number two was the password for his email account was the same as one of his cybercrime forum admin accounts. And unbeknownst to him at the time, that forum was hacked, with all email addresses and hashed passwords exposed.

Soon enough, investigators were reading Fly’s email, including the messages forwarded from his wife’s account that had details about their upcoming nuptials, such as shipping addresses for their wedding-related items and the full name of Fly’s fiancée. It didn’t take long to zero in on Fly’s location in Naples.

While it may sound unlikely that a guy so enmeshed in the cybercrime space could make such rookie security mistakes, I have found that a great many cybercriminals actually have worse operational security than the average Internet user.

I suspect this may be because the nature of their activities requires them to create vast numbers of single- or brief-use accounts, and in general they tend to re-use credentials across multiple sites, or else pick very poor passwords — even for critical resources.

In addition to elaborating on his hacking career, Fly talks a great deal about his time in various prisons (including their culinary habits), and an apparent longing or at least lingering fondness for the whole carding scene in general.

Towards the end, Fly says he's considering going back to school, and that he may even take up information security as a field of study. I wish him luck in that endeavor, whatever it turns out to be, as long as he can also avoid stealing from people.

I don’t know what I would have written many years ago to Fly had I not been already so traumatized by receiving postal mail from him. Perhaps it would go something like this:

“Dear Fly: Thank you for your letters. I am very sorry to hear about the delays in your travel plans. I wish you luck in all your endeavors — and I sincerely wish the next hopeful opportunity you alight upon does not turn out to be a pile of shit.”

The entire translated interview is here (PDF). Fair warning: Many readers may find some of the language and topics discussed in the interview disturbing or offensive.


Planet Debian: Shirish Agarwal: Life, Liberty and Kashmir

I was going to write about the history of banking today, but because the blockade is still continuing in Kashmir, I am forced to write my opinions on it and clear up at least some ideas and myths various people have about Kashmir. Before I start though, I hope the Goa Debian Utsav went well; I haven't seen any reports yet. Frankly, I was in two minds about whether I should apply for the Debutsav in Goa or not. While there is a possibility that I could have applied and perhaps even got travel sponsorship, I was unsure what to tell the students. With recovery of the economy in India at least 6 quarters away, if not more, and with salaries of most professionals stagnant or lowered and even retention happening in Pune, Bangalore and other places, it would have been difficult for me to tell students how to look for careers in I.T.

Anyways, this will be a long one. I would like to start with a lawsuit filed in Kerala and the judgement given in it, which at least in my view was a progressive decision. The case I am citing is 'Right To Access Internet Is Part Of Right To Privacy And Right To Education', decided by the Kerala High Court recently. The judgement of the case is at https://www.livelaw.in/pdf_upload/pdf_upload-364655.pdf, which I also draw on below.

So let us try to figure out what the suit/case was all about and how it involves the larger question of communication blockades and other things in Kashmir. The case involves a woman student of 18 years of age, Faheema Shirin (the petitioner), who came to Kerala for higher studies (B.Ed.) at an institute called Narayanguru College, located in Kozhikhode District. Incidentally, I have been fortunate to visit Kerala and Kozhikhode District, and they are beautiful places, but we can have that conversation some other day. Now apparently, she was expelled from the college hostel for using her mobile phone during study time. The college is affiliated to the University of Calicut. From statements by the hostel matron, the petitioner and others, it became clear that inmates of the hostel were not allowed to use mobile phones from 10 p.m. to 6 a.m., i.e. 2200 hrs. to 0600 hrs. Apparently, this rule was arbitrarily changed to 1800 hrs. to 2000 hrs. The petitioner's house is 150 km away. She said it was not possible to follow the rules because of the subjects she was studying, and because she needed to be reachable from home at any time, whether her father or relatives felt like calling her or she needed any help. She also alleged discrimination, as these rules were made only for the girls' hostel and not for the boys' hostel. I have also seen and felt the same, but as shared, that's for another day altogether.

The petitioner invoked the Convention on the Elimination of All Forms of Discrimination against Women, 1979, the Beijing Declaration and the Universal Declaration of Human Rights, to which the GOI is a signatory and hence has to abide by their rules. She further contended that her education depended on her using digital technology with access to the web, as required by her textbooks. She needed to scan the QR codes in various places in her textbooks and use the links given therein to see videos, animations etc. on a digital platform called swayam. Incidentally, it seems swayam runs on closed-source software, as shared by SFLC.in on their website. Now if it is closed, commercial software, then most probably the content can only be viewed via streaming rather than downloaded and watched offline, as downloading would attract provisions of the IT Act and perhaps constitute piracy. While this point was not argued, it seemed pertinent to point out, as a few people on social media have asked about it. In several such cases it is either impossible, or you have to be an expert, to manipulate and download such data (as Snowden did), but that's again a story for another day. Interestingly, the father in this case was also in favor of the girl using a mobile phone for whatever purpose, as he trusts her implicitly and she is adult enough to make her own life choices.

Thankfully, the petitioner had the presence of mind throughout to do all her correspondence through letters instead of orally, so she had documentary evidence to back up all her claims. The State Govt. of Kerala has been at the forefront of digital technology for a long while, and many of my friends and I have been both witnesses and small contributors, in whichever way, to Kerala becoming an IT hub. They still need to do a lot more, but that again is a story for another day. There was a lot of back and forth between her, her father, the hostel authorities and the college, but they were unable to resolve the issues amicably. Her grounds for the fight were –

a. She is an adult and of rational mind, so she can make decisions on her own.
b. She has a right to privacy (as laid down by the Honorable Supreme Court in its 2017 landmark judgement).
c. She needs the mobile and the laptop for studying, as her studies demand use of the Internet.
d. She also relied on the budget speech made by the Minister of Finance and on the State Government's commitment to making internet access available to all citizens and recognizing the right to Internet as a human right.
e. The violation of her right to property under Article 300A.

In order to further bolster her case, through her lawyers she cited further judgements and studies which show how women are disadvantaged in Internet access; in particular she cited a UNESCO study which says the same.

The judge, Honorable Justice P.V. Asha, guided herself by the arguments and counter-arguments of both parties, and also delved into the Calicut University First Ordinances, under which the University, the college and the hostel operate, to see how things fare there. She also asked the respondents whether, by using the Internet, the petitioner or any other student in the hostel had ever caused disturbance to any of the other inmates, to which the reply was negative. The judge further determined that if misuse of a mobile phone or laptop is going to happen, it can happen any time, anywhere, and that you cannot and should not control adult behavior, especially when it collides with the dignity and freedom of an adult. The learned counsel for the petitioner also cited resolution 23/2 of the UN General Assembly, held on 24th June 2013, which talks of freedom of expression and opinion for women's empowerment and to which India is a signatory. There is also resolution 20/8 of 5th July 2012, which underscores the same point. Both portions of the resolutions can be found on page 18 of the judgement. The judge also cited a few other judgements pointed out by the learned counsel for the petitioner, such as the Vishaka judgement (1997) and the Beijing Statement, and several other cases and judgements which showed how women are discriminated against in society. In the end she set aside the expulsion, citing various judgements and her rationale for the same, asked the matron to take the student back, and asked the student not to humiliate the teacher or warden; the student is allowed to use her phone in any way she sees fit as long as she doesn't disturb other students.

Observations: this opens up several questions which are part of society's issues even today and probably will be for some time.

a. I have been part of quite a few workshops where, while I was supposed to share about GNU/Linux, more often than not I ended up teaching basic web access rather than advanced technologies. In these I found women to take more time to understand and use the concepts than men. Whether this is due purely to access issues or to larger societal reasons (the alleged hidden misuse of the web) I just don't know. While I do wish we could do more, I don't have any solutions.

b. As correctly pointed out by Honorable Justice Asha, expelling a woman pursuing a B.Ed. would harm the young woman's career. I would opine and go one step further: wouldn't it also shortchange her proteges, her students, by denying them a better teacher, one able to guide her students to the best of her ability? As we all know, rightly or wrongly, almost all information is available on the net. The role of the teacher or guide is not so much to present information, but rather to show how to inquire into and interpret information in different ways.

Kashmir

In light of the above judgement, would not the same principles apply to Kashmir? There are two points shared by various people who are in favor of the lockdown. The first is national security and national interest, and the second is the Kashmiri Pandits. Let us take them one by one –

a. National Interest or/and National Security – I find this reason porous on many grounds. This Govt. is run by one of the richest political parties India has ever had. Without divulging further, there is a huge range of hardware and software available for the Government to surveil with. With AFSA in place and all sorts of off-the-shelf technologies available to surveil residents, that argument looks weak. Further, the Minister's statement suggests that the issue is not the security of the state but something else. Of course the majoritarian view is that they deserve it because they are Muslims. If this is not hate, I dunno what is. A person on Twitter did a social experiment where a daughter and a mother had the same conflict. The daughter's view was that the blockade is not right, the mother's view being the opposite. The daughter disallowed the mother any contact with her, her husband and her daughter for 2 weeks, and the mother was in tears. Then how can you think of people being blocked for 2 months?

Another variation of the argument is that militants will come and kill. I find it hard to digest that even with half a million soldiers in the valley they still feel militants can do something that they cannot stop. There has been news now that the Taliban are involved. If this is true, then remember they have troubled the U.S. too; if one of the most powerful armies on earth can be stalemated for what, 19 years, are we going to keep Kashmiris in lockdown for 19 years? In fact the prejudicial face can be seen even more at https://www.youtube.com/watch?v=kXWZnnD6JFY-

b. Kashmiri Pandits – There is no doubt that there was a mass exodus of Kashmiri Hindus from the valley. Nobody disputes that. But just like the process followed in the NRC, rightly or wrongly, couldn't the Kashmiri Pandits be sent back home? I would argue this is the best time. You have a huge contingent of forces in the valley; you can start the process, get the documents, and get them back into the valley. Otherwise this will continue to fester, something like Palestine and Israel, which has remained an issue for both Israelis and Palestinians with no end in sight. The idea that Pakistan will not harass or do something in Kashmir is a fool's paradise; they have been doing it since the 90s. But keeping a huge population blocked from communicating is nothing but harassment, and hate will never get you anywhere. While this is greyer than I am making it out to be, feel free to read this interview as well as the series called The Family Man, which I found to be pretty truthful as to the greyness of the situation out there. While most of the mainstream media gave it an average score, I found it thought-provoking. The fact is that mainstream media in India no longer questions the Government's excesses; some people do, and they are often targeted. I do hope to share the banking scenario and a sort of mini-banking crisis soon. Till later.

Cory Doctorow: My appearance on Futurithmic

I was delighted to sit down with my old friend Michael Hainsworth for his new TV show Futurithmic, where we talked about science fiction, technological self-determination, and internet freedom. They've just posted the episode and it's fabulous!

Planet Debian: Mike Gabriel: IServ Schulserver - Insecure Setup Strategy allows Hi-Jacking of User Accounts

"IServ Schulserver" [1] is a commercial school server developed by a company in Braunschweig, Germany. The "IServ Schulserver" is a product based on Debian. The whole project started as a students' project.

The "IServ" is an insular school server (one machine for everything + backup server) that provides a web portal / communication platform for the school (reachable from the internet), manages the school's MS Windows® clients via OPSI [2] and provides other features like chatrooms, mail accounts, etc.

The "IServ Schulserver" has written quite a success story in various areas of Germany, recently. IServ has been deployed at many many schools in Northrhein-Westfalia, Lower Saxony and Schleswig-Holstein. You can easily find those schools on the internet, if you search the web for "IServ IDesk".

The company that is developing "IServ" has various IT partner businesses all over Germany that deploy the IServ environment at local schools and are also the first point of contact for support.

It's all hear-say...

So, last night I heard that a security design flaw has still not been fixed / addressed since I first heard about it. That was in 2014, when one of the Debian Edu schools I supported back then migrated over to IServ. At that time, the issue described below could be confirmed. Last night, I learned that it is still present on an IServ machine deployed recently here in Schleswig-Holstein (its deployment dates back only a few weeks). It's all hear-say, you know. But alas, ...

Mass User Creation Modes

If IServ admins mass create user accounts following the product's documentation [3], they can opt for user accounts to be created and made active immediately, or they can opt for creating user accounts that are initially deactivated.

The strategy of the local supplier of the IServ Schulserver in Schleswig-Holstein (around the area of the city of Kiel) seems to be to create these initial user accounts (that is, for all current teachers and students) as immediately activated accounts.

Initial Login

If you are a teacher (or student) at a school and have been notified that your initial IServ account has been set up, you will be instructed to log into the IServ web portal. The school provides each teacher with a URL and a login name. The default scheme for login names is <firstname>.<lastname>.

The password is not explicitly mentioned, as it is easy to remember: it is also <firstname>.<lastname> (i.e. initial_password := login_name). Conveniently, people can do these logins from anywhere. On the initial login, users are guided to a change-password dialog in their web browser session, where they finally set their own password.

Pheeeww.... one account less that is just too dumb-easy to hack.

Getting to know People at your New School

Nowadays, most schools have a homepage. On that homepage, they always present the core teaching staff (people with some sort of leadership position) with full names. Sometimes they even list all teachers with their full names. More rarely, but still quite commonly, all teachers are listed with a portrait photo (and/or the subjects they teach). Wanna be a teacher at that school? Hacky-sign up for an account then...

How to Get In

If you are a nasty hacker, you can now go to some school's homepage, pick a teacher/face (or subject combination) that makes you assume that that person is not an IT-affiliated kind of person, and try to log in as that person. If you are a neat hacker, you do this via Tor (or similar), of course.

Seriously!

If our imaginary hackers succeed in logging in with the initial credentials, they can set a password for the impersonated teacher and they are in.

Many schools, I have seen, distribute documents and information to their teachers via the school's communication platform. If that platform is the "IServ Schulserver", then you can easily gain access to those documents [4].

My personal guess is that schools also use their school communication platform for distributing personal data, which is probably not allowed on the educational network of a school anyway (the "IServ Schulserver" is not an e-mail server on the internet; it is the core server, firewall, mail gateway, Windows network server, etc. of the school's educational network).

Now, sharing such information via a system that is so easy to gain unauthorized access to is IMHO highly negligent and a severe violation of the GDPR.

Securing Mass User Creation

There are several ways to fix this design flaw:

  • mass create users with accounts being initially deactivated and come up with some internal social workflow for enabling and setting up accounts and user passwords
  • talk to the developers and ask them to add credential imports (i.e. mass setting passwords for a list of given usernames)
  • use some other school server solution

Other Security Issues?

If people like to share their observations about school IT and security, I'd be interested. Let me know (see the imprint page [5] on my blog for my mail address).

light+love
Mike Gabriel (aka sunweaver at debian.org)

References & Footnotes

Planet Debian: Andrej Shadura: Rust-like enums in Kotlin

Rust has an exciting concept of enumeration types, which is much more powerful than enums in other languages. Notably, C has the weakest kind of enum, since there's no type checking of any kind, and enum values can be used interchangeably with integers:

enum JobState {
    PENDING,
    STARTED,
    FAILED,
    COMPLETED
};

You can opt for manually assigning integers instead of leaving this to the compiler, but that’s about it.

Higher-level languages like Python and Java treat enumeration types as classes, bringing stricter type checking and better flexibility, since they can be extended nearly like any other class. In both Python and Java, individual enumerated values are singleton instances of the enumeration class.

from enum import Enum, auto

class JobState(Enum):
    PENDING = auto()
    STARTED = auto()
    FAILED = auto()
    COMPLETED = auto()

enum JobState {
    PENDING,
    STARTED,
    FAILED,
    COMPLETED;
}

Since enumerations are classes, they can define extra methods, but because the enum values are singletons, they can’t be coupled with any extra data, and no new instances of the enum class can be created.

In contrast with Python and Java, Rust allows attaching data to enumerations:

enum JobState {
    Pending,
    Started,
    Failed(String),
    Completed
}

This allows us to store the error message in the same value as the job state, without having to declare a structure with an extra field which would be used only when the state is Failed.

So, what does Kotlin have to offer? Kotlin has a language feature called sealed classes. A sealed class is an abstract class with limited inheritance: all of its subclasses have to be declared in the same file. In a way, this is quite close to Rust enums, even though sealed classes look and behave a bit differently.

sealed class JobState {
    object Pending : JobState()
    object Started : JobState()
    object Completed : JobState()
    data class Failed(val errorMessage: String) : JobState()
}

Declared this way, JobState can be used in a way similar to Rust’s enums: a single variable of this type can be assigned singletons Pending, Started or Completed, or any instance of Failed with a mandatory String member:

val state: JobState = JobState.Failed("I/O error")

when (state) {
    is JobState.Completed ->
        println("Job completed")
    is JobState.Failed ->
        println("Job failed with an error: ${state.errorMessage}")
}

This usage resembles the regular Java/Kotlin enums quite a bit, but alternatively, Pending and friends can be declared outside of the sealed class, allowing them to be used directly without the need to add a JobState qualifier.
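One detail worth adding: when when is used as an expression over a sealed class, the Kotlin compiler checks exhaustiveness, so adding a new subclass becomes a compile error at every match site until the new case is handled. A small sketch (describe is a made-up function):

fun describe(state: JobState): String = when (state) {
    is JobState.Pending -> "waiting"
    is JobState.Started -> "running"
    is JobState.Completed -> "done"
    is JobState.Failed -> "failed: ${state.errorMessage}"
    // no else branch needed: the compiler knows these are all the subclasses
}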

A slightly simplified real life example from a Kotlin project I’m working on, where a separate coroutine handles I/O with a Bluetooth or a USB device:

sealed class Result
object Connected : Result()
data class Failed(val error: String) : Result()

sealed class CommServiceMsg
data class Connect(val response: CompletableDeferred<Result>) : CommServiceMsg()
object Disconnect : CommServiceMsg()
data class Write(val data: ByteArray) : CommServiceMsg()

fun CoroutineScope.bluetoothServiceActor(device: BluetoothDevice) = actor<CommServiceMsg>(Dispatchers.IO) {
    val socket: BluetoothSocket = device.createSocket()

    process@ for (msg in channel) {
        when (msg) {
            is Connect -> {
                with(socket) {
                    msg.response.complete(try {
                        connect()
                        Connected
                    } catch (e: IOException) {
                        val error = e.message ?: ""
                        Failed(error)
                    })
                }
            }
            is Disconnect -> break@process
            is Write -> {
                socket.outputStream.write(msg.data)
            }
        }
    }
    socket.outputStream.flush()
    socket.close()
}

Here, we can talk to bluetoothServiceActor using messages each carrying extra data; if the coroutine needs to talk back (in this example, the result of a connection attempt), it uses a CompletableDeferred<> value of the Result type, which can hold an error message when needed.

With that in place, we can write something like this:

val bluetoothService = bluetoothServiceActor(device)
val response = CompletableDeferred<Result>()

bluetoothService.send(Connect(response))
var result = response.await()
when (result) {
    is Connected -> {
        bluetoothService.send(Write(byteArrayOf(42, 0x1e, 0x17)))
        bluetoothService.send(Disconnect)
    }
    is Failed ->
        println("error occurred: ${result.error}")
}

Cryptogram: Ineffective Package Tracking Facilitates Fraud

This article discusses an e-commerce fraud technique in the UK. Because the Royal Mail only tracks packages to the postcode -- and not to the address -- it's possible to commit a variety of different frauds. Tracking systems that rely on a signature are not similarly vulnerable.

Worse Than Failure: CodeSOD: And it was Uphill Both Ways

Today’s submission is a little bit different. Kevin sends us some code where the real WTF is simply that… it still is in use somewhere. By the standards of its era, I’d actually say that the code is almost good. This is more of a little trip down memory lane, about the way web development used to work.

Let’s start with the HTML snippet:

<frameset  border="0" frameborder="0" framespacing="0" cols="*,770,*"  onLoad="MaximizeWindow()">
	<!-- SNIPPED... -->
</frameset>

In 2019, if you want to have a sidebar full of links which allow users to click, and have a portion of the page update while not refreshing the whole page, you probably write a component in the UI framework of your choice. In 1999, you used frames. Honestly, by 1999, frames were already on the way out (he says, despite maintaining a number of frames-based applications well into the early 2010s), but for a brief period in web development history, they were absolutely all the rage.

In fact, shortly after I made my own personal home page, full of <marquee> tags, creative abuse of the <font> tag, and a color scheme which was hot pink and neon green, I showed it to a friend, who condescendingly said, “What, you didn’t even use frames?” He made me mad enough that I almost deleted my Geocities account.

Frames are dead, but now we have <iframe>s, which do the same thing, but are almost entirely used for embedding ads or YouTube videos. Some things will never truly die.

  IE4 = (document.all) ? true : false;
  NS4 = (document.layers) ? true : false;
  ver4 = (IE4||NS4);

  if (ver4!=true){  
    function MaximizeWindow(){
        alert('Please install a browser with support for Javascript 1.2. This website works for example with Microsofts Internet Explorer or Netscapes Navigator in versions 4.x or newer!')
        self.history.back();
        }
    }
  
  if (ver4==true){
    function MaximizeWindow(){
    window.focus();
	window.moveTo(0,0)
	window.resizeTo(screen.availWidth,screen.availHeight)
      }
}

Even today, in the era of web standards, we still constantly need to use shims and compatibility checks. The reasons are largely the same as they were back then: standards (or conventions) evolve quickly, vendors don’t care about standards, and browsers represent fiendishly complicated blocks of software. Today, we have better ways of doing those checks, but here we do our check with the first two lines of code.
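For comparison, a modern version of the same idea still probes for features rather than browser identities; a minimal sketch (the choice of API is arbitrary):

// Feature detection, current style: ask for the capability, not the vendor.
if ('IntersectionObserver' in window) {
    // safe to use the feature
} else {
    // degrade gracefully instead of kicking the user back a page
}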

And this, by the way, is why I said this code was “almost good”. In the era of “a browser with support for Javascript 1.2”, the standard way of checking browser versions was mining the user-agent string. And because of that we have situations where browsers report insanity like Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36.

Even in the late 90s though, the “right” way to check if your site was compatible with a given browser was to check for the features you planned to use. Which this code does- specifically, it’s looking for document.all or document.layers, which were two different approaches to exploring the DOM before we had actual tools for exploring the DOM. In this era, we’d call stuff like this “DHTML” (the D is for “dynamic”), and we traversed the DOM as a chain of properties, doing things like document.forms[0].inputs[0] to access fields on the form.

This is only almost good, though, because it doesn't gracefully degrade. If you don't have a browser which reports these properties (document.all or document.layers), we just pop up an alert and forcibly hit the back button on your browser. Then again, if you do have a browser that supports those properties, it's just going to go and forcibly hit the "Maximize" button on you, which is also not great, but I'm sure would make the site look quite impressive on an 800x600 resolution screen. I'm honestly kind of surprised that this doesn't also check your resolution, and provide some warning about looking best at a certain resolution, which was also pretty standard stuff for this era.

Again, the real WTF is that this code still exists out in the wild somewhere. Kevin found it when he encountered a site that kept kicking him back to the previous page. But there’s a deeper WTF: web development is bad. It’s always been bad. It possibly always will be bad. It’s complicated, and hard, and for some reason we’ve decided that we need to build all our UIs using a platform where a paragraph is considered a first-class UI element comparable to a button. But the next time you struggle to “grok” the new hot JavaScript framework, just remember that you’re part of a long history of people who have wrestled with accomplishing basic tasks on the web, and that it’s always been a hack, whether it’s a hack in the UA-string, a hack of using frames to essentially embed browser windows inside of browser windows, or a hack to navigate the unending efforts of browser vendors to hamstring and befuddle the competition.



Planet Debian: Enrico Zini: xtypeinto: type text into X windows

Several sites have started disabling paste in input fields, mostly password fields, but also other fields for no apparent reason.

Random links on the topic:

  • https://developers.google.com/web/tools/lighthouse/audits/password-pasting
  • https://www.ncsc.gov.uk/blog-post/let-them-paste-passwords
  • https://www.troyhunt.com/the-cobra-effect-that-is-disabling/
  • https://www.wired.com/2015/07/websites-please-stop-blocking-password-managers-2015/

This said, I am normally uneasy about copy-pasting passwords, as any X window can sniff the clipboard contents at any time, and I like password managers like impass that would type it for you instead of copying it to the clipboard.

However, today I got way more frustrated than I could handle after filling in 17-digit, nonsensical, always-slightly-different INPS payment codelines into input fields that disabled paste for no reason whatsoever (they are not secret).

I thought "never again", I put together some code from impass and wmctrl and created xtypeinto:

$ ./xtypeinto --help
usage: xtypeinto [-h] [--verbose] [--debug] [string]

Type text into a window

positional arguments:
  string         string to type (default: stdin)

optional arguments:
  -h, --help     show this help message and exit
  --verbose, -v  verbose output
  --debug        debug output

Pass a string to xtypeinto as an argument, or as standard input.

xtypeinto will show a crosshair to pick a window, and the text will be typed into that window.

Please make sure that you focus on the right field before running xtypeinto, to make sure things are typed where you need them.
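Going by the --help output above, a session looks roughly like this (the digits are an invented example of such a codeline):

$ xtypeinto "12345678901234567"
$ echo -n 12345678901234567 | xtypeinto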

Cryptogram: Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Earlier this month, I made fun of a company called Crown Sterling, for...for...for being a company that deserves being made fun of.

This morning, the company announced that they "decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer." Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: "Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing." They didn't demonstrate it, but if they're right they've matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.

Is anyone taking this company seriously anymore? I honestly wouldn't be surprised if this was a hoax press release. It's not currently on the company's website. (And, if it is a hoax, I apologize to Crown Sterling. I'll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: "Today's decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys."

People, this isn't hard. Find an RSA Factoring Challenge number that hasn't been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won't have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

EDITED TO ADD (9/24): More commentary.

Cryptogram: Russians Hack FBI Comms System

Yahoo News reported that the Russians have successfully targeted an FBI communications system:

American officials discovered that the Russians had dramatically improved their ability to decrypt certain types of secure communications and had successfully tracked devices used by elite FBI surveillance teams. Officials also feared that the Russians may have devised other ways to monitor U.S. intelligence communications, including hacking into computers not connected to the internet. Senior FBI and CIA officials briefed congressional leaders on these issues as part of a wide-ranging examination on Capitol Hill of U.S. counterintelligence vulnerabilities.

These compromises, the full gravity of which became clear to U.S. officials in 2012, gave Russian spies in American cities including Washington, New York and San Francisco key insights into the location of undercover FBI surveillance teams, and likely the actual substance of FBI communications, according to former officials. They provided the Russians opportunities to potentially shake off FBI surveillance and communicate with sensitive human sources, check on remote recording devices and even gather intelligence on their FBI pursuers, the former officials said.

It's unclear whether the Russians were able to recover encrypted data or just perform traffic analysis. The Yahoo story implies the former; the NBC News story says otherwise. It's hard to tell if the reporters truly understand the difference. We do know, from research Matt Blaze and others did almost ten years ago, that at least one FBI radio system was horribly insecure in practice -- but not in a way that breaks the encryption. Its poor design just encourages users to turn off the encryption.

Worse Than Failure: CodeSOD: Do You Need this

I’ve written an unfortunate amount of “useless” code in my career. In my personal experience, that’s code where I write it for a good reason at the time- like it’s a user request for a feature- but it turns out nobody actually needed or wanted that feature. Or, perhaps, if I’m being naughty, it’s a feature I want to implement just for the sake of doing it, not because anybody asked for it.

The code’s useless because it never actually gets used.

Claude R found some code which got used a lot, but was useless from the moment it was coded. Scattered throughout the codebase were calls to getInstance(), as in, Task myTask = aTask.getInstance().

At first glance, Claude didn’t think much of it. At second glance, Claude worried that there was some weird case of deep indirection where aTask wasn’t actually a concrete Task object and instead was a wrapper around some factory-instantiated concrete class or something. It didn’t seem likely, but this was Java, and a lot of Java code will follow patterns like that.

So Claude took a third glance, and found some code that’s about as useful as a football bat.

public Task getInstance(){
    return this;
}

To invoke getInstance you need a variable that references the object, which means you have a variable referencing the same thing as this. That is to say, this is unnecessary.


Planet Debian: Keith Packard: picolibc

Picolibc Version 1.0 Released

I wrote a couple of years ago about the troubles I had finding a good libc for embedded systems, and for the last year or so I've been using something I called 'newlib-nano', which was newlib with the stdio from avrlibc bolted on. That library has worked pretty well, and required very little work to ship.

Now that I'm doing RISC-V stuff full-time, and am currently working to improve the development environment on deeply embedded devices, I decided to take another look at libc and see if a bit more work on newlib-nano would make it a good choice for wider usage.

One of the first changes was to switch away from the very confusing "newlib-nano" name. I picked "picolibc" as that seems reasonably distinct from other projects in the space and doesn't use 'new' or 'nano' in the name.

Major Changes

Let's start off with the big things I've changed from newlib:

  1. Replaced stdio. In place of the large and memory-intensive stdio stack found in newlib, picolibc's stdio is derived from avrlibc's code. The Atmel-specific assembly code has been replaced with C, and the printf code has seen significant rework to improve standards conformance. This work was originally done for newlib-nano, but it's a lot cleaner looking in picolibc.

  2. Switched from 'struct _reent' to TLS variables for per-thread values. This greatly simplifies the library and reduces memory usage for all applications -- per-thread data from unused portions of the library will not get allocated for any thread. On RISC-V, this also generates smaller and faster code. This also eliminates an extra level of function call for many code paths.

  3. Switched to the 'meson' build system. This makes building the library much faster and also improves the maintainability of the build system as it eliminates a maze of twisty autotools configure scripts. (A typical invocation is sketched just after this list.)

  4. Updated the math test suite to use glibc as a reference instead of some ancient Sun machine.

  5. Manually verified the test results to see how the library is doing; getting automated testing working will take a lot more effort as many (many) tests still have invalid 'correct' values, resulting in thousands of failures.

  6. Removed unused code with non-BSD licenses. There's still a pile of unused code hanging around, but all non-BSD licensed bits have been removed to make the licensing situation clear. Picolibc is BSD licensed.
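
To give a flavour of the meson switch mentioned in item 3, a typical build boils down to something like this (directory name illustrative; cross builds additionally need a meson cross file):

meson builddir
ninja -C builddir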

Picocrt

Starting your embedded application requires initializing RAM as appropriate and calling initializers/constructors before invoking main(). Picocrt is designed to do that part for you.

Building Simplified

Using newlib-nano meant specifying the include and library paths very carefully in your build environment, and then creating a full custom linker script. With Picolibc, things are much easier:

  • Compile with -specs=picolibc.specs. That and the specification of the target processor are enough to configure include and library paths. The Debian package installs this in the gcc directory so you don't need to provide a full path to the file.

  • Link with picolibc.ld (which is used by default with picolibc.specs). This will set up memory regions and include Picocrt to initialize memory before your application runs.
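
As a concrete sketch, building an application can then be a single invocation; the target triplet and machine flags below are illustrative, not taken from the post:

arm-none-eabi-gcc -specs=picolibc.specs -mcpu=cortex-m0 -Os -o app.elf app.c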

Debian Packages

I've uploaded Debian packages for this version; they'll get stuck in the NEW queue for a while, but should eventually make their way into the repository. I'll plan on removing newlib-nano at some point in the future as I don't plan on maintaining both.

More information

You can find the source code on both my own server and over on github.

You'll find some docs and other information linked off the README file.

Sam VargheseSaudis want US to fight another war for them

On 3 August 1990, the morning after Iraq invaded Kuwait, the Saudi Arabian government was more than a bit jittery, fearing that the Iraqi dictator Saddam Hussein would make Riyadh his next target. The Saudis had been among the biggest buyers of American and British arms, but they found that they had a big problem.

And that was the fact that all the princes who were pilots of F-16 jets, considered one of the glamour jobs, had gone missing. Empty jets were of no use. How would the Saudis defend their country if Baghdad decided to march into the country’s Eastern Region? If Hussein decided to do so, he would be in control of a sizeable portion of the world’s oil resources and many countries would be royally screwed.

Then the Americans came calling, ready with doctored satellite imagery to scare the hell out of King Fahd and his colleagues. Finally, the king gave in to Dick Cheney’s arguments and asked the Americans to come into Saudi Arabia to defend the country.

The situation appears to be repeating itself after missiles hit Saudi Arabian oil installations two weeks ago, though this time the Americans seem reluctant to get into a fight with Iran, which has been blamed for the attack.

There is not a shred of proof to implicate Teheran apart from American and Saudi claims, but then when has the Western press ever needed anything more than claims to point the finger at Iran?

The Saudis have been using foreign labour for a long time to do all the work in the country, right from cleaning the toilets to managing their companies. And they would, no doubt, be looking to the Americans to fight Iran too if it becomes necessary.

The fact is, the Saudis have more than enough military equipment to protect their country. But they are either incompetent to the point where they are unable to use it as it should be used, or else they are lazy and want others to do the work for them. After all, these are royals, right?

The Americans made a profit on the war which was waged in 1991 to eject Iraq from Kuwait; they spent US$51 billion and raked in US$60 billion, with contributions being made by numerous countries, all worried that rising oil prices would push their economies into negative territory.

But Iran will not be a pushover as Iraq was. And there is unlikely to be any kind of coalition like the one assembled in 1990. Nobody has the appetite for a fight. The world economy is looking decidedly shaky. And after the US pulled out of a deal to prevent Iran from developing nuclear weapons, countries in Europe are not exactly enthusiastic about joining the Americans in any more crazy adventures.

Planet DebianDirk Eddelbuettel: RcppAnnoy 0.0.13


A new release of RcppAnnoy is now on CRAN.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.

This release brings several updates. First and foremost, the upstream Annoy C++ code was updated from version 1.12 to 1.16, bringing both speedier code thanks to AVX512 instructions (where available) and new functionality, which we expose in two new functions, of which onDiskBuild() may be of interest for some using file-backed indices. We also corrected a minor wart in which a demo file was saved (via example()) to a user directory; we now use tempfile() as one should, and contributed two small Windows build changes back to Annoy.

Detailed changes follow below.

Changes in version 0.0.13 (2019-09-23)

  • In example(), the saved and loaded filename is now obtained via tempfile() to not touch user directories per CRAN Policy (Dirk).

  • RcppAnnoy was again synchronized with Annoy upstream leading to enhanced performance and more features (Dirk #48).

  • Minor changes made (and sent as PRs upstream) to adapt to both annoylib.h and mman.h changes (Dirk).

  • A spurious command was removed from one vignette (Peter Hickey in #49).

  • Two new user-facing functions onDiskBuild() and unbuild() were added (Dirk in #50).

  • Minor tweaks were made to two tinytest-using test files (Dirk).

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

TEDIs geoengineering a good idea? A brief Q&A with Kelly Wanser and Tim Flannery

This satellite image shows marine clouds off the Pacific West Coast of the United States. The streaks in the clouds are created by the exhaust from ships, which include both greenhouse gases and particulates like sulfates that mix with clouds and temporarily make them brighter. Brighter clouds reflect more sunlight back to space, cooling the climate.

As we recklessly warm the planet by pumping greenhouse gases into the atmosphere, some industrial emissions also produce particles that reflect sunshine back into space, putting a check on global warming that we’re only starting to understand. In her talk at TEDSummit 2019, “Emergency medicine for our climate fever,” climate activist Kelly Wanser asked: Can we engineer ways to harness this effect and reduce the effects of global warming?

This idea, known as “cloud brightening,” is seen as controversial. After her talk, Wanser was joined onstage by environmentalist Tim Flannery — who gave a talk just moments earlier about the epic carbon-capturing abilities of seaweed — to discuss cloud brightening and how it could help restore our climate to health. Check out their exchange below.

CryptogramI'm Looking to Hire a Strategist to Help Figure Out Public-Interest Tech

I am in search of a strategic thought partner: a person who can work closely with me over the next 9 to 12 months in assessing what's needed to advance the practice, integration, and adoption of public-interest technology.

All of the details are in the RFP. The selected strategist will work closely with me on a number of clear deliverables. This is a contract position that could possibly become a salaried position in a subsequent phase, and under a different agreement.

I'm working with the team at Yancey Consulting, who will follow up with all proposers and manage the process. Please email Lisa Yancey at lisa@yanceyconsulting.com.

Google AdsenseAdSense now understands Marathi

Today, we’re excited to announce the addition of Marathi, a language spoken by over 80 million people in Maharashtra, India, and many other countries around the world, to the family of AdSense supported languages.

Interest in Marathi-language content has been growing steadily over the last few years. With this launch, AdSense provides an easy way for publishers to monetize the content they create in Marathi, and advertisers can connect with a Marathi-speaking audience through relevant ads.

To start monetizing your Marathi content website with Google AdSense:

  1. Check the AdSense Program policies and make sure your site is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now!

Posted by:
AdSense Internationalization Team

CryptogramFrance Outlines Its Approach to Cyberwar

In a document published earlier this month (in French), France described the legal framework in which it will conduct cyberwar operations. Lukasz Olejnik explains what it means, and it's worth reading.

Planet DebianMolly de Blanc: Freedoms and Rights

I want to talk a bit about the relationship between rights and freedoms, and what they are. I think building a mutual understanding around this is important as I dig deeper into conversations around digital rights and software, user, and technology freedom.

A right is like a privilege in as much as it’s something you’re allowed to do; however, rights are innate and not earned. They are things to which everyone is entitled. A freedom expresses a lack of constraints related to an action. When we have a particular freedom (freedom X), we have an unrestrained ability to do X — we can do whatever we want in relation to X. You can also have the right to a certain kind of freedom (e.g. freedom of speech). I talk about both digital rights and digital freedoms. I view digital rights as the extension of our rights into digital spaces, and digital freedoms as the freedoms we have in those spaces. We have the right to free expression when speaking in a room; we have the right to free expression when speaking on the Internet.

Typically, we frame rights and freedoms in terms of government restrictions: governments are not allowed to keep you from exercising your freedoms, and they are there to protect and ensure your rights. It is becoming increasingly relevant (and common) to also talk about these in relation to companies and technologies — especially computing software. As computing becomes more pervasive, we need to make sure that the software we’re writing is freedom protecting and rights respecting. These freedoms include the freedoms we typically associate with free and open source software: the unbridled ability to use, study, modify, and share. It also includes freedoms like expression (to express ourselves without constraint) and the freedom to assemble (to get together without constraint). All of these freedoms are freedoms we have the right to, in addition to other rights including the right to digital autonomy and the right to consent.

I want to dig a little into a specific example of the interplay between freedoms and rights, and the way computing fits in.

We have the right to freedom of speech — to communicate unfettered with one another. Free expression is something to which everyone is entitled, and there is a societal, social, and moral imperative to protect that right. Computers connect us to one another and enable us to express ourselves. They also give us safe spaces to develop the ideas we want to express in public ones, which is a necessary part of freedom of speech. However, computers can also infringe upon that right. Home surveillance devices, like home assistants, that are listening to and recording everything you say are stepping on your right and restricting your freedom. They are taking away your safe space to develop ideas and creating an environment where you cannot express yourself without restriction for fear of possible repercussions.

This is just one example of how computers play with the things we traditionally consider our rights and freedoms. Computers also force us to consider rights and freedoms in new contexts, and push the boundaries of what we consider to “count.” Our right to bodily autonomy now includes which medical devices, which computers, we allow to be implanted into our bodies; what happens with our medical and biometric data; and when and how our bodies are being monitored in public (and private) spaces. This includes the near future, where we see an increase in wearable computers and recreational and elective implants.

We have freedoms, we have rights, and we have the rights to certain freedoms because it is moral, ethical, and necessary for a just world. Our digital rights and digital freedoms are necessary for our digital autonomy, to borrow a phrase from Karen Sandler. Digital autonomy is necessary to move forward into a world of justice, equity, and equality.

Special thanks to Christopher Lemmer Webber.

Worse Than FailureAccounting for Changes

Sara works as a product manager for a piece of accounting software for a large, international company. As a product manager, Sara interacts with their internal customers- the accounting team- and Bradley is the one she always bumps heads with.

Bradley's idea of a change request is to send a screenshot, with no context, and a short message, like "please fix", "please advise", or "this is wrong". It would take weeks of emails and, if they were lucky, a single phone call, for Sara's team to figure out what needs to be fixed, because Bradley is "too busy" to provide any more information.

One day, Bradley sent a screenshot of their value added taxation subsystem, saying, "This is wrong. Please fix." The email was much longer, of course, but the rest of the email was Bradley's signature block, which included a long list of titles, certifications, a few "inspirational" quotes, and his full name.

Sara replied. "Hi Brad," her email began- she had once called him "Bradley" which triggered his longest email to date, a screed about proper forms of address. "Thanks for notifying us about a possible issue. Can you help me figure out what's wrong? In your screen shot, I see SKU numbers, tax information, and shipping details."

Bradley's reply was brief. "Yes."

Sara sighed and picked up her phone. She called Bradley's firm, which landed her with an assistant, who tracked down another person, who asked another, who got Bradley to confirm that the issue was that, in some cases, the Value Added Tax wasn't using the right rate, as in some situations multiple rates had to be applied at the same time.

It was a big update to their VAT rules. Sara managed to talk to some SMEs at her company to refine the requirements, contacted development, and got the modifications built in the next sprint.

"Hi, Bradley," Sara started her next email. "Thank you for bringing the VAT issue to our attention. Based on your description, we have implemented an update. We've pushed it to the User Acceptance Testing environment. After you sign off that the changes are correct, we will deploy it into production. Let me know if there are any issues with the update." The email included links to the UAT process document, the UAT test plan template, and all the other details that they always provided to guide the UAT process.

A week later, Bradley sent an email. "It works." That was weird, as Bradley almost never signed off until he had pushed in a few unrelated changes. Still, she had the sign off. She attached the email to the ticket and once the changes were pushed to production, she closed the ticket.

A few days later, the entire accounting team went into a meltdown and started filing support request after support request. One user submitted ten by himself- and that user was the CFO. This turned into a tense meeting between the CFO, Bradley, Sara, and Sara's boss.

"How did this change get released to production?"

Sara pulled up the ticket. She showed the screenshots, referenced the specs, showed the development and QA test plans, and finally, the email from Bradley, declaring the software ready to go.

The CFO turned to Bradley.

"Oh," Bradley said, "we weren't able to actually test it. We didn't have access to our test environment at all last week."

"What?" Sara asked. "Why did you sign off on the change if you weren't able to test it!?"

"Well, we needed it to go live on Monday."

After that, a new round of requirements gathering happened, and Sara's team was able to implement them. Bradley wasn't involved, and while he still works at the same company, he's been shifting around from position to position, trying to find the best fit…


,

Sam VargheseWas Garcès the right choice to officiate SA-NZ game?

The authorities who select referees for matches at the Rugby World Cup do not seem to think very deeply about the choices they make. This is, perhaps, what resulted in the French referee Jérôme Garcès being put in charge of the game between New Zealand and South Africa on 21 September.

Some background is necessary to understand why Garcès’ appointment was questionable. He had officiated in the game between Australia and New Zealand earlier this year and handed out a red card to Kiwi lock Scott Barrett for a charge on Australian skipper Michael Hooper. This was a decision that was questioned in many quarters; that Scott Barrett deserved a yellow card was not in question, but a red card was deemed to be a gross over-reaction.

Scott Barrett was banned for two matches after that and was making his return in Saturday’s game. Thus there were a fair few people observing how Garcès would officiate, especially when it came to Scott Barrett.

An additional factor that made Garcès unsuitable for this game is the regular claim about referees going easy on New Zealand because of their influence in world rugby; apart from those who come to watch a game because they are fans of this team or that, there is a huge contingent of people who come to watch the All Blacks because they have some sort of mystique around them.

This claim is made by officials of teams which have been getting hammered by the Kiwis for years so one can put it down to that variety of fruit which is common these days: sour grapes. The fact is that all teams take advantage of the rules to the extent possible.

Garcès, thus, had to avoid being seen as going easy on New Zealand. And he made some very elementary errors.

The most glaring mistake he made was when he failed to send off South Africa winger Makazole Mapimpi for not releasing the New Zealand standoff Richie Mo’unga, after the latter had booted a ball downfield, collected it five metres from the goalline and, though somewhat off-balance, was set to stumble over the line and score. Mapimpi tackled him but did not release Mo’unga as per the rules, as there were no other South African players around to lend support.

Given that South Africa indulges in cynical tactics like this quite often — who can forget the professional fouls committed by the likes of Bakkies Botha, Victor Matfield and Bryan Habana in years gone by? — a hardline referee may well have awarded the All Blacks a penalty try.

But Garcès did not go beyond a regulation penalty. He earned bitter criticism from the New Zealand captain Kieran Read who described him as “gutless” right there on the field.

Garcès also overlooked a number of neck rolls by South Africa’s Pieter-Steph du Toit on the All Blacks flanker Ardie Savea. Springboks giant lock Eben Etzebeth also grabbed the neck of a Kiwi player here and there, but Garcès had no eyes for these tactics. All this in a year when there have been repeated reports that rugby referees have been ordered to crack down on tackles that come anywhere near the head.

The French official also missed a number of questionable tackles by the New Zealand players. He was put in a tricky situation by whoever selected him to officiate in the game and came out smelling of anything but roses.

But then Garcès was not responsible for the most shocking refereeing decision of the opening weekend of the tournament. This honour was claimed by British referee Rowan Kitt who was officiating as the television match official in the game between Australia and Fiji.

Kitt had nothing to offer on a tackle that Australian winger Reece Hodge effected on Fiji’s Peceli Yato, the team’s best player up to that point of the game, blocking the flanker with a shoulder-led, no-arms challenge to the head that resulted in Yato having to leave the field with concussion. He played no further part in the game.

On-field official Ben O’Keeffe missed the tackle too, but he was somewhat unsighted as the tackle took place close to the sideline. Former referee Jonathan Kaplan was scathing in his criticism of Kitt.

“On this occasion Kitt ruled that the challenge was legal and I find that extremely surprising,” said the 70-Test referee, a highly respected official during his day, in a column for the UK’s Daily Telegraph. “To let it pass without any sanction whatsoever was clearly the wrong call.”

He added: “Going into this tournament World Rugby have been very clear about contact with the head and what constitutes a red card under their new High Tackle Sanction framework.

“With that in mind I have absolutely no idea why Reece Hodge was not sent off for his tackle on Fiji’s Peceli Yato. To me it was completely clear and an almost textbook example of the type of challenge they are trying to outlaw.”

Exactly what it will take for referees to rule equally on all infringements remains to be seen. Perhaps someone needs to die on the field in real-time before rugby officials sit up and take notice.

Planet DebianWilliam (Bill) Blough: Free Software Activities (August 2019)


Debian

  • Fixed bug 933422: passwordsafe — Switch to using wxgtk3

    Versions:

    • unstable/testing: 1.06+dfsg-3
  • Upgraded passwordsafe package to latest upstream version (1.08.2)

    Versions:

    • unstable/testing: 1.08.2+dfsg-1
    • buster-backports: 1.08.2+dfsg-1~bpo10+1
  • Updated python-django-cas-client to latest upstream version (1.5.1) and did some miscellaneous cleanup/maintenance of the packaging.

    Versions:

    • unstable/testing: 1.5.1-1
  • Discovered an issue with sbuild where the .changes file output by the build was different from the .changes file passed to lintian. This meant that the lintian results were sometimes different when lintian was run via sbuild vs when it was run manually. Patch submitted.

  • Provided a patch for NuSOAP to update deprecated class constructors.

  • Submitted a merge request to update the ftp-master website and replace a reference to Buster as testing with Bullseye.

Axis2-C

  • Fixed bug AXIS2C-1619: CVE-2012-6107: SSL/TLS Hostname validation

    Commits:

    • r1866225 - Perform SSL hostname validation
    • r1866245 - Add SSL host validation check to X509_V_OK code path

CryptogramA Feminist Take on Information Privacy

Maria Farrell has a really interesting framing of information/device privacy:

What our smartphones and relationship abusers share is that they both exert power over us in a world shaped to tip the balance in their favour, and they both work really, really hard to obscure this fact and keep us confused and blaming ourselves. Here are some of the ways our unequal relationship with our smartphones is like an abusive relationship:

  • They isolate us from deeper, competing relationships in favour of superficial contact -- 'user engagement' -- that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.

  • They tell us the onus is on us to manage their behavior. It's our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.

  • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we're meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.

  • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.

  • It's impossible to just cut them off. They've wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

Nope. Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true.

EDITED TO ADD (9/22): Cindy Cohn echoed a similar sentiment in her essay about John Barlow and his legacy.

Planet DebianColin Watson: Porting Storm to Python 3

We released Storm 0.21 on Friday (the release announcement seems to be stuck in moderation, but you can look at the NEWS file directly). For me, the biggest part of this release was adding Python 3 support.

Storm is a really nice and lightweight ORM (object-relational mapper) for Python, developed by Canonical. We use it for some major products (Launchpad and Landscape are the ones I know of), and it’s also free software and used by some other folks as well. Other popular ORMs for Python include SQLObject, SQLAlchemy and the Django ORM; we use those in various places too depending on the context, but personally I’ve always preferred Storm for the readability of code that uses it and for how easy it is to debug and extend it.

It’s been a problem for a while that Storm only worked with Python 2. It’s one of a handful of major blockers to getting Launchpad running on Python 3, which we definitely want to do; stoq ended up with a local fork of Storm to cope with this; and it was recently removed from Debian for this and other reasons. None of that was great. So, with significant assistance from a large patch contributed by Thiago Bellini, and with patient code review from Simon Poirier and some of my other colleagues, we finally managed to get that sorted out in this release.

In many ways, Storm was in fairly good shape already for a project that hadn’t yet been ported to Python 3: while its internal idea of which strings were bytes and which text required quite a bit of untangling in the way that Python 2 code usually does, its normal class used for text database columns was already Unicode which only accepted text input (unicode in Python 2), so it could have been a lot worse; this also means that applications that use Storm tend to get at least this part right even in Python 2. Aside from the bytes/text thing, many of the required changes were just the usual largely-mechanical ones that anyone who’s done 2-to-3 porting will be familiar with. But there were some areas that required non-trivial thought, and I’d like to talk about some of those here.

Exception types

Concrete database implementations such as psycopg2 raise implementation-specific exception types. The inheritance hierarchy for these is defined by the Python Database API (DB-API), but the actual exception classes aren’t in a common place; rather, you might get an instance of psycopg2.errors.IntegrityError when using PostgreSQL but an instance of sqlite3.IntegrityError when using SQLite. To make things easier for applications that don’t have a strict requirement for a particular database backend, Storm arranged to inject its own virtual exception types as additional base classes of these concrete exceptions by patching their __bases__ attribute, so for example, you could import IntegrityError from storm.exceptions and catch that rather than having to catch each backend-specific possibility.

Although this was always a bit of a cheat, it worked well in practice for a while, but the first sign of trouble even before porting to Python 3 was with psycopg2 2.5. This release started implementing its DB-API exception types in a C extension, which meant that it was no longer possible to patch __bases__. To get around that, a few years ago I landed a patch to Storm to use abc.ABCMeta.register instead to register the DB-API exceptions as virtual subclasses of Storm’s exceptions, which solved the problem for Python 2. However, even at the time I landed that, I knew that it would be a porting obstacle due to Python issue 12029; Django ran into that as well.

In the end, I opted to refactor how Storm handles exceptions: it now wraps cursor and connection objects in such a way as to catch DB-API exceptions raised by their methods and properties and re-raise them using wrapper exception types that inherit from both the appropriate subclass of StormError and the original DB-API exception type, and with some care I even managed to avoid this being painfully repetitive. Out-of-tree database backends will need to make some minor adjustments (removing install_exceptions, adding an _exception_module property to their Database subclass, adjusting the raw_connect method of their Database subclass to do exception wrapping, and possibly implementing _make_combined_exception_type and/or _wrap_exception if they need to add extra attributes to the wrapper exceptions). Applications that follow the usual Storm idiom of catching StormError or any of its subclasses should continue to work without needing any changes.

SQLObject compatibility

Storm includes some API compatibility with SQLObject; this was from before my time, but I believe it was mainly because Launchpad and possibly Landscape previously used SQLObject and this made the port to Storm very much easier. It still works fine for the parts of Launchpad that haven’t been ported to Storm, but I wouldn’t be surprised if there were newer features of SQLObject that it doesn’t support.

The main question here was what to do with StringCol and its associated AutoUnicodeVariable. I opted to make these explicitly only accept text on Python 3, since the main reason for them to accept bytes was to allow using them with Python 2 native strings (i.e. str), and on Python 3 str is already text so there’s much less need for the porting affordance in that case.

Since releasing 0.21 I realised that the StringCol implementation in SQLObject itself in fact accepts both bytes and text even on Python 3, so it’s possible that we’ll need to change this in the future, although we haven’t yet found any real code using Storm’s SQLObject compatibility layer that might rely on this. Still, it’s much easier for Storm to start out on the stricter side and perhaps become more lenient than it is to go the other way round.

inspect.getargspec

Storm had some fairly complicated use of inspect.getargspec on Python 2 as part of its test mocking arrangements. This didn’t work in Python 3 due to some subtleties relating to bound methods. I switched to the modern inspect.signature API in Python 3 to fix this, which in any case is rather simpler with the exception of a wrinkle in how method descriptors work.

(It’s possible that these mocking arrangements could be simplified nowadays by using some more off-the-shelf mocking library; I haven’t looked into that in any detail.)

What’s next?

I’m working on getting Storm back into Debian now, which will be with Python 3 support only since Debian is in the process of gradually removing Python 2 module support. Other than that I don’t really have any particular plans for Storm at the moment (although of course I’m not the only person with an interest in it), aside from ideally avoiding leaving six years between releases again. I expect we can go back into bug-fixing mode there for a while.

From the Launchpad side, I’ve recently made progress on one of the other major Python 3 blockers (porting Bazaar code hosting to Breezy, coming soon). There are still some other significant blockers, the largest being migrating to Mailman 3, subvertpy fixes so that we can port code importing to Breezy as well, and porting the lazr.restful stack; but we may soon be able to reach the point where it’s possible to start running interesting subsets of the test suite using Python 3 and categorising the failures, at which point we’ll be able to get a much better idea of how far we still have to go. Porting a project with the best part of a million lines of code and around three hundred dependencies is always going to take a while, but I’m happy to be making progress there, both due to Python 2’s impending end of upstream support and so that eventually we can start using new language facilities.

,

Planet DebianJoey Hess: how to detect chef

If you want your program to detect when it's being run by chef, here's one way to do that.

sleep 1 while $ENV{PATH} =~ m#chef[^:]+/bin#;

This works because Chef's shell_out adds Gem.bindir to PATH, which is something like /opt/chefdk/embedded/bin.

You may want to delete the "sleep", which will make it run faster.

Would I or anyone ever really do this? Chef Inc's management seems determined to test the question, don't they.

Cory DoctorowWhy do people believe the Earth is flat?

I have an op-ed in today’s Globe and Mail, “Why do people believe the Earth is flat?” wherein I connect the rise of conspiratorial thinking to the rise in actual conspiracies, in which increasingly concentrated industries are able to come up with collective lobbying positions that result in everything from crashing 737s to toxic baby-bottle liners to the opioid epidemic.

In a world where official processes are understood to be corruptible and thus increasingly unreliable, we don’t just have a difference in what we believe to be true, but in how we believe we know whether something is true or not. Without an official, neutral, legitimate procedure for rooting out truth — the rule of law — we’re left just trusting experts who “sound right to us.”

Big Tech has a role to play here, but it’s not in automated brainwashing through machine learning: rather, it’s in the ability for conspiracy peddlers to find people who are ripe for their version of the truth, and in the ability of converts to find one another and create communities that make them resilient against social pressure to abandon their conspiracies.

Fighting conspiracies, then, is ultimately about fighting the corruption that makes them plausible — not merely correcting the beliefs of people who have come under their sway.

They say that ad-driven companies such as Google and Facebook threw so much R&D at using data-mining to persuade people to buy refrigerators, subprime loans and fidget-spinners that they accidentally figured out how to rob us of our free will. These systems put our online history through a battery of psychological tests, automatically pick an approach that will convince us, then bombard us with an increasingly extreme, increasingly tailored series of pitches until we’re convinced that creeping sharia and George Soros are coming for our children.

This belief is rooted in a deep and completely justified mistrust of the Big Tech companies, which have proven themselves liars time and again on matters of taxation, labour policy, complicity in state surveillance and oppression, and privacy practices.

But this well-founded skepticism is switched off when it comes to evaluating Big Tech’s self-serving claims about the efficacy of its products. Exhibit A for the Mind-Control Ray theory of conspiratorial thinking is the companies’ own sales literature, wherein they boast to potential customers about the devastating impact of their products, which, they say, are every bit as terrific as the critics fear they are.

Why do people believe the Earth is flat? [Cory Doctorow/The Globe and Mail]

,

Planet DebianDirk Eddelbuettel: digest 0.6.21

A new version of digest is just now arriving at CRAN (following a slight holdup over one likely spurious reverse dependency error), and I will send an updated package to Debian shortly as well.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 795k downloads) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

Every now and then open source work really surprises you. Out of nowhere arrived a very fine pull request by Matthew de Queljoe which adds a very clever function getVDigest() supplying a (much faster) vectorized wrapper for digest creation. We illustrate this in a quick demo vectorized.R that is included too. So if you call digest() in bulk, this will most likely be rather helpful to you. Matthew even did further cleanups and refactorings but we are saving that for a subsequent pull request or two.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBen Hutchings: Linux Plumbers Conference 2019, part 2

Here's the second chunk of notes I took at Linux Plumbers Conference earlier this month. Part 1 covered the Distribution kernels track.

Kernel Debugging Tools BoF

Moderators: George Wilson and Serapheim Dimitropoulos from Delphix; Omar Sandoval from Facebook

Details: https://linuxplumbersconf.org/event/4/contributions/539/

Problem: ability to easily analyse failures in production (live system) or post-mortem (crash dump).

Debuggers need to:

  • Get consistent stack traces
  • Traverse and pretty-print memory structures
  • Easily introduce, extend, and combine commands

Most people present use crash; one mentioned crash-python (aka pycrash) and one uses kgdb.

Pain points:

  • Tools not keeping up with kernel changes
  • Poor scripting support in crash

crash-python is a Python layer on top of a gdb fork. Uses libkdumpfile to decode compressed crash-dumps.

drgn (aka Dragon) is a debugger-as-a-library. Excels in introspection of live systems and crash-dumps, and covers both kernel and user-space. It can be extended through Python. As a library it can be imported and used from the Python REPL.
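
As a rough sketch of how drgn gets invoked (the flags here are assumed from its usual command line, not stated in the talk):

sudo drgn            # Python REPL against the running kernel
drgn -c vmcore       # the same against a crash dump (path illustrative)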

sdb is Delphix's front-end to drgn, providing a more shell-like interactive interface. Example of syntax:

> modules | filter obj.refcnt.counter > 10 | member name

Currently it doesn't always have good type information for memory. A raw virtual address can be typed using the "cast" command in a pipeline. Hoping that BTF will allow doing better.

Allows defining pretty-print functions, though it appears these have to be explicitly invoked.

Answering tough questions:

  • Can I see any stacks with a specific function in? (bpftrace can do that on a live system, but there's no similar facility for crash dumps.)
  • What I/O is currently being issued?
  • Which files are currently being written?

Some discussion around the fact that drgn has a lot of code that's dependent on kernel version, as internal structures change. How can it be kept in sync with the kernel? Could some of that code be moved into the kernel tree?

Omar (I think) said that his approach was to make drgn support multiple versions of structure definitions.

Q: How does this scale to the many different kernel branches that are used in different distributions and different hardware platforms?

A: drgn will pick up BTF structure definitions. When BTF is available the code only needs to handle addition/removal of members it accesses.

Brendan Gregg made a plea to distro maintainers to enable BTF. (CONFIG_DEBUG_INFO_BTF).
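
A quick way to check whether a given kernel was built with it (the config file location varies by distribution):

grep CONFIG_DEBUG_INFO_BTF /boot/config-$(uname -r)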

Wayland BoF

Moderator: Hans de Goede of Red Hat

Details: https://linuxplumbersconf.org/event/4/contributions/533/

Pain points and missing pieces with Wayland, or specifically GNOME Shell:

  • GNOME Shell is slower
  • Synergy doesn't work(?) - needs to be in the compositor
  • With Nvidia proprietary driver, mutter and native Wayland clients get GPU acceleration but X clients don't
  • No equivalent to ssh -X. Pipewire goes some way to the solution. The whole desktop can be remoted over RDP which can be tunnelled over SSH.
  • No remote login protocol like XDMCP
  • No Xvfb equivalent
  • Various X utilities that grab hot-keys don't have equivalents for Wayland
  • Not sure if all X's video acceleration features are implemented. Colour format conversion and hardware scaling are implemented.
  • Pointer movement becomes sluggish after a while (maybe related to GC in GNOME Shell?)
  • Performance, in general. GNOME Shell currently has to work as both a Wayland server and an X compositor, which limits the ability to optimise for Wayland.

IoT from the point of view of view of a generic and enterprise distribution

Speaker: Peter Robinson of Red Hat

Details: https://linuxplumbersconf.org/event/4/contributions/439/

The good

Can now use u-boot with UEFI support on most Arm hardware. Much easier to use a common kernel on multiple hardware platforms, and UEFI boot can be assumed.

The bad

"Enterprise" and "industrial" IoT is not a Raspberry Pi. Problems result from a lot of user-space assuming the world is an RPi.

Is bluez still maintained? No user-space releases for 15 months! Upstream not convinced this is a problem, but distributions now out of synch as they have to choose between last release and arbitrary git snapshot.

Wi-fi and Bluetooth firmware fixes (including security fixes) missing from linux-firmware.git. RPi Foundation has improved Bluetooth firmware for the chip they use but no-one else can redistribute it.

Lots of user-space uses /sys/class/gpio, which is now deprecated and can be disabled in kconfig. libgpiod would abstract this, but has poor documentation. Most other GPIO libraries don't work with the new GPIO UAPI.
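
For reference, libgpiod also ships command-line tools that replace the sysfs interface; a minimal session might look like this (chip and line numbers are illustrative):

gpiodetect              # list the GPIO chips on the system
gpioinfo gpiochip0      # show the lines of one chip
gpioset gpiochip0 17=1  # drive line 17 high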

Similar issues with IIO - a lot of user-space doesn't use it but uses user-space drivers banging GPIOs etc. libiio exists but again has poor documentation.

For some drivers, even newly added drivers, the firmware has not been added to linux-firmware.git. Isn't there a policy that it should be? It seems to be an unwritten rule at present.

Toolchain track

Etherpad: https://etherpad.net/p/LPC2019_TC/timeslider#5767

Security feature parity between GCC and Clang

Speaker: Kees Cook of Google

Details: https://linuxplumbersconf.org/event/4/contributions/398/

LWN article: https://lwn.net/Articles/798913/

Analyzing changes to the binary interface exposed by the Kernel to its modules

Speaker: Dodji Seketeli of Red Hat

Details: https://linuxplumbersconf.org/event/4/contributions/399/

Wrapping system calls in glibc

Speakers: Maciej Rozycki of WDC

Details: https://linuxplumbersconf.org/event/4/contributions/397/

LWN article: https://lwn.net/Articles/799331/

CryptogramFriday Squid Blogging: Piglet Squid

Another piglet squid video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianBernhard R. Link: Firefox 69 dropped support for <keygen>

With version 69, firefox removed support for the <keygen> feature for easily deploying TLS client certificates.
It's kind of sad how used I've become to firefox giving me fewer and fewer reasons to use it...

Planet DebianEnrico Zini: Upgrading LineageOS 14 to 16

The LineageOS updater notified me that there will be no more updates for LineageOS 14, because now development on my phone happens on LineageOS 16, so I set aside some time and carefully followed the upgrade instructions.

I now have a phone with Lineageos 16, but the whole modem subsystem does not work.

Advice on #lineageos was that "the wiki instructions are often a bit generic.. offical thread often has the specific details".

Official thread is here, and the missing specific detail was "Make sure you had Samsung's Oreo firmware bootloader and modem before installing this.".

It looks like no firmware updates were ever installed since the stock Android that came with my phone ages ago. I can either wipe everything and install a stock Android to let it do the upgrade, then replace it with LineageOS, or try a firmware upgrade.

This link has instructions for firmware upgrades using haimdall, which is in Debian, instead of Odin, which is in Windows.

Finding firmwares is embarrassing. They only seem to be available from links on shady download sites, or commercial sites run by who knows whom. I verify sha256sums on LineageOS images, F-Droid has reproducible builds, but at the base of this wonderful stack there's going to be a blob downloaded off some forum on the internet.
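
(Verifying an image is just a matter of comparing hashes; the file name below is illustrative:)

sha256sum lineage-16.0-something.zip   # compare against the hash published on the download page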

In this case, this link points to some collection of firmware blobs.

I downloaded the pack and identified the ones for my phone, then unpacked the tar files and uncompressed the lz4 blobs.
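
The unpacking step was roughly the following (file names are illustrative):

tar xf BL_something.tar.md5      # the .tar.md5 files are plain tarballs with a checksum appended
lz4 -d sboot.bin.lz4 sboot.bin   # decompress each lz4 blob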

With heimdall, I identified the mapping from partition names to blob names:

heimdall print-pit --no-reboot

Then I did the flashing:

heimdall flash --resume --RADIO modem.bin --CM cm.bin --PARAM param.bin --BOOTLOADER sboot.bin

The first time flashing didn't work, and I got stuck in download mode. This explains how to get out of download mode (power + volume down for 10s).

Second attempt worked fine, and now I have a working phone again:

heimdall flash --RADIO modem.bin --CM cm.bin --PARAM param.bin --BOOTLOADER sboot.bin

CryptogramNew Biometrics

This article discusses new types of biometrics under development, including gait, scent, heartbeat, microbiome, and butt shape (no, really).

Worse Than FailureError'd: Full Stack Languages...and BEYOND!

"When travelling to outer space, don't forget your...Javascript code?" writes Rob S.

 

Pascal wrote, "If you ask me, I think Dr. Phil needs to hire a captioner that doens't have a stutter."

 

Tore F. writes, "If the Lenovo System Update tool was coded to expect an unexpected exception, does that mean that it was, in fact, actually expected?"

 

"Note to self: Never set the A/C to its lowest limit, or at least have a toilet and TP handy," writes Peter G.

 

"No matter how hard you try, Yodal, 82 - (-7) does not equal 22," Chris E. wrote.

 

Jiri G. writes, "100% service availability? Nah, you don't need that. Close enough is good enough."

 


,

Planet DebianLouis-Philippe Véronneau: Praise Be CUPS Driverless Printing

Last Tuesday, I finally got to start updating $work's many desktop computers to Debian Buster. I use Puppet to manage them remotely, so major upgrades basically mean reinstalling machines from scratch and running Puppet.

Over the years, the main upgrade hurdle has always been making our very large and very complicated printers work on Debian. Unsurprisingly, the blog posts I have written on that topic are very popular and get me a few 'thank you' emails per month.

I'm very happy to say, thanks to CUPS Driverless Printing (CUPS 2.2.2+), all those trials and tribulations are finally over. Printing on Buster just works. Yes yes, even color booklets printed on 11x17 paper, folded in 3 and stapled in the middle.

Xerox Altalink C8045 and Canon imageRUNNER ADVANCE C5550i

Although by default the Xerox Altalink C8045 comes with IPP Everywhere enabled, I wasn't able to print in color until I enabled AirPrint. I also had to update the printer to firmware version 101.002.008.274001 to make the folding and stapling features more stable.

As for the Canon imageRUNNER ADVANCE C5550i, it seems it doesn't support IPP Everywhere. After enabling AirPrint manually, everything worked perfectly.

Both printers now work wonderfully with all our computers, without the need to resort to strange (and broken) proprietary drivers or awful 32-bit libraries.
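
If discovery ever fails on a machine, a driverless queue can also be created by hand with CUPS' generic "everywhere" model (printer name and URI below are illustrative):

lpadmin -p xerox-c8045 -E -v "ipp://xerox.example.local/ipp/print" -m everywhere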

Note that if you run a firewall locally, you will need to open port 5353 UDP for machines to resolve .local addresses via mDNS. This had me stumped for a while.
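
With iptables, for instance, that boils down to something like this (a minimal sketch; adapt to whatever firewall tooling you use):

iptables -A INPUT -p udp --dport 5353 -j ACCEPT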

Praises

Packaging CUPS and all the related CUPS bits for Debian isn't an easy task. I'm so glad I don't have to touch that side of CUPS. Three cheers to Didier Raboud, Till Kamppeter and to the Debian Printing Team.

Many, many thanks to Brian Potkin for the work he did to document CUPS Driverless printing and AirPrint on the Debian wiki. If we ever meet, I definitely owe you a pint.

Finally, well, thanks to Apple. (I never thought I'd ever say that)


  1. That took me a few hours. Yes. 

Planet DebianThomas Lange: FAI 5.8.7 and new ISO images using Debian 10

The new FAI release 5.8.7 now supports apt keys in files called package_config/CLASS.gpg. Before we only supported .asc files. fai-mirror has a new option -V, which checks if variables are used in package_config/ and uses variable definitions from class/*.var.
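
As a sketch, shipping a binary-format key for a class called WEB would then look like this (assuming the default /srv/fai/config config space):

cp web-archive-key.gpg /srv/fai/config/package_config/WEB.gpg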

I've also created new ISO images, which now install Debian 10 by default. They are available from

https://fai-project.org/fai-cd

If you need a newer kernel, checkout the FAI.me service which can also build ISO images using the kernel from backports which currently is 5.2.

FAI

CryptogramRevisiting Software Vulnerabilities in the Boeing 787

I previously blogged about a Black Hat talk that disclosed security vulnerabilities in the Boeing 787 software. Ben Rothke concludes that the vulnerabilities are real, but not practical.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 212.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned (plus 18 extra hours from July), but did nothing, thus he is carrying over 26h to September.
  • Ben Hutchings did 20 hours (out of 20 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 31 hours (out of 21.75h assigned plus 14.5 extra hours from July), thus he is carrying over 5.25h to September.
  • Hugo Lefeuvre did 30.5 hours (out of 21.75 hours allocated, plus 8.75 extra hours from July).
  • Jonas Meurer did 0.5 hours (out of 10, thus carrying 9.5h to September).
  • Markus Koschany did 21.75 hours (out of 21.75 hours allocated).
  • Mike Gabriel did 24 hours (out of 21.75 hours allocated plus 10 extra hours from July, thus carrying over 7.75h to September).
  • Ola Lundqvist got 8h assigned (plus 8 extra hours from July), but did nothing and gave back 8h, thus he is carrying over 8h to September.
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 21.75 hours (out of 21.75 hours allocated).
  • Thorsten Alteholz did 21.75 hours (out of 21.75 hours allocated).

Evolution of the situation

August was a more or less normal month, still a bit affected by summer in the area where most contributors live: one contributor is still taking a break (thus we only had 13, not 14), two contributors were distracted by summer events and another one is still in training.

It’s been a while since we last welcomed a new LTS sponsor. Nothing worrisome at this point as only a few sponsors are stopping, but after 5 years some have moved on, so it would be nice to keep finding new ones as well. We are still at 215 hours sponsored per month.

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 39 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Worse Than FailureRedesign By Committee

Sample web form

Carl was excited to join his first "real" company and immerse himself in the World of Business. The fresh-faced IT Analyst was immediately assigned to a "cross-strata implementation team" tasked with redesigning the RMA form completed by customers when they returned goods. The current form had been flagged for various weaknesses and omissions.

The project's kickoff meeting ran for three hours, with twelve team members in attendance representing departments throughout the company. By the end of the meeting, the problem had been defined, and everyone had homework: to report to the next team meeting with their own interpretations of what the new form should look like.

Each team member dutifully came back with at least one version of the form. The next meeting consisted of Norman, the QA Manager, critiquing each prospective form as it was presented to the group. Without fail, he'd shake his head with a furrowed brow, muttering "No, no ..."

This proceeded, form after form, until Terry, an Accounts Junior, presented his version. When Norman expressed displeasure, Terry dared to ask, "Well? What's wrong with it?"

Norman gestured to the list of required criteria in his hands. "You've missed this piece of information, and that's probably the most important item we need to capture."

Terry frowned. "But, Norman, your form doesn't have that information on it, either."

Upon looking down at his own form, Norman realized Terry was correct. He rallied to save his dignity. "Ah, yes, but, you see, I know that it's missing."

Stupefied, Terry backed down.

Carl cycled through bafflement, boredom, and agony of the soul as the meeting dragged on. At one point, Finance Manager Kevin picked up yet another version of the form and asked, "What about this one, then?"

Jason the Ops Manager skimmed through it, ticking off items against the list of criteria. "Yup, yup, yup, yup ... yes, this is it! I think we've cracked it!" he exclaimed.

Norman peered at the form in Jason's hands. "That's the form we're currently using." The very form they needed to replace.

Hours upon hours of combined effort had thus far resulted in no progress whatsoever. Carl glanced at the conference room's wall clock with its stubbornly slow hands, wondering if a camera hidden behind it were recording his reaction for a YouTube prank channel. But, no. He was simply immersed in the World of Business.


Planet DebianLouis-Philippe Véronneau: Archiving 20 years of online content

Last Spring at work I was tasked with archiving all of the digital content made by Association pour une Solidarité Syndicale Étudiante (ASSÉ), a student union federation that was scheduled to shut down six months later.

Now that I've done it, I have to say it was quite a demanding task: ASSÉ was founded in 2001 and had neither proper digital archiving policies nor good web practices in general.

The goal was not only archiving those web resources, but also making sure they remained easily accessible online. I thus decided to create a meta site regrouping and presenting all of them.

All in all, I archived:

  • a Facebook page
  • a Twitter account
  • a YouTube account
  • multiple ephemeral websites
  • old handcrafted PHP4 websites that I had to partially re-write
  • a few crummy Wordpress sites
  • 2 old phpBB forums using PHP3
  • a large Mailman2 mailing list
  • a large Yahoo! Group mailing list

Here are the three biggest challenges I faced during this project:

The Twitter API has stupid limitations

The Twitter API won't give you more than an account's last 3000 posts. When you need to automate the retrieval of more than 5500 tweets, you know you're entering a world of pain.

Long story short, I ended up writing this crummy shell script to parse the HTML, statify all the Twitter links and push the resulting code to a Wordpress site using Ozh' Tweet Archive Theme. The URL list was generated using the ArchiveTeam's web crawler.

Of course, once done I made the Wordpress into a static website. I personally think the result looks purty.

Here's the shell script I wrote - displayed here for archival purposes only. Let's pray I don't ever have to do this again. Please don't run this, as it might delete your grandma.

#!/bin/bash
# Crummy importer: read a file of tweet URLs ($1), scrape each
# mobile.twitter.com page and create a Wordpress post from it.
while read -r line
do
  # get the ID
  id=$(echo "$line" | sed 's@https://mobile.twitter.com/.\+/status/@@')
  # download the whole HTML page
  html=$(curl -s "$line")
  # get the date
  date=$(echo "$html" | grep -A 1 '<div class="metadata">' | grep -o "[0-9].\+20[0-9][0-9]" | sed 's/ - //' | date -f - +"%F %k:%M:%S")
  # extract the tweet
  tweet=$(echo "$html" | grep '<div class="dir-ltr" dir="ltr">')
  # strip the HTML tags for the title
  title=$(echo "$tweet" | sed -e 's/<[^>]*>//g')
  # get a CSV list of tags
  tags=$(echo "$tweet" | grep -io "\#[a-z]\+" | sed ':a;N;$!ba;s/\n/,/g')
  # get a CSV list of links
  links=$(echo "$tweet" | grep -Po "title=\"http.*?>" | sed 's/title=\"//; s/">//' | sed ':a;N;$!ba;s/\n/,/g')
  # get a CSV list of usernames
  usernames=$(echo "$tweet" | grep -Po ">\@.*?<" | sed 's/>//; s/<//' | sed ':a;N;$!ba;s/\n/,/g')
  image_link=$(echo "$html" | grep "<img src=\"https://pbs.twimg.com/media/" | sed 's/:small//')

  # remove twitter cruft
  tweet=$(echo "$tweet" | sed 's/<div class="dir-ltr" dir="ltr"> /<p>/' | perl -pe 's@<a href="/hashtag.*?dir="ltr">@<span class="hashtag hashtag_local">@g')

  # expand links
  if [ -n "$links" ]
  then
    IFS=',' read -ra link <<< "$links"
    for i in "${link[@]}"
    do
      tweet=$(echo "$tweet" | perl -pe "s@<a href=\"*.+?rel=\"nofollow noopener\"dir=\"ltr\"data-*.+?</a>@<a href='$i'>$i</a>@")
    done
  fi

  # replace hashtags by links
  if [ -n "$tags" ]
  then
    IFS=',' read -ra tag <<< "$tags"
    for i in "${tag[@]}"
    do
      plain=$(echo "$i" | sed -e 's/#//')
      tweet=$(echo "$tweet" | sed -e "s@$i@#<a href=\"https://oiseau.asse-solidarite.qc.ca/index.php/tag/$plain\">$plain@")
    done
  fi

  # replace usernames by links
  if [ -n "$usernames" ]
  then
    IFS=',' read -ra username <<< "$usernames"
    for i in "${username[@]}"
    do
      plain=$(echo "$i" | sed -e 's/\@//')
      tweet=$(echo "$tweet" | perl -pe "s@<a href=\"/$plain.*?</a>@<span class=\"username username_linked\">\@<a href=\"https://twitter.com/$plain\">$plain</a></span>@i")
    done
  fi

  # replace images
  tweet=$(echo "$tweet" | perl -pe "s@<a href=\"http://t.co*.+?data-pre-embedded*.+?</a>@<span class=\"embed_image embed_image_yes\">$image_link</span>@")

  # create the post; note --tags_input takes the CSV list in $tags
  echo "$tweet" | sudo -u twitter wp-cli post create - --post_title="$title" --post_status="publish" --tags_input="$tags" --post_date="$date" > tmp
  post_id=$(grep -Po "[0-9]+" tmp | head -n 1)
  sudo -u twitter wp-cli post meta add "$post_id" ozh_ta_id "$id"
  echo "$post_id created"
  rm tmp
done < "$1"

Does anyone ever update phpBBs?

What's worse than a phpBB forum? Two phpBB 2.0.x forums using PHP3 and last updated in 2006.

I had to resort to unholy methods just to get those things running again so that I could wget the crap out of them.

By the way, the magic wget command to grab a whole website looks like this:

wget --mirror -e robots=off --page-requisites --adjust-extension -nv --base=./ --convert-links --directory-prefix=./ -H -D www.foo.org,foo.org http://www.foo.org/

Depending on the website you are trying to archive, you might have to play with other obscure parameters. I sure had to. All the credit for that command goes to Koumbit's wiki page on the dark arts of website statification.
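
Broken out, here is what each of those flags does (my quick gloss; double-check the wget manual before trusting it):

# --mirror                recursive download with timestamping, infinite depth
# -e robots=off           ignore robots.txt exclusions
# --page-requisites       also fetch the CSS, JS and images needed to render pages
# --adjust-extension      append .html and friends to files that lack an extension
# -nv                     less verbose output
# --base=./               base URL used to resolve relative links
# --convert-links         rewrite links so the local copy is browsable offline
# --directory-prefix=./   save everything under the current directory
# -H -D host1,host2       span hosts, but only to the listed domains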

Archiving mailing lists

mailman2 is pretty great. You can get a dump of an email list pretty easily, and mailman3's web frontend, the lovely hyperkitty, is, well, lovely. Importing a legacy mailman2 mbox went without a hitch thanks to the awesome hyperkitty_import importer. Kudos to the Debian Mailman Team for packaging this in Debian for us.
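
For reference, the import boils down to a single Django management command. A minimal sketch, assuming a Debian-style mailman3-web install; the list address and mbox path are placeholders, and the exact wrapper and flags vary between versions:

# hyperkitty_import is a Django management command; run it as the
# mailman3-web user against the legacy mailman2 mbox
sudo -u www-data django-admin hyperkitty_import \
  --pythonpath /usr/share/mailman3-web --settings settings \
  -l mylist@lists.example.org /var/lib/mailman/archives/private/mylist.mbox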

But what about cramming a Yahoo! Group mailing list into hyperkitty? I wouldn't recommend it. After way too many hours spent battling character encoding errors, I just decided people who wanted to read obscure emails from 2003 would have to deal with broken accents and shit. But hey, it kinda works!

Oh, and yes, archiving a Yahoo! Group with an old borken Perl script wasn't an easy task. Hell, I kept getting blacklisted by Yahoo! for scraping too much data to their liking. I ended up patching together the results of multiple runs over a few weeks to get the full mbox and attachments.

By the way, if anyone knows how to tell hyperkitty to stop at a certain year (i.e. not display links for 2019 when the list stopped in 2006), please ping me.

,

Krebs on SecurityBefore He Spammed You, this Sly Prince Stalked Your Mailbox

A reader forwarded what he briefly imagined might be a bold, if potentially costly, innovation on the old Nigerian prince scam that asks for help squirreling away millions in unclaimed fortune: It was sent via the U.S. Postal Service, with a postmarked stamp and everything.

In truth these old fashioned “advance fee” or “419” scams predate email and have circulated via postal mail in various forms and countries over the years.

The recent one pictured below asks for help in laundering some $11.6 million supposedly left behind by an important dead person with access to a secret stash of cash. Any suckers who bite are strung along for weeks while imaginary extortionists or crooked employees at various bureaucratic institutions demand licenses, bribes or other payments before disbursing any funds. Those funds never arrive, no matter how much money the sucker gives up.

This type of “advance fee” or “419” scam letter is common in spam, probably less so via USPS.

It’s easy to laugh at this letter, because it’s sometimes funny when scammers try so hard. But then again, maybe the joke’s on us because sending these scams via USPS makes them even more appealing to the people most vulnerable: Older individuals with access to cash but maybe not all their marbles. 

Sure, the lure costs $.55 up front. But a handful of successful responses to thousands of mailers could net fortunes for these guys phishing it old school.

The losses from these types of scams are sometimes hard to track because so many go unreported. But they are often perpetrated by the same people involved in romance scams online and in so-called “business email compromise” or BEC fraud, wherein the scammers try to spoof the boss at a major company in a bid to get wire payment for an “urgent” (read: fraudulent) invoice.

These scam letters are sometimes called 419 scams in reference to the penal code for dealing with such crimes in Nigeria, a perennial source of 419 letter schemes. A recent bust of a Nigerian gang targeted by the FBI gives some perspective on the money-making abilities of a $10 million ring that was running these scams all day long.

Reportedly, in the first seven months of 2019 alone the FBI received nearly 14,000 complaints reporting BEC scams with a total loss of around $1.1 billion—a figure that nearly matches losses reported for all of 2018.

CryptogramCracking Forgotten Passwords

Expandpass is a string expansion program. It's "useful for cracking passwords you kinda-remember." You tell the program what you remember about the password and it tries related passwords.

I learned about it in this article about Phil Dougherty, who helps people recover lost cryptocurrency passwords (mostly Ethereum) for a cut of the recovered value.

Worse Than FailureCodeSOD: You Can Take Care

Tiberrias sends us some code that, on its face, without any context, doesn’t look bad.

var conditionId = _monitorConditionManagement.GetActiveConditionCountByClient(clientIdentityNumber);

_monitorConditionManagement.StopCondition(conditionId);

The purpose of this code is to look up a condition ID for a client, and then clear that condition from the client by StopConditioning that ID. Read the code closely and the problem becomes obvious: GetActiveConditionCountByClient. Count. This doesn’t return a condition ID, it returns the count of the number of active conditions. So this is a stupid, simple mistake, an easy error to make, and an easy error to catch: this code simply doesn’t work. So what’s the WTF?

This code was written by a developer who either made a simple mistake or just didn’t care. But then it went through code review, and the code reviewer either missed it or just didn’t care. It’s okay, though, because there are unit tests. There’s a rich, robust unit test suite. But in this case, the GetActiveConditionCountByClient and the StopCondition methods are just mocks, and the person who wrote the unit test didn’t check that the mocks were called as expected, because they just didn’t care.

Still, there’s an entire QA team between this code and production, and since this code definitely can’t work, they’re going to catch the bug, right? They might- if they cared. But this code passed QA, and got released into production.

The users might notice, but the StopCondition method is so nice that, if given an invalid ID, it just logs the error and trucks on. The users think their action worked. But hey, there’s a log file, right? There’s an operations team which monitors the logs and should notice a lot of errors suddenly appearing. They would just have to care, and... well, guess what.

This bug only got discovered and fixed because Tiberrias noticed it while scrolling through the class to fix an entirely unrelated bug.

“You really shouldn’t fix two unrelated bugs in the same commit,” the code reviewer said when Tiberrias submitted it.

There was only one way to reply. “I don’t care.”


,

Planet DebianShirish Agarwal: Dissonance, Rakhigarhi, one-party system

I heard the term dissonance, or cognitive dissonance, about a year, year and a half back. This is the strategy that BJP used to win in 2014 and in 2019. How big a part the EVMs played and how big a part the cognitive dissonance played is hard to tell, but both have done irreparable damage not just to the economy, social fabric, institutions and the constitution, but to the idea of India itself. India, a country with thousands of different cultures and castes, is for one reason or another being pushed into a hate which seems to have no end. But before going into the politics, let me share one beautiful video which got played on social media and was also quickly forgotten.

Dhol Tasha in Ganesh Chaturthi

The gentleman playing that drum is Mr. Greg Ellis. We had Ganesh Chaturthi just a while back, the most laid-back one I have seen in my whole 40+ years. There were other dhol tasha performances as well, for example these two videos

and

I do like the first one though, which represents what is and used to be called jugalbandi. This is where two artists have a conversation with each other while playing their instruments, for example Zakir Hussain Sahab and Shivamani, among many others. But that perhaps is a story for another day.

The Science and Politics of the 4500 year old Rakhigarhi woman.

This story broke a couple of weeks back. The best person to explain it in plain English would perhaps be Shekhar Gupta. From the discussions you can see it is a much heated and contested topic.

The study that was done can be found at https://reich.hms.harvard.edu/sites/reich.hms.harvard.edu/files/inline-files/2019_Cell_ShindeNarasimhan_Rakhigarhi.pdf and https://reich.hms.harvard.edu/sites/reich.hms.harvard.edu/files/inline-files/2019_Science_NarasimhanPatterson_CentralSouthAsia.pdf . In fact you can see a whole series of papers on the same topic at https://reich.hms.harvard.edu/publications . The Indian nationalist narrative is that the Aryan migration did not take place, while others say the opposite. One can get a bit more background on the story in the Indian Express article which came out a few months before the whole controversy erupted.

One of the best things that has happened is the ability to get DNA from a 4500-year-old specimen. This is Priya Moorjani's github repo, which helped them get the genome data. I am sure some of the Debian folks may be interested in this kind of forensic anthropology. I know I had shared bits of it previously as well, but it didn't sit well with me as I hadn't shared the whole narrative behind it and the positions that people have taken.

The Economy

That the economy in India is in bad shape is not a hidden thing anymore, but India Inc. has been keeping quiet. Apart from Ratan Tata and Mahindra, who quipped about the R-word (Recession), only two ladies seem to be able to call a spade a spade, and both of them did so on NDTV. Both these ladies are wealth-creators and have made India proud, one of them even benefiting from the rupee crash, but even then their hearts are in the right place.

The madame above is Ms. Kiran Majumdar Shaw, a billionaire who makes her money selling generic active pharmaceutical ingredients and other bio products to advanced countries. Businessworld had done stories on pharmaceutical development and engineering about 6-7 years back, when it was an emerging industry and a lot of startups had come into the field. It is a pretty competitive field, with about 60-odd companies, and Biocon itself has shed quite a bit of its price on the stock exchange, and still she has the guts to criticize the Modi Govt.

The other is Shobhana Kamineni of Apollo Hospitals, which runs superspeciality hospitals. Both ladies are in the medicine field, both are in the high-risk and high-reward game, and still they are able to speak their minds. What was shocking for many of us is the numbers she shared of people emigrating elsewhere in the last 5-odd years, which perhaps wasn't the case before. In fact, a couple of days back both the Indian Express and the Wire ran stories showing how millionaires and billionaires are moving out of the country. This hasn't happened in a day but over a 5-year period. While people are saying that the money is being spent on travel and education, it is in fact due to tax harassment and structural issues in the broader economy that wealth-creators are leaving. Of course, for the rest of us it is the Hindu-Muslim topics, but that is another story altogether.

The Hindu-Muslim polarization

When I talk to most of the Bhakts, or people who have a militant belief in the way BJP is going, one of the questions I ask is: who are these Muslims? Are they the ones who came from Arabia, or are they converted Muslims, Christians, Buddhists, Sikhs etc.? Most of them were and are our own people, who chose to convert to a religion because they saw upward mobility in that religion. So in a way the whole debate collapses. The saddest part is that whether it is Hindu-Muslim polarization or Hindu-Sikh polarization, it is always one brother beating the other. Also, studies showed that in the 2002 Gujarat riots the ones who took part were from the lowest strata of the society, because by killing the Muslims they got prestige and money in the society. While I'm sure I have shared the studies before on the site, if I have not, please let me know and I will update the post accordingly. It is in my bookmarks somewhere 🙂

Chandrayaan 2 (Update)

The update on Chandrayaan 2 is that the ISRO team shared that in the last 2 km, the thrusters which should have been braking against gravity so that the lander would land softly did the opposite, i.e. instead of slowing the descent towards the moon/regolith they added speed, which may have resulted in the crash. In a few days NASA may share some photographs of the lander.

The Single-party State

One of the newest discussions that has been thrown around is having a single party for the country. Now I dunno whether it is just to get us into another set of discussions or controversies, as they have been doing for the last so many years, just like Trump does every other day, or whether he means it. If this becomes a reality, then there will be nothing left. Even the judiciary has been keeping quiet on the habeas corpus petitions on the Kashmir issue. In fact Shah Faesal, who if memory serves me right came in on a BJP ticket, has withdrawn his habeas corpus writ as thousands of Kashmiris have no way to put up such a writ. Also, the Supreme Court's casual approach has baffled many of the legal luminaries in the field. At the end, I don't really know what to look out for. The economy doesn't seem like it will recover in the short-to-medium term, and with the Saudi fields hit, oil and petrol will climb more than ever before. Not a happy note to end on, but it is what it is.

Update 18/09/2019 – Newslaundry shared that ANI (a Govt. mouthpiece) has been told to alter the stories on the single-party state so they appear a bit different; some papers have still carried it in the original form. YouTube also has his statement on record, so he can't escape it.

Krebs on SecurityMan Who Hired Deadly Swatting Gets 15 Months

An Ohio teen who recruited a convicted serial “swatter” to fake a distress call that ended in the police shooting an innocent Kansas man in 2017 has been sentenced to 15 months in prison.

Image: FBI.gov

“Swatting” is a dangerous hoax that involves making false claims to emergency responders about phony hostage situations or bomb threats, with the intention of prompting a heavily-armed police response to the location of the claimed incident.

The tragic swatting hoax that unfolded on the night of Dec. 28, 2017 began with a dispute over a $1.50 wager in the online game “Call of Duty” between Shane M. Gaskill, a 19-year-old Wichita, Kansas resident, and Casey S. Viner, 18, from the Cincinnati, OH area.

Viner wanted to get back at Gaskill over the Call of Duty grudge, and so enlisted the help of another man — Tyler R. Barriss — a serial swatter in California known by the alias “SWAuTistic” who’d bragged of swatting hundreds of schools and dozens of private residences.

Chat transcripts presented by prosecutors showed Viner and Barriss both saying if Gaskill isn’t scared of getting swatted, he should give up his home address. But the address that Gaskill gave Viner to pass on to Barriss no longer belonged to him and was occupied by a new tenant.

Barriss’s fatal call to 911 emergency operators in Wichita was relayed from a local, non-emergency line. Barriss falsely claimed he was at the address provided by Viner, that he’d just shot his father in the head, was holding his mom and sister at gunpoint, and was thinking about burning down the home with everyone inside.

Wichita police quickly responded to the fake hostage report and surrounded the address given by Gaskill. Seconds later, 28-year-old Andrew Finch exited his mom’s home and was killed by a single shot from a Wichita police officer. Finch, a father of two, was no party to the gamers’ dispute and was simply in the wrong place at the wrong time.

“Swatting is not a prank, and it is no way to resolve disputes among gamers,” U.S. Attorney Stephen McAllister, said in a press statement. “Once again, I call upon gamers to self-police their community to ensure that the practice of swatting is ended once and for all.”

In chat records presented by prosecutors, Viner admitted to his role in the deadly swatting attack:

Defendant VINER: I literally said you’re gonna be swatted, and the guy who swatted him can easily say I convinced him or something when I said hey can you swat this guy and then gave him the address and he said yes and then said he’d do it for free because I said he doesn’t think anything will happen
Defendant VINER: How can I not worry when I googled what happens when you’re involved and it said a eu [sic] kid and a US person got 20 years in prison min
Defendant VINER: And he didn’t even give his address he gave a false address apparently
J.D.: You didn’t call the hoax in…
Defendant VINER: Does t [sic] even matter ?????? I was involved I asked him to do it in the first place
Defendant VINER: I gave him the address to do it, but then again so did the other guy he gave him the address to do it as well and said do it pull up etc

Barriss was sentenced earlier this year to 20 years in federal prison for his role in the fatal swatting attack.

Barriss also pleaded guilty to making hoax bomb threats in phone calls to the headquarters of the FBI and the Federal Communications Commission in Washington, D.C. In addition, he made bomb threat and swatting calls from Los Angeles to emergency numbers in Ohio, New Hampshire, Nevada, Massachusetts, Illinois, Utah, Virginia, Texas, Arizona, Missouri, Maine, Pennsylvania, New Mexico, New York, Michigan, Florida and Canada.

Prosecutors for the county that encompasses Wichita decided in April 2018 that the officer who fired the shot that killed Andrew Finch would not face charges, and would not be named because he wasn’t being charged with a crime.

Viner was sentenced after pleading guilty to one count each of conspiracy and obstructing justice, the US attorney’s office for Kansas said. CNN reports that Gaskill has been placed on deferred prosecution.

Viner’s obstruction charge stems from attempts to erase records of his communications with Barriss and the Wichita gamer, McAllister’s office said. In addition to his prison sentence, Viner was ordered to pay $2,500 in restitution and serve two years of supervised release.

TEDTransform: The talks of TED@DuPont

Hosts Briar Goldberg and David Biello open TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Transformation starts with the spark of something new. In a day of talks and performances about transformation, 16 speakers and performers explored exciting developments in science, technology and beyond — from the chemistry of everyday life to innovations in food, “smart” clothing, enzyme research and much more.

The event: TED@DuPont: Transform, hosted by TED’s David Biello and Briar Goldberg

When and where: Thursday, September 12, 2019, at The Fillmore in Philadelphia, PA

Music: Performances by Elliah Heifetz and Jane Bruce and Jeff Taylor, Matt Johnson and Jesske Hume

The talks in brief:

“The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them,” says chemist Cathy Mulzer. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Cathy Mulzer, chemist and tech shrinker

Big idea: You owe a big thank you to chemistry for all that technology in your pocket.

Why? Almost every component that goes into creating a superpowered device like a smartphone or tablet exists because of a chemist — not the Silicon Valley entrepreneurs that come to most people’s minds. Chemistry is the real hero in our technological lives, Mulzer says — building up and shrinking down everything from vivid display screens and sleek bodies to nano-sized circuitries and long-lasting batteries.

Quote of the talk: “The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them.”


Adam Garske, enzyme engineer

Big Idea: We can harness the power of new, scientifically modified enzymes to solve urgent problems across the world.

How? Enzymes are proteins that catalyze chemical reactions — turning milk into cheese, for example. Through a process called “directed evolution,” scientists can carefully edit and design the building blocks of enzymes for specific functions — to help treat diseases like diabetes, reduce CO2 in our laundry, break down plastics in the ocean and more. Enzyme evolution is already changing how we tackle health and environmental issues, Garske says, and there’s so much more ahead.

Quote of the talk: “With enzymes, we can edit what nature wrote — or write our own stories.”


Henna-Maria Uusitupa, bioscientist

Big idea: Our bodies host an entire ecosystem of microorganisms that we’ve been cultivating since we were babies. And as it turns out, the bacteria we acquire as infants help keep us healthier as adults. Henna-Maria Uusitupa wants to ensure that every baby grows a healthy microbiome.

How? Babies must acquire the right balance of microbes in their bodies, but they must also receive them at the correct stages of their lives. C-sections and disruptions in breastfeeding can throw a baby’s microbiome out of balance. With a carefully curated blend of probiotics and other chemicals, scientists are devising ways to restore harmony — and beneficial microbes — to young bodies.

Quote of the talk: “I want to contribute to the unfolding of a future in which each baby has an equal starting point to be programmed for life-long health.”


Leon Marchal, innovation director 

Big Idea: Animals account for 50 to 80 percent of antibiotic consumption worldwide — a major contributing factor to the growing threat of antimicrobial resistance. To combat this, farmers can adopt a number of practices — like balanced, antibiotic-free nutrition for animals — on their farms.

Why? The UN predicts that antimicrobial resistance will become our biggest killer by 2050. To prevent that from happening, Marchal is working to transform a massive global industry: animal feed. Antibiotics are used in animal feed to keep animals healthy and to grow them faster and bigger. They can be found in the most unlikely places — like the treats we give our pets. This constant, low-dose exposure could lead some animals to develop antibiotic-resistant bugs, which could cause wide-ranging health problems for animals and humans alike. The solution? Antibiotic-free production — and it all starts with better hygiene. This means taking care of animals’ good bacteria with balanced nutrition and alterations to the food they eat, to keep their microbiomes more resilient.

Quote of the talk: “We have the knowledge on how to produce meat, eggs and milk without or with very low amounts of antibiotics. This is a small price to pay to avoid a future in which bacterial infections again become our biggest killer.”


Physical organic chemist Tina Arrowood shares a simple, eco-friendly proposal to protect our freshwater resources from future pollution. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Tina Arrowood, physical organic chemist

Big idea: Human activity is a threat to freshwater rivers. We can transform that risk into an environmental and economic reward.

How? A simple, eco-friendly proposal to protect our precious freshwater resources from future pollution. We’ve had technology that purifies industrial wastewaters for the last 50 years. Arrowood suggests that we go a step further: as we clean our rivers, we can sell the salt byproduct as a primary resource — to de-ice roads and for other chemical processing — rather than using the tons of salt we currently mine from the earth.

Fun fact: If you were to compare the relative volume of ocean water to fresh river water on our planet, the former would be an Olympic-sized swimming pool — and the latter would be a one-gallon jug.


“Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?” asks designer Janani Bhaskar. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Janani Bhaskar, smart clothing designer

Big Idea: By designing “smart” clothing with durable technologies, we can better keep track of health and well-being.

How? Using screen-printing technology, we can design and attach biometric “smart stickers” to any piece of clothing. These stickers are super durable, Bhaskar says: they can withstand anything our clothing can, including workouts and laundry. They’re customizable, too — athletes can use them to track blood pressure and heart rate, healthcare providers can use them to remotely monitor vital signs, and expecting parents can use them to receive information about their baby’s growth. By making sure this technology is affordable and accessible, our clothing — the “original wearables” — can help all of us better understand our bodies and our health.

Quote of the talk: “Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?”


Camilla Andersen, neuroscientist and food scientist

Big idea: We can create tastier, healthier foods with insights from people’s brain activity.

How? Our conscious experience of food — how much we enjoy a cup of coffee or how sweet we find a cookie to be, for example — is heavily influenced by hidden biases. Andersen provides an example: after her husband started buying a fancy coffee brand, she conducted a blind taste test with two cups of coffee. Her husband described the first cup as cheap and bitter, and raved about the second — only to find out that the two were actually the same kind of coffee. The taste difference was the result of his bias for the new, fancy coffee — the very kind of bias that can leave food scientists in the dark when testing out new products. But there’s a workaround: brain scans that can access the raw, unfiltered, unconscious taste information that’s often lost in people’s conscious assessments. With this kind of information, Andersen says, we can create healthier foods without sacrificing taste — like creating a zero-calorie milkshake that tastes just like the original.

Fun fact: The five basic tastes are universally accepted: sweet, salty, sour, bitter and umami. But based on Andersen’s EEG experiments, there is evidence of a sixth basic taste: fat, which we may sense beyond its smell and texture.


“Science is an integral part of our everyday lives, and I think we’re only at the tip of the iceberg in terms of harnessing all of the knowledge we have to create a better world,” says enzyme scientist Vicky Huang. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Vicky Huang, enzyme scientist

Big idea: Enzymes are unfamiliar to many of us, but they’re far more important in our day-to-day lives than we realize — and they might help us unlock eco-friendly solutions to everything from food spoilage to household cleaning problems. 

How? We were all taught in high school that enzymes are a critical part of digestion and, because of that, they’re also ideal for household cleaning. But enzymes can do much more than remove stains from our clothes, break down burnt-on food in our dishwashers and keep our baguettes soft. As scientists are able to engineer better enzymes, we’ll be able to cook and clean with less energy, less waste and fewer costs to our environment.

Quote of the talk: “Everywhere in your homes, items you use every day have had a host of engineers and scientists like me working on them and improving them. Just one part of this everyday science is using enzymes to make things more effective, convenient and environmentally sustainable.”


Geert van der Kraan, microbe detective

Big Idea: We can use microbial life in oil fields to make oil production safer and cleaner.

How? Microbial life is often a problem in oil fields, corroding steel pipes and tanks and producing toxic chemicals like dihydrogen sulfide. We can transform this challenge into a solution by studying the clues these microbes leave behind. By tracking the presence and activity of these microbes, we can see deep within these underground fields, helping us create safer and smoother production processes.

Quote of the talk: “There are things we can learn from the microorganisms that call oil fields their homes, making oil field operations just a little cleaner. Who knows what other secrets they may hold for us?”


Lori Gottlieb, psychotherapist and author

Big idea: The stories we tell about our lives shape who we become. By editing our stories, we can transform our lives for the better.

How? When the stories we tell ourselves are incomplete, misleading or just plain wrong, we can get stuck. Think of a story you’re telling about your life that’s not serving you — maybe that everyone’s life is better than yours, that you’re an impostor, that you can’t trust people, that life would be better if only a certain someone would change. Try exploring this story from another point of view, or asking a friend if there’s an aspect of the story you might be leaving out. Rather than clinging to an old story that isn’t doing us any good, Gottlieb says, we can work to write the most beautiful story we can imagine, full of hard truths that lead to compassion and redemption — our own “personal Pulitzer Prize.” We get to choose what goes on the page in our minds that shapes our realities. So get out there and write your masterpiece.

Quote of the talk: “We talk a lot in our culture about ‘getting to know ourselves,’ but part of getting to know yourself is to unknow yourself: to let go of the one version of the story you’ve told yourself about who you are — so you can live your life, and not the story you’ve been telling yourself about your life.”


“I’m standing here before you because I have a vision for the future: one where technology keeps my daughter safe,” says tech evangelist Andrew Ho. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Andrew Ho, tech evangelist

Big idea: As technological devices become smaller, faster and cheaper, they make daily tasks more convenient. But they can also save lives.

How? For epilepsy patients like Andrew Ho’s daughter Hilarie, a typical day can bring dangerous — or even fatal — challenges. Medical devices currently under development could reduce the risk of seizures, but they’re bulky and fraught with risk. The more quickly developers can improve the speed and portability of these devices (and other medical technologies), the sooner we can help people with previously unmanageable diseases live normal lives.

Quote of the talk: “Advances in technology are making it possible for people with different kinds of challenges and problems to lead normal lives. No longer will they feel isolated and marginalized. No longer will they live in the shadows, afraid, ashamed, humiliated and excluded. And when that happens, our world will be a much more diverse and inclusive place, a better place for all of us to live.”


“Learning from our mistakes is essential to improvement in many areas of our lives, so why not be intentional about it in our most risk-filled activity?” asks engineer Ed Paxton. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

Ed Paxton, aircraft engineer and safety expert

Big idea: Many people fear flying but think nothing of driving their cars every day. Statistically, though, driving is far more dangerous than flying — and flying is safe in part because of the common-sense principles pilots use to govern their behavior. Could these principles help us be safer on the road?

How? There’s a lot of talk about how autonomous vehicles will make traffic safer in the future. Ed Paxton shares three principles that can reduce accidents right now: “positive paranoia” (anticipating possible hazards or mishaps without anxiety), allowing feedback from passengers who might see things you don’t and learning from your mistakes (near-misses caused by driving while tired, for example).

Quote of the talk:  “Driving your car is probably the most dangerous activity that most of you do … it’s almost certain you know someone who’s been seriously injured or lost their life out on the road … Over the last ten years, seven billion people have boarded domestic airline flights, and there’s been just one fatality.”


Jennifer Vail, tribologist

Big idea: Complex systems lose much of their energy to friction; the more energy they lose, the more power we consume to keep them running. Tribology — or the study of friction and things that rub together — could unlock massive energy savings by reducing wear and alleviating friction in cars, wind turbines, motors and engines.

How? By studying the different ways surfaces rub together, and engineering those surfaces to create more or less friction, tribologists can tweak a surprising range of physical products, from dog food that cleans your pet’s teeth to cars that use less gas; from food that feels more appetizing in our mouth to fossil fuel turbines that waste less power. Some of these changes could have significant impacts on how much energy we consume.

Quote of the talk: “I have to admit that it’s a lot of fun when people ask me what I do for my job, because I tell them: ‘I literally rub things together.'”

Planet DebianBits from Debian: New Debian Developers and Maintainers (July and August 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Keng-Yu Lin (kengyu)
  • Judit Foglszinger (urbec)

The following contributors were added as Debian Maintainers in the last two months:

  • Hans van Kranenburg
  • Scarlett Moore

Congratulations!

Planet DebianAntoine Beaupré: FSF resignations

I have been hesitant about renewing my membership to the Free Software Foundation for a while, but now I never want to deal with the FSF until Richard Stallman, president and founder of the free software movement, resigns. So, like many people and organizations, I have written this letter to cancel my membership. (Update: RMS resigned before I even had time to send this letter, but I publish it here to share my part of this story.)

My encounters with a former hero

I had the (mis)fortune of meeting rms in person a few times in my life. The first time was at an event we organized for his divine visit to Montreal in 2005. I couldn't attend the event myself, but I had the "privilege" of having dinner with rms later during the week. Richard completely shattered any illusion I had about him as a person. He was arrogant, full of himself, and totally uninterested in the multitude of young hackers he was meeting in his many travels, apart from, of course, arguing with them about proper wording and technicalities. Even though we brought him to the fanciest vegetarian restaurant in town, he got upset because the restaurant was trying to make "fake meat" meals. Somehow my hero, who wrote the GNU manifesto that inspired me to make free software a life goal, had spoiled a delicious meal by being such an ungrateful guest. I would learn later that Stallman has rock star level requirements, with "vegetarian meals served just so" being only one exception out of many. (I don't mind vegetarians of course: I've been a vegetarian for more than 20 years now, but I will never refuse vegetarian food given to me.)

The second time was less frustrating: it was in 2006 during the launch of the GPLv3 discussion draft, an ambitious project to include the community in the rewrite of the GPLv2. Even though I was deeply interested in the legal implications of the changes, everything went a bit over my head and I felt left out of a process that was supposedly designed to include legal geeks like me. At best, I was able to assist Stallman's assistant as she skidded over icy Boston sidewalks with a stereotypical (and maybe a little macho, I must admit) Canadian winter assurance. At worst, I burned liters of fuel to drive me and some colleagues over the border to see idols speak on a stage.

Finally, I somehow got tangled up with rms in a hallway conversation about open hardware and wireless chipsets at LibrePlanet 2017, the FSF's yearly conference. I forget the exact details, but we were debating whether legislation that forbids certain wireless chipsets to be open was legitimate.

(For some reason, rms has ambiguous opinions about "hardware freedom" and sees a distinction between software that runs on a computer (as "in the CPU") and software that is embedded in the hardware, etched into electronic circuits. The fact that this is a continuum that has various in-between incarnations ("firmware", ASIC, FPGA) seems to escape his analysis. But that is beside the point here.)

We "debated" this for a while, but for people who don't know, debating with rms is a little bit like talking with a three year old: they have their deeply rooted opinion, they might recognize you have one as well (if your lucky), but they will generally ignore whatever it is you non-sensical adult are saying because it's incomprehensible anyways. With a three year old, it's kind of hilarious (until they spill an bottle full of vanilla on the floor), but with an adult, it's kind of aggravating and makes you feel like an idiot for even trying.

I mention this anecdote because it's a good example of how Stallman doesn't think rules apply to him. Simple, informal rules like listening to people you're talking to seem like basic courtesy, but rms is above such mundane things. If this was just a hallway conversation, I wouldn't mind that much: after all, I don't need to talk to Richard Stallman. But at LibrePlanet (and in fact anywhere), he believes it is within his prerogative to interrupt any discussion or talk around him. I was troubled by the FSF's silence on Eric Schultz's request for safety at LibrePlanet: while I heard the FSF privately reached out to Eric, nothing seemed to have been done to curb Stallman's attitude in public. This is the reason why I haven't returned to Boston for LibrePlanet since then, even though I have dear friends who live there and were deeply involved in the organization.

The final straw before this week's disclosures was an event in Quebec City where Stallman was speaking at a conference. A friend of mine asked a question involving his daughter as an example user. Stallman responded to the question by asking my friend if he could meet his (underage) daughter, with obvious obscene undertones. Everyone took this as a joke but, in retrospect, it was just horrible, and I came to conclude that Stallman was now a liability to the free software movement. I just didn't know what to do back then. I wish I had done something.

Why I am resigning from the FSF

Those events at LibrePlanet were the first reason why I hadn't renewed my membership yet. But now I want to formally cancel my membership with the FSF because its president went beyond his usual sexism and his weird pedophilia justifications from the past. I first treated those as an abhorrent eccentricity or at best an unfortunate intellectual posture, but rms has gone way beyond this position now. Now rms has joined the ranks of rape apologists in the Linux kernel development community, an inexcusable position in our community, which already struggles too much with issues of inclusion, respect, and just being nice with each other. I am not going to go into details that are better described by this courageous person, but needless to say this kind of behavior is inexcusable from anyone, and particularly from a historical leader. Stallman did respond to the accusations, but far from issuing an apology, he said his statements were "mischaracterised"; something that looks to me like a sad caricature.

I do not want to have anything to do with the FSF anymore. I don't know if they would be able to function without Stallman, and frankly, at this point, I don't care: they have let this go on for too long. I know how much rms contributed to the free software movement: he wrote most of Emacs, GCC and large parts of the GNU system so many people use on their desktops. I am grateful for that work, but that was a long time ago and this is now. As others have said, we don't need to replace rms. We need a world where such leaders are not necessary, because rock stars too easily become abusers.

Stallman is just the latest: our community is filled with obnoxious leaders like this. It seems our community leaders are (among other things) either assholes, libertarian gun freaks, or pedophilia apologists and sexists. We tolerate their abuse because we somehow believe they are technically exceptional. They aren't: they're just hard-working and privileged. But even if they were geniuses, as selamie says:

For a moment, let’s assume that someone like Stallman is truly a genius. Truly, uniquely brilliant. If that type of person keeps tens or even hundreds of highly intelligent but not ‘genius’ people out of science and technology, then they are hindering our progress despite the brilliance.

Or, as Banksy says:

We don't need any more heroes.

We just need someone to take out recycling.

I wish Stallman would just retire already. He's done enough good work for a lifetime; now he's bound to just do more damage.

Update: Richard Stallman resigned from the FSF and from MIT ("due to pressure on MIT and me"), still dodging responsibility and characterizing the problem as "a series of misunderstandings and mischaracterizations". Obviously, this man cannot be reformed and we need to move on. Those events happened before I even had time to actually send this letter to the FSF, so I guess I might renew my membership after all. I'll hold off until LibrePlanet, however; we'll see what happens there... In the meantime, I'll see how I can help my friends left at the FSF, because they must be living through hell now.

Planet DebianMolly de Blanc: Thinkers

Free and open source software, ethical technology, and digital autonomy have a number of great thinkers, inspiring leaders, and hard working organizations. I see two discussions occurring now that I feel the need to address: What will we do next? Who will our new great leader be?

The thing is, we don’t need to do something new next, and we don’t need to find a new leader.

Organizations and individuals have been doing amazing work in our sphere for more than thirty years. We only need to look at the works of groups like Public Labs, OpenStreetMap, and Wikimedia to see where the future of our work lies: applying the principles of user freedom to create demonstrable change, build equity, and fight for justice. I am positively inspired by the GNOME community and their dedication to building software for people in every country, of every ability, and of every need. Outreachy and projects and companies that participate in Outreachy internships are working hard to build the future of community that we want to see.

Deb Nicholson recently reminded me that we cannot build a principled future where people are excluded from the process of building it. She also pointed out that once we have a techno-utopia, it will include everyone, because it needs to. This utopia is built on ideas, but it is also built by plumbers — by people doing work on the ground with those ideas.

Deb Nicholson is another inspiration to me. I’ve been lucky enough to know her since 2010, when she graciously began to mentor me. I now consider her both a mentor and a dear friend. Her ideas are innovative, her principles hard, and her vision wide.

Deb is one of the many people who have helped and continue to help shape my ideas and teach me things. Allison Randall, Asheesh Laroia, Christopher Lemmer-Webber, Daniel Khan Gilmore, Elana Hashman, Gabriella Coleman, Jeffrey Warren, Karen Sandler, Karl Fogel, Stefano Zacchiroli — these are just a few of the individuals who have been necessary figures in my life.

We don’t need to find new leaders and thinkers because they’re already here. They’ve been here, thinking, writing, speaking, and doing for years.

What we need to do is listen to their voices.

As I see people begin to discuss the next president of the Free Software Foundation, they do so in a context of asking who will be leading the free software movement. The free software movement is more than the FSF and it’s more than any given individual. We don’t need to go in search of the next leader, because there are leaders who work every day not just for our digital rights, but for a better world. We don’t need to define a movement by one man, nor should we do so. We instead need to look around us and listen to what is already happening.

Worse Than FailureA Learning Experience

Jakob M. had the great pleasure of working as a System Administrator in a German school district. At times it was rewarding work. Most of the time it involved replacing keyboard keys mischievous children stole and scraping gum off of monitor screens. It wasn't always the students who gave him trouble, though.

Frau Fritzenberger was a cranky old math teacher at a Hauptschule near Frankfurt. Jakob regularly had to answer support calls she made for completely frivolous things. Having been teaching since before computers were a thing, she put up a fight against every new technology or program Jakob's department wanted to implement.

Over the previous summer, a web-based grading system called NotenWertung was rolled out across the district's network. It would allow teachers to grade homework and post the scores online. They could work from anywhere, with any computer. There was even a limited mobile application. Students and parents could then get a notification and see them instantly. Frau Fritzenberger was predictably not impressed.

She threw a fit on the first day of school and Jakob was dispatched to defuse it. "Why do we need computers for grading?!" she screeched at Jakob. "Paper works just fine like it has for decades! How else can I use blood red pen to shame them for everything they get wrong!"

"I understand your concern, Frau Fritzenberger," Jakob replied while making a 'calm down' gesture with his arms. "But we can't have you submitting grades on paper when the entire rest of the district is using NotenWertung." He had her sit down at the computer and gave her a For Dummies-type walkthrough. "There, it's easier than you think. You can even do this at night from the comfort of your own home," he assured her before getting up to leave.

Just as he was exiting the classroom, he heard her shout, "If you were my student, I would smack you with my ruler!" Jakob brushed it off and left to answer a call about paper clips jammed in a PC fan.

The next morning, Jakob got a rare direct call on his desk phone. It was Frau and she was in a rage. All he could make out between strings of aged German cuss words was "computer is broken!" He hung up and prepared to head to Frau's Hauptschule.

Jakob expected to find that Frau didn't have a network connection, misplaced the shortcut to her browser, didn't realize the monitor was off, or something stupid like that. What he found was Frau's computer was literally broken. The LCD screen of her monitor was an elaborate spider web, her keyboard was cracked in half, and the PC tower looked like it had been run over on the Autobahn. Bits of the motherboard dangled outside the case, and the HDD swung from its cable. "Frau Fritzenberger... what in the name of God happened here?!"

"I told you the computer was broken!" Frau shouted while meanly pointing her crooked index finger at Jakob. "You told me I have to do grades on the computer. So I packed it up to take home on my scooter. It was too heavy for me to ride with it on back so I wiped out and it smashed all over the road! This is all your fault!"

Jakob stared on in disbelief at the mangled hunks of metal and plastic. Apparently you can teach an old teacher new tricks but you can't teach her that the same web application can be accessed from any computer.


,

Planet DebianSteve Kemp: A slack hack

So recently I've been on-call, expected to react to events around the clock. Of course, to make it more of a challenge, alerts are usually raised via messages to a specific channel in slack, coming from a variety of sources. Let's pretend I'm all retro/hip and I'm using IRC instead.

Knowing what I'm like I knew there was essentially zero chance a single beep on my phone, from the slack/irc app, would wake me up. So I spent a couple of hours writing a simple bot:

  • Connect to the server.
  • Listen for messages.
  • When an alert is posted in the channel:
    • Trigger a voice-call via the twilio API.

That actually worked out really, really, really well. Twilio would initiate a call to my mobile which absolutely would, could, and did wake me up. I did discover a problem pretty quickly though; too many phone-calls!
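
The twilio step is just one authenticated HTTP POST against their REST API. A minimal sketch with curl; the account SID, token and phone numbers are placeholders, and the Url parameter points at TwiML telling twilio what to say when the call is answered:

# ask twilio to ring my mobile; credentials and numbers are placeholders
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Calls.json" \
  -u "$TWILIO_SID:$TWILIO_TOKEN" \
  --data-urlencode "From=+15005550006" \
  --data-urlencode "To=+358401234567" \
  --data-urlencode "Url=https://example.com/alert.xml"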

Imagine something is broken. Imagine a notice goes to your channel, and then people start replying to it:

  Some Bot: Help! Stuff is broken!  I'm on Fire!!  :fire: :hot: :boom:
  Colleague Bob: Is this real?
  Colleague Ann: Can you poke Chris?
  Colleague Chris: Oh dears, woe is me.

The first night I was on call I got a phone call. Then another. Then another. I even replied to the thread/chat to say "Yeah I'm on it". So the next step was to refine my alerting:

  • If there is a message in the channel
    • Which is not from Bob
    • Which is not from Steve
    • Which is not from Ann
    • Which is not from Chris
    • Which doesn't contain the text "common false-positive"
    • Which doesn't contain the text "backup completed"
  • Then make a phone-call.

Of course the next problem was predictable enough, so the rules got refined:

  • If the time is between 7PM and 7AM raise the alert.
  • Unless it is the weekend in which case we alert regardless of the time of day.

So I had a growing set of rules, all encoded in my golang notification application. I moved some of them to JSON (specifically a list of users/messages to ignore) but things like the time of day were harder to move.

I figured I shouldn't be hardwiring these things. So last night I put together a simple filter-library, an evaluation engine, in golang to handle them. Now I can load a script and filter things out much more dynamically. For example assume I have the following struct:

type Message struct {
    Author  string
    Channel string
    Message string
    ..
}

Given an instance of that struct named message, I can run a user-written script against that object:

 // Create a new eval-filter
 eval, err := evalfilter.New( "script goes here ..." )

 // Run it against the "message" object
 out, err := eval.Run( message )

The logic of reacting now goes inside that script, which is hopefully easy to read - but more importantly can be edited without recompiling the application:

//
// This is a filter script:
//
//   return false means "do nothing".
//   return true means initiate a phone-call.
//

//
// Ignore messages on channels that we don't care about
//
if ( Channel !~ "_alerts" ) { return false; }

//
// Ignore messages from humans who might otherwise write in our channels
// of interest.
//
if ( Sender == "USER1" ) { return false; }   // Steve
if ( Sender == "USER2" ) { return true; }    // Ann
if ( Sender == "USER3" ) { return false; }   // Bob


//
// Is it a weekend? Always alert.
//
if ( IsWeekend() ) { return true ; }

//
// OK so it is not a weekend.
//
// We only alert if 7pm-7am
//
// The WorkingHours() function returns `true` during working hours.
//
if ( WorkingHours() ) { return false ; }

//
// OK by this point we should raise a call:
//
// * The message was NOT from a colleague we've filtered out.
// * The message is upon a channel with an `_alerts` suffix.
// * It is not currently during working hours.
//   * And we already handled weekends by raising calls above.
//
return true ;

If the script returns true I initiate a phone-call. If the script returns false we ignore the message/event.

The alerting script itself is trivial, and probably non-portable, but the filtering engine is pretty neat. I can see a few more uses for it, even without it having nested blocks and a real grammar. So take a look, if you like:

Planet DebianNeil McGovern: GNOME relationship with GNU and the FSF

On Saturday, I wrote an email to the FSF asking them to cancel my membership. Other people who I greatly respect are doing the same. This came after the president of the FSF made some pretty reprehensible remarks saying that the “most plausible scenario is that [one of Epstein’s underage victims] presented themselves as entirely willing” while being trafficked. This isn’t the only incident, but it is the straw that broke the camel’s back.

In my capacity as the Executive Director of the GNOME Foundation, I have also written to the FSF. One of the most important parts of my role is to think of the well being of our community and the GNOME mission. One of the GNOME Foundation’s strategic goals is to be an exemplary community in terms of diversity and inclusion. I feel we can’t continue to have a formal association with the FSF or the GNU project when its main voice in the world is saying things that hurt this aim.

I greatly admire the work of FSF staffers and volunteers, but have now reached the point of concluding that the greatest service to the mission of software freedom is for Richard to step down from FSF and GNU and let others continue in his stead. Should this not happen in a timely manner, then I believe that severing the historical ties between GNOME, GNU and the FSF is the only path forward.

Edit: I’ve also cross-posted this to the GNOME discourse instance.

Planet DebianSven Hoexter: ansible scp_if_ssh: smart debugging

I guess that is just one of those things you have to know, so maybe it helps someone else.

We saw some warnings in our playbook rollouts like

[WARNING]: sftp transfer mechanism failed on [192.168.23.42]. Use
ANSIBLE_DEBUG=1 to see detailed information

They were actually reported for both sftp and scp usage. If you look at the debug output it's not very helpful for the average user, and it's similar if you go to verbose mode with -vvv. The latter at least helped us see the parameters passed to sftp and scp, but you still see no error message. But if you set

scp_if_ssh: True

or

scp_if_ssh: False

you will suddenly see the real error message

fatal: [docker-023]: FAILED! => {"msg": "failed to transfer file to /home/sven/testme.txt /home/sven/
.ansible/tmp/ansible-tmp-1568643306.1439135-27483534812631/source:\n\nunknown option -- A\r\nusage: scp [-346BCpqrv]
[-c cipher] [-F ssh_config] [-i identity_file]\n           [-l limit] [-o ssh_option] [-P port] [-S program] source
... target\n"}

Lesson learned: as long as ansible is running in "smart" mode it will hide all error messages from the user. Now we could figure out that the culprit is the -A for AgentForwarding, which is for obvious reasons not available in sftp and scp. One can move it to ansible_ssh_extra_args in group_vars. The best documentation regarding this, besides the --help output, seems to be the commit message of 3ad9b4cba62707777c3a144677e12ccd913c79a8.
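
For reference, a sketch of both settings; the file paths are illustrative, but scp_if_ssh and ansible_ssh_extra_args are standard Ansible configuration knobs:

# ansible.cfg: pin one transfer mechanism instead of "smart",
# which also surfaces the real scp/sftp error message.
[ssh_connection]
scp_if_ssh = True

# group_vars/all.yml: keep agent forwarding for the ssh invocations
# only, so -A is never passed to sftp/scp, which do not understand it.
ansible_ssh_extra_args: "-A"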

CryptogramAnother Side Channel in Intel Chips

Not that serious, but interesting:

In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU's last-level cache, rather than following the standard (and significantly longer) path through the server's main memory. By avoiding system memory, Intel's DDIO -- short for Data-Direct I/O -- increased input/output bandwidth and reduced latency and power consumption.

Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into the protected SSH (or secure shell session) established between another server and an application server.

Worse Than FailureCodeSOD: Should I Do this? Depends.

One of the key differences between a true WTF and an ugly hack is a degree of self-awareness. It's not a WTF if you know it's a WTF. If you've been doing this job for a non-zero amount of time, you have had a moment where you have a solution, and you know it's wrong, you know you shouldn't do this, but by the gods, it works and you've got more important stuff to worry about right now, so you just do it.

An anonymous submitter committed a sin, and has reached out to us for absolution.

This is a case of "DevOps" hackery. They have one remote server with no Internet access. Deploying software to a server you can't reach physically or through the Internet is a challenge. They have a solution involving hopping through some other servers and bridging the network that lets them get the .deb package files within reach of the destination server.

But that introduces a new problem: these packages have complex dependency chains and unless they're installed in the right order, it won't work. The correct solution would be to install a local package repository on the destination server, and let apt worry about resolving those dependencies.

And in the long run, that's what our anonymous submitter promises to do. But they found themselves in a situation where they had more important things to worry about, and just needed to do it.

#!/bin/bash
count=0
for f in ./*.deb
do
    echo "Attempt $count"
    for file in ./*.deb
    do
        echo "Installing $file"
        sudo dpkg -i $file
    done
    (( count++ ))
done

This is a solution to dependency management which operates in O(N^2): for every package in the folder, we attempt to install every package in the folder, so each one gets as many tries as there are packages. It's the brutest of force solutions, and no matter what our dependency chain looks like, by sheer process of elimination, this will eventually get every package installed. Eventually.
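
For contrast, the promised proper fix is less work than it sounds. A hedged sketch (the directory path is illustrative; assumes apt 1.1 or newer, as shipped since Debian stretch, which can resolve dependencies among local .deb files in one pass):

#!/bin/bash
# Let apt compute the install order instead of brute-forcing it.
cd /path/to/packages
sudo apt install ./*.deb

# On older systems: install everything once, ignoring ordering errors,
# then let apt repair the dependency graph in a single follow-up step.
#   sudo dpkg -i ./*.deb || true
#   sudo apt-get -f install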


Planet DebianSam Hartman: Free as in Sausage Making: Inside the Debian Project

Recently, we’ve been having some discussion around the use of non-free software and services in doing our Debian work. In judging consensus surrounding a discussion of Git packaging, I said that we do not have a consensus to forbid the use of non-free services like Github. I stand behind that consensus call. Ian Jackson, who initially thought that I misread the consensus, later agreed with my call.


I have been debating whether it would be wise for me as project leader to say more on the issue. Ultimately I have decided to share my thoughts. Yes, some of this is my personal opinion. Yet I think my thoughts resonate with things said on the mailing list; by sharing my thoughts I may help facilitate the discussion.


We are bound together by the Social Contract. Anyone is welcome to contribute to Debian so long as they follow the Social Contract, the DFSG, and the rest of our community standards. The Social Contract talks about what we will build (a free operating system called Debian). Besides SC #3 (we will not hide problems), the contract says very little about how we will build Debian.


What matters is what you do, not what you believe. You don’t even need to believe in free software to be part of Debian, so long as you’re busy writing or contributing to free software. Whether it’s because you believe in user freedom or because your large company has chosen Debian for entirely pragmatic reasons, your free software contributions are welcome.


I think that is one of our core strengths. We’re an incredibly diverse community. When we try to tie something else to what it means to be Debian beyond the quality of that free operating system we produce, judged by how it meets the needs of our users, we risk diminishing Debian. Our diversity serves the free software community well. We have always balanced pragmatic concerns against freedom. We didn’t ignore binary blobs and non-free firmware in the kernel, but we took the time to make sure we balanced our users’ needs for functional systems against their needs for freedom. By being so diverse, we have helped build a product that is useful both to people who care about freedom and other issues. Debian has been pragmatic enough that our product is wildly popular. We care enough about freedom and do the hard work of finding workable solutions that many issues of software freedom have become mainstream concerns with viable solutions.


Debian has always taken a pragmatic approach to its own infrastructure and to how Debian is developed. The Social Contract requires that the resulting operating system be 100% free software. But that has never been true of the Debian Project nor of our developers.



  • At the time the Social Contract was adopted, uploading a package to Debian involved signing it with the non-free PGP version 2.6.3. It was years later that GnuPG became commonly used.

  • Debian developers of the day didn’t use non-free tools to sign the Social Contract. They didn’t digitally sign it at all. Yet their discussions used the non-free Qmail because people running the Debian infrastructure decided that was the best solution for the project’s mailing lists.


“That was then,” you say.



  • Today, some parts of security.debian.org redirect to security-cdn.debian.org, a non-free web service.

  • Our recommended mirror (deb.debian.org) is backed by multiple non-free CDN web services.

  • Some day we may be using more non-free services. If trends in email handling continue, we may find that we need to use some non-free service to get the email we send accepted by major email providers. I know of no such plan in Debian today, but I know other organizations have faced similar choices.


Yet these choices to use non-free software and non-free services in the production of Debian have real costs. Many members of our community prefer to use free software. When we make these choices, we can make it harder for people to contribute to Debian. When we decline to use free software we may also be missing out on an opportunity to improve the free software community or to improve Debian itself. Ian eloquently describes the frustrations that those who wish to use only free software face when confronted with choices to use non-free services.


As alternatives to non-free software or services have become available, we as a project have consistently moved toward free options.


Normally, we let those doing the work within Debian choose whether non-free services or software are sufficiently better than the free alternatives that we will use them in our work. There is a strong desire to prefer free software and self-hosted infrastructure when that can meet our needs.


For individual maintainers, this generally means that you can choose the tools you want to do your Debian work. The resulting contributions to Debian must themselves be free. But if you want to go write all your Debian packaging in Visual Studio on Windows, we’re not going to stop you, although many of us will think your choices are unusual.


And my take is that if you want to store Debian packages on Github, you can do that too. But if you do that, you will be making it harder for many Debian contributors to contribute to your packages. As Ian discussed, even if you listen to the BTS, you will create two classes of contributors: those who are comfortable with your tools and those who are not. Perhaps you’ve considered this already. Perhaps you value making things easier for yourself or for interacting with an upstream community on Github over making it easier for contributors who want to use only free tools. Traditionally in Debian, we’ve decided that the people doing the work generally get to make that decision. Some day perhaps we’ll decide that all Debian packaging needs to be done in a VCS hosted on Debian infrastructure. And if we make that decision, we will almost certainly choose a free service to host. We’re not ready to make that change today.


So, what can you do if you want to use only free tools?



  • You could take Ian’s original approach and attempt to mandate project policy. Yet each time we mandate such policy, we will drive people and their contributions away. When the community as a whole evaluates such efforts we’ll need to ask ourselves whether the restriction is worth what we will lose. Sometimes it is. But unsurprisingly in my mind, Debian often finds a balance on these issues.


  • You could work to understand why people use Github or other non-free tools. As you take the time to understand and value the needs of those who use non-free services, you could ask them to understand and value your needs. If you identify gaps in what free software and services offer, work to fix those gaps.


  • Specifically in this instance, I think that setting up easy ways to bidirectionally mirror things between Github and services like Salsa could really help.



Conclusions



  1. We have come together to make a free operating system. Everything else is up for debate. When we shut down that debate—when we decide there is one right answer—we risk diluting our focus and diminishing ourselves.

  2. We and the entire free software community win through the Debian Project’s diversity.

  3. Freedom within the Debian Project has never been simple. Throughout our entire history we’ve used non-free bits in the sausage making, even though the result consists (and can be built from) entirely free bits.

  4. This complexity and diversity is part of what allows us to advocate for software freedom more successfully. Over time, we have replaced non-free software that we use with free alternatives, but those decisions are nuanced and ever-changing.


Planet DebianShirish Agarwal: Freedom, Chandrayaan 2 and Corporations in Space.

This will be a longish blogpost, so please excuse the length if you do not want to read a long article.

While today is my birthday, I don't feel at all like celebrating. When 8 million Kashmiris are locked down in Kashmir and 1.9 million people may be sent to detention camps (a number which may yet increase), how can one feel happy? Sadly, many people disregard the fact that illegal immigration happens everywhere. Whether it is the UK or the US, Indians too have immigrated illegally. The comments in US or UK papers are just as toxic as what you would find on Twitter in India. Most people are uninformed of the various reasons that lead someone to take a dangerous path to make a new country their home. Alliances are also divided, because the children grow up in another culture and will then be seen as 'corrupted', especially if women are sent back to India. The situations in India and Pakistan have never been as similar as they are today; see this from Najam Sethi, an internationally known left-leaning journalist:
https://www.youtube.com/watch?v=OCcrobZMy7A
and similarly you can see how investigative journalism is having a slow death in both India and the U.S.
https://www.youtube.com/watch?v=65P44plUCng

You can also see how similar the two societies are becoming in these conversations:
https://www.youtube.com/watch?v=ieWZi4gm_yE
https://www.youtube.com/watch?v=g_1oJui2Zq8

There are good moments too as can be seen here –

People going to Ganesh Visarjan and Muharram, Hyderabad.

We always say we are better than Pakistan, but we seem to be going down the same road, and that can't be good. Forget politics; even human rights issues are not being tackled sensitively by our Supreme Court. Just today there was an incident involving one of the victims of the Muzaffarpur Shelter Home allegedly being raped in a moving car. The case has been pending in the Supreme Court for quite some time, but no action has been taken so far. Journalists in Uttar Pradesh and Haryana are being booked for showing the truth. I have been trying to engage with people from across the political divide, i.e. the ones who support the BJP. The majority of them don't have jobs, and this is the only way they can get their frustration out. Also, the dissonance in political messaging is such that they feel their jobs are being taken by outsiders. Ironically, the Ministers get away with saying things like 'North Indians lack qualifications'. It shows a lack of empathy on the Ministers' part. If they are citizens of the state, then it is the state's responsibility to make sure they are skilled. If they are not skilled, then it is the Central Government's and State Governments' responsibility. Most of the states in the North are governed by the BJP. I could share more, but then it would all be about the BJP and nothing about the Chandrayaan 2 mission.

Chandrayaan 2 and Corporate Interests

Before we get to Chandrayaan 2, there are a few interesting series I want to talk about. The first one is AltBalaji's Mission Over Mars, which is in some ways similar to Mars, the six-part docu-drama made by National Geographic, and to lots of movies and books read over the years. In both of these, and in other books and movies, it is shown how corporate interests win over science and exploration, whatever the original motives of such initiatives were and are. The rich become richer and richer while the poor become poorer.

There has also been a lot of media speculation that ISRO should be privatized, similar to how NASA is, with people saying that NASA's importance has not lessened; they couldn't be more wrong. Take the Space Launch System. It was first thought of in 2010, after the NASA Authorization Act of 2010 came into being. When it was announced, it was said that it would be ready somewhere in 2016. Now it seems it won't be ready until 2025. And who is responsible for this? The same company which has been responsible for a lot of bad news in the international aviation business: Boeing. The auditor's report for NASA, while blaming NASA for poor oversight, also blames Boeing for not doing things right. And from what we have come to know, the American system of self-regulation leaves much to be desired. More so when an ex-employee of Boeing is exercising his Fifth Amendment rights, which raises the suspicion that there is more than simply an oversight issue. Boeing is also a weapons manufacturer, but that's another story altogether. For people interested in the arms trade, a Wired article published two years back gives enough info on the good and bad of American arms sales.

I know the whole thing paints a rather complex picture, but that is the nature of things. The only thing I would say is that we should be very careful about privatizing ISRO, as the same issues are bound to happen sooner or later the more private and non-transparent things become. We, the common citizens, would never come to know if any sort of national or industrial espionage were happening, and of course all profits would go to the corporates while the losses would be public, as can be seen nowadays. It doesn't leave a good taste in the mouth.

Vikram Lander

Now, while the jury is still out as to what happened or might have happened, and we hope that Vikram does connect within the 14 days, there are lots of theories as to what could have gone wrong. While I'm no expert, I do find it hard to accept the statement that ISRO saw an image of the Chandrayaan 2 lander; at least, not a single image has been released to the public. What ISRO has shared in its updates is that it located the lander, which doesn't tell us much. While the Chandrayaan 2 orbiter started in a 100 km lunar orbit, it probably makes some deviations to ensure that it doesn't get pulled in by the Moon's gravity and crash-land on the Moon itself. The lens used would probably be one for taking panoramic shots, not telescopic in nature. As to what happened, we just don't know as of yet. There are probably a dozen or two possibilities. One of the simplest explanations, to my mind, is that some space rock could have crashed into the lander while it was landing. The far side of the Moon takes more impacts than the side we face, so it's entirely possible that the lander got hit by a space rock. From what little I have been able to learn, the lander doesn't seem to have any A.I. to manoeuvre if such a scenario happens. Any functioning A.I. would probably need more energy, and for space missions energy, weight, electrical interference and contamination are all issues that space agencies have to deal with. The other possibilities are, of course, sensor failure, a wrong calculation, or a rough spot where it landed and broke the antenna. Until ISRO shares more details with us, we have only conjecture to help us.

Chandrayaan 2 Imaging

While we await news about the lander, I would be curious to know about the images that Chandrayaan 2 is getting. Sadly, none of the images have made it to the public domain as of yet, whether in FITS or RAW format, and in whatever spectrum (Chandrayaan 2 is going to image in a wide range of spectra). As Spirit and Opportunity did for NASA, I hope ISRO shows renderings of the Moon as captured by the orbiter, even though it's lifeless, so that people, especially children, get enthused about getting into the space sciences.

Planet DebianMolly de Blanc: Free software activities (August 2019)

A photo of a beach in Greece, with blue and turquoise water and well-trodden sand. In the foreground is an inflatable unicorn with a rainbow mane.

August was really marked by traveling too much. I took the end of the month off from non-work activities in order to focus on GUADEC and GUADEC-follow up.

Personal

  • The Debian Community Team (CT) had a meeting where we discussed some of our activities, including potential new team members!
  • CT team members went on VAC, so we took a bit of a break in the second half of the month.
  • The OSI standing committee and board had meetings.
  • I handled some paperwork in my capacity as president.
  • I had regular meetings with the OSI general manager.
  • I gave a keynote at FrOSCon on “Open source citizenship for everyone!” TL;DR: We have rights and responsibilities as people participating in free software and the open source ecosystem — “we” here includes corporate actors.
  • I bought a really sweet pair of GNOME socks. Do recommend.

Professional

  • The LAS sponsorship team met and handled the creation of some important paperwork, and discussed fundraising strategy for the event.
  • I attended the GNOME Advisory Board meeting, where I got to meet and speak with the Foundation Board and the Advisory Board about activities over the past year, plans for the future, and the needs of the communities of AdBoard members. It was really educational and a lot of fun.
  • I attended my first GUADEC! It was amazing. I wrote a trip report over on the GNOME Engagement Blog.
  • At GUADEC, I spent some time helping out with basic operations, including keeping time in sessions.
  • We, the staff and board, did a Q&A at the Annual General Meeting.
  • I drank a lot of coffee. Like, a lot.

Planet DebianDirk Eddelbuettel: pinp 0.0.9: Real Fix and Polish

Another pinp package release! pinp allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette

This release comes exactly one week (i.e. the minimal time to not earn a NOTE) after the hot-fix release 0.0.8 which addressed breakage on CRAN tickled by changes in TeX Live. After updating the PNAS style LaTeX macros, and avoiding the issue with an (older) custom copy of titlesec, we now have the real fix, thanks to the eagle-eyed attention of Javier Bezos. The error, as so often, was simple and ours: we had left a stray \makeatother in pinp.cls where it may have been in hiding for a while. A very big Thank You! to Javier for spotting it, to Norbert for all his help and to James for double-checking on PNAS.

The good news in all of this is that the package is now in better shape than ever. The newer PNAS style works really well, and I went over a few of our extensions (such as papersize support for a4 as well as letter, direct on/off of a Draft watermark, a custom subtitle and more) and they all work consistently. So happy vignette or paper writing!

The NEWS entry for this release follows.

Changes in pinp version 0.0.9 (2019-09-15)

  • The processing error first addressed in release 0.0.8 is now fixed by removing one stray command; many thanks to Javier Bezos.

  • The hotfix of also installing titlesec.sty has been reverted.

  • Processing of the 'papersize' and 'watermark' options was updated.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDidier Raboud: miniDebConf19 Vaumarcus – Oct 25-27 2019 – Call for Presentations

MiniDebConf Vaumarcus 2019 - Oct 25.-27.

Talks wanted

We’re opening the Call for Presentations for the miniDebConf19 Vaumarcus now, until October 20, so please contribute to the MiniDebConf by proposing a talk, workshop, birds of a feather (BoF) session, etc., directly on the Debian wiki: /Vaumarcus/TalkSubmissions. We are aiming for talks which are somehow related to Debian or Free Software in general; see the wiki for subject suggestions. We expect submissions and talks to be held in English, as this is the working language in Debian and at this event. Registration is also still open, through the Debian wiki: Vaumarcus/Registration.

Debian Sprints are welcome

The place is ideal for a two-day sprint, so we encourage teams to assemble and gather in Vaumarcus!

More sponsors and more hands wanted

We’re looking for more sponsors willing to help make this event possible, and to make it easier for anyone interested to attend.
Things are on a good track, but we need more help. Specifically, attendee support would benefit from more hands.

Get in touch

We gather on the #debian.ch channel on irc.debian.org and on the debian-switzerland@lists.debian.org list. For more private matters, talk to minidebconf19@debian.ch!

Thank you already!

Sponsors: ServerBase - We keep IT online
Supporters: C+R Informatique Libre

See ya!

We’re looking forward to seeing a lot of you in Vaumarcus! (This was also sent to debian-devel-announce@l.d.o, amongst other lists)


CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet DebianMatthew Garrett: It's time to talk about post-RMS Free Software

Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit



Planet DebianDirk Eddelbuettel: ttdo 0.0.3: New package

A new package of mine arrived on CRAN yesterday, having been uploaded a few days prior on the weekend. It extends the most excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam. Mark also tweeted about it.

ttdo screenshot

The package was written to address a fairly specific need. In teaching STAT 430 at Illinois, I am relying on the powerful PrairieLearn system (developed there) to provide tests, quizzes or homework. Alton and I have put together an autograder for R (which is work in progress, more on that maybe another day), and that uses this package to provide colorized differences between supplied and expected answers in case of an incorrect answer.

Now, the aspect of providing colorized diffs when tests do not evaluate to TRUE is both simple and general enough. As our approach works rather well, I decided to offer the package on CRAN as well. The small screenshot gives a simple idea; the README.md contains a larger screenshot.

The initial NEWS entries follow below.

Changes in ttdo version 0.0.3 (2019-09-08)

  • Added a simple demo to support initial CRAN upload.

Changes in ttdo version 0.0.2 (2019-08-31)

  • Updated defaults for format and mode to use the same options used by diffobj along with fallbacks.

Changes in ttdo version 0.0.1 (2019-08-26)

  • Initial version, with thanks to both Mark and Brodie.

Please use the GitHub repo and its issues for any questions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramFriday Squid Blogging: How Scientists Captured the Giant Squid Video

In June, I blogged about a video of a live juvenile giant squid. Here's how that video was captured.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBorder Stories: A night of talks on immigration, justice and freedom

Hosts Anne Milgram and Juan Enriquez kick off the evening at TEDSalon: Border Stories at the TED World Theater in New York City on September 10, 2019. (Photo: Ryan Lash / TED)

Immigration can be a deeply polarizing topic. But at heart, immigration policies and practices reflect no less than our attitude towards humanity. At TEDSalon: Border Stories, we explored the reality of life at the US-Mexico border, the history of US immigration policy and possible solutions for reform — and investigated what’s truly at stake.

The event: TEDSalon: Border Stories, hosted by criminal justice reformer Anne Milgram and author and academic Juan Enriquez

When and where: Tuesday, September 10, 2019, at the TED World Theater in New York City

Speakers: Paul A. Kramer, Luis H. Zayas, Erika Pinheiro, David J. Bier and Will Hurd

Music: From Morley and Martha Redbone

A special performance: Poet and thinker Maria Popova, reading an excerpt from her book Figuring. A stunning meditation on “the illusion of separateness, of otherness” — and on “the infinitely many kinds of beautiful lives” that inhabit this universe — accompanied by cellist Dave Eggar and guitarist Chris Bruce.

“There are infinitely many kinds of beautiful lives,” says Maria Popova, reading a selection of her work at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

The talks in brief:

Paul A. Kramer, historian, writer, professor of history

  • Big idea: It’s time we made the immigration conversation reflect how the world really works.
  • How? We must rid ourselves of the outdated questions, born from nativist and nationalist sentiments, that have permeated the immigration debate for centuries: interrogations of usefulness and assimilation, of parasitic rhetoric aimed at dismantling any positive discussions around immigration. What gives these damaging queries traction and power, Kramer says, is how they tap into a seemingly harmless sense of national belonging — and ultimately activate, heighten and inflame it. Kramer maps out a way for us to redraw those mental, societal and political borders and give immigrants access to the rights and resources that their work, activism and home countries have already played a fundamental role in creating.
  • Quote of the talk: “[We need] to redraw the boundaries of who counts — whose life, whose rights and whose thriving matters. We need to redraw … the borders of us.”

Luis H. Zayas, social worker, psychologist, researcher

  • Big idea: Asylum seekers — especially children — face traumatizing conditions at the US-Mexico border. We need compassionate, humane practices that give them the care they need during arduous times.
  • Why? Under prolonged and intense stress, the young developing brain is harmed — plain and simple, says Luis H. Zayas. He details the distressing conditions immigrant families face on their way to the US, which have only escalated since children started being separated from their parents and held in detention centers. He urges the US to reframe its practices, replacing hostility and fear with safety and compassion. For instance: the US could open processing centers, where immigrants can find the support they need to start a new life. These facilities would be community-oriented, offering medical care, social support and the fundamental human right to respectful and dignified treatment.
  • Quote of the talk: “I hope we can agree on one thing: that none of us wants to look back at this moment in our history when we knew we were inflicting lifelong trauma on children, and that we sat back and did nothing. That would be the greatest tragedy of all.”

Immigration lawyer Erika Pinheiro discusses the hidden realities of the US immigration system. “Seeing these horrors day in and day out has changed me,” she says. (Photo: Ryan Lash / TED)

Erika Pinheiro, nonprofit litigation and policy director

  • Big idea: The current US administration’s mass separations of asylum-seeking families at the Mexican border shocked the conscience of the world — and the cruel realities of the immigration system have only gotten worse. We need a legal and social reckoning.
  • How? US immigration laws are broken, says Erika Pinheiro. Since 2017, US attorneys general have made sweeping changes to asylum law to ensure fewer people qualify for protection in the US. This includes all types of people fleeing persecution: Venezuelan activists, Russian dissidents, Chinese Muslims, climate change refugees — the list goes on. The US has simultaneously created a parallel legal system where migrants are detained indefinitely, often without access to legal help. Pinheiro issues a call to action: if you are against the cruel and inhumane treatment of migrants, then you need to get involved. You need to demand that your lawmakers expand the definition of refugees and amend laws to ensure immigrants have access to counsel and independent courts. Failing to act now threatens the inherent dignity of all humans.
  • Quote of the talk: “History shows us that the first population to be vilified and stripped of their rights is rarely the last.”

David J. Bier, immigration policy analyst

  • Big idea: We can solve the border crisis in a humane fashion. In fact, we’ve done so before.
  • How? Most migrants who travel illegally from Central America to the US do so because they have no way to enter the US legally. When these immigrants are caught, they find themselves in the grips of a cruel system of incarceration and dehumanization — but is inhumane treatment really necessary to protect our borders? Bier points us to the example of Mexican guest worker programs, which allow immigrants to cross borders and work the jobs they need to support their families. As legal opportunities to cross the border have increased, the number of illegal Mexican immigrants seized at the border has plummeted 98 percent. If we were to extend guest worker programs to Central Americans as well, Bier says, we could see a similar drop in the numbers of illegal immigrants.
  • Quote of the talk: “This belief that the only way to maintain order is with inhumane means is inaccurate — and, in fact, the opposite is true. Only a humane system will create order at the border.”

“Building a 30-foot-high concrete structure from sea to shining sea is the most expensive and least effective way to do border security,” says Congressman Will Hurd in a video interview with Anne Milgram at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

Will Hurd, US Representative for Texas’s 23rd congressional district

  • Big idea: Walls won’t solve our problems.
  • Why? Representing a massive district that encompasses 29 counties and two time zones and shares an 820-mile border with Mexico, Republican Congressman Will Hurd has a frontline perspective on illegal immigration in Texas. Legal immigration options and modernizing the Border Patrol (which still measures its response times to border incidents in hours and days) will be what ultimately stems the tide of illegal border crossings, Hurd says. Instead of investing in walls and separating families, the US should invest in its own defense forces — and, on the other side of the border, work to alleviate poverty and violence in Central American countries.
  • Quote of the talk: “When you’re debating your strategy, if somebody comes up with the idea of snatching a child out of their mother’s arms, you need to go back to the drawing board. This is not what the United States of America stands for. This is not a Republican or a Democrat or an Independent thing. This is a human decency thing.”

Juan Enriquez, author and academic

  • Big idea: If the US continues to divide groups of people into “us” and “them,” we open the door to inhumanity and atrocity — and not just at our borders.
  • How? Countries that survive and grow as the years go by are compassionate, kind, smart and brave; countries that don’t, govern by cruelty and fear, says Juan Enriquez. In a personal talk, he calls on us to realize that deportation, imprisonment and dehumanization aren’t isolated phenomena directed at people crossing the border illegally, but are instead happening to the people who live and work by our sides in our communities. Now is the time to stand up and do something to stop our country’s slide into fear and division — whether it’s engaging in small acts of humanity, loud protests in the streets or activism directed at enacting legislative or policy changes.
  • Quote of the talk: “This is how you wipe out an economy. This isn’t about kids and borders, it’s about us. This is about who we are, who we the people are, as a nation and as individuals. This is not an abstract debate.”

TEDNot All Is Broken: Notes from Session 6 of TEDSummit 2019

Raconteur Mackenzie Dalrymple regales the TEDSummit audience with a classic Scottish story. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In the final session of TEDSummit 2019, the themes from the week — our search for belonging and community, our digital future, our inextricable connection to the environment — ring out with clarity and insight. From the mysterious ways our emotions impact our biological hearts, to a tour-de-force talk on the languages we all speak, it’s a fitting close to a week of revelation, laughter, tears and wonder.

The event: TEDSummit 2019, Session 6: Not All Is Broken, hosted by Chris Anderson and Bruno Giussani

When and where: Thursday, July 25, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Johann Hari, Sandeep Jauhar, Anna Piperal, Eli Pariser, Poet Ali

Interlude: Mackenzie Dalrymple sharing the tale of an uncle and nephew competing to become Lord of the Isles

Music: Djazia Satour, blending 1950s Chaabi (a genre of North African folk music) with modern grooves

The talks in brief:

Johann Hari, journalist

Big idea: The cultural narrative and definitions of depression and anxiety need to change.

Why? We need to talk less about chemical imbalances and more about imbalances in the way we live. Johann Hari met with experts around the world, boiling down his research into a surprisingly simple thesis: all humans have physical needs (food, shelter, water) as well as psychological needs (feeling that you belong, that your life has meaning and purpose). Though antidepressant drugs work for some, biology isn’t the whole picture, and any treatment must be paired with a social approach. Our best bet is to listen to the signals of our bodies, instead of dismissing them as signs of weakness or madness. If we take time to investigate our red flags of depression and anxiety — and take the time to reevaluate how we build meaning and purpose, especially through social connections — we can start to heal in a society deemed the loneliest in human history.

Quote of the talk: “If you’re depressed, if you’re anxious — you’re not weak. You’re not crazy. You’re not a machine with broken parts. You’re a human being with unmet needs.”


“Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways,” says cardiologist Sandeep Jauhar. He speaks at TEDSummit: A Community Beyond Borders, July 21-25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sandeep Jauhar, cardiologist

Big Idea: Emotional stress can be a matter of life and death. Let’s factor that into how we care for our hearts.

How? “The heart may not originate our feelings, but it is highly responsive to them,” says Sandeep Jauhar. In his practice as a cardiologist, he has seen extensive evidence of this: grief and fear can cause profound cardiac injury. “Takotsubo cardiomyopathy,” or broken heart syndrome, has been found to occur when the heart weakens after the death of a loved one or the stress of a large-scale natural disaster. It comes with none of the other usual symptoms of heart disease, and it can resolve in just a few weeks. But it can also prove fatal. In response, Jauhar says that we need a new paradigm of care, one that considers the heart as more than “a machine that can be manipulated and controlled” — and recognizes that emotional stress is as important as cholesterol.

Quote of the talk: “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways.”


“In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated,” says e-governance expert Anna Piperal. She speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Anna Piperal, e-governance expert 

Big idea: Bureaucracy can be eradicated by going digital — but we’ll need to build in commitment and trust.

How? Estonia is one of the most digital societies on earth. After gaining independence 30 years ago, and subsequently building itself up from scratch, the country decided not only to digitize existing bureaucracy but also to create an entirely new system. Now citizens can conduct everything online, from running a business to voting and managing their healthcare records, and only need to show up in person for literally three things: to claim their identity card, marry or divorce, or sell a property. Anna Piperal explains how, using a form of blockchain technology, e-Estonia builds trust through the “once-only” principle, through which the state cannot ask for information more than once nor store it in more than one place. The country is working to redefine bureaucracy by making it more efficient, granting citizens full ownership of their data — and serving as a model for the rest of the world to do the same.

Quote of the talk: “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.”


Eli Pariser, CEO of Upworthy

Big idea: We can find ways to make our online spaces civil and safe, much like our best cities.

How? Social media is a chaotic and sometimes dangerous place. With its trolls, criminals and segregated spaces, it’s a lot like New York City in the 1970s. But like New York City, it’s also a vibrant space in which people can innovate and find new ideas. So Eli Pariser asks: What if we design social media like we design cities, taking cues from social scientists and urban planners like Jane Jacobs? Built around empowered communities, one-on-one interactions and public censure for those who act out, platforms could encourage trust and discourse, discourage antisocial behavior and diminish the sense of chaos that leads some to embrace authoritarianism.

Quote of the talk: “If online digital spaces are going to be our new home, let’s make them a comfortable, beautiful place to live — a place we all feel not just included, but actually some ownership of. A place we get to know each other. A place you’d actually want not just to visit, but to bring your kids.”


“Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds,” says Poet Ali. He speaks at at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Poet Ali, architect of human connection

Big idea: You speak far more languages than you realize, with each language representing a gateway to understanding different societies, cultures and experiences.

How? Whether it’s the recognized tongue of your country or profession, or the social norms of your community, every “language” you speak is more than a lexicon of words: it also encompasses feelings like laughter, solidarity, even a sense of being left out. These latter languages are universal, and the more we embrace their commonality — and acknowledge our fluency in them — the more we can empathize with our fellow humans, regardless of our differences.

Quote of the talk: “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds.”

CryptogramWhen Biology Becomes Software

All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.

Genes and genomes are based on code -- just like the digital language of computers. But instead of zeros and ones, four DNA letters -- A, C, T, G -- encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.

If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.

Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it's a relatively simple operation by today's standards, it'll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they're running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient's immune system.

We have known the mechanics of DNA for some 60 plus years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first "recombinant" virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.

In 2010 Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases the researchers created what could arguably be called new forms of life.

This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to "run" it in an already existing organism, from a virus to a wheat plant to a person.

In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.

Within the biological science community, urgent conversations are occurring about "cyberbiosecurity," an admittedly contested term which exists between biological and information systems where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.

These risks have occupied not only learned bodies -- the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively -- but have made it to mainstream media: genome editing was a major plot element in Netflix's Season 3 of "Designated Survivor."

Our worries are more prosaic. As synthetic biology "programming" reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.

Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because both the cost of getting it wrong and the ease of trying again is so low. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happens again.

Even finished code still has problems. Again due to the complexity of modern software systems, "works properly" doesn't mean that it's perfectly correct. Modern software is full of bugs -- thousands of software flaws -- that occasionally affect performance or security. That's why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.

Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn't work in biology.

In nature, a similar type of trial and error is handled by "the survival of the fittest" and occurs slowly over many generations. But human-generated code from scratch doesn't have that kind of correction mechanism. Inadvertent or intentional release of these newly coded "programs" may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.

Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

Opportunities for mischief and malfeasance often occur when expertise is siloed, fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn't make its way into the larger body of practitioners who have important contributions to make.

Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed, in either the biological and digital spheres of influence, classified and solely within the military, or exchanged only among a very small set of investigators.

What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. We have tools among our digital and biological communities to identify and mitigate biological risks, and tools to write and deploy secure computer systems.

Those opportunities will not occur without effort or financial support. Let's find those resources, public, private, philanthropic, or any combination. And then let's use those resources to set up some novel opportunities for digital geeks and bionerds -- as well as ethicists and policymakers -- to share experiences, concerns, and come up with creative, constructive solutions to these problems that are more than just patches.

These are overarching problems; let's not let siloed thinking or funding get in the way of breaking down barriers between communities. And let's not let technology of any kind get in the way of the public good.

This essay previously appeared on CNN.com.

Planet DebianNorbert Preining: Gaming: Puzzle Agent

Two lovely but short puzzle games, Puzzle Agent and Puzzle Agent II, follow agent Nelson Tethers in his quest to solve an obscure case in Scoggins, Minnesota: the eraser factory delivering to the White House has stopped production – a dangerous situation for the US and the world. Tethers embarks on a wild journey.

The game starts in his office, where agent Tethers is used to solving puzzles at his desk, mostly inspired by chewing gum. That is, until a strange encounter and a phone call kick him out into the wild.

The game is full of puzzles, most of them rather easy, some of them tricky. One can use the spare chewing gums to get a hint in case one gets stuck. Chewing gums are rare in Scoggins, so agent Tethers needs to collect used gums from all kind of surfaces.

Solved puzzles are sent off for evaluation, which also shows the huge amount of money one single FBI agent costs. After that, the performance of agent Tethers is evaluated based on the number of hints (chewing gums) used and false submissions.

The rest consists of dialog trees to collect information, and driving around in the neighborhood of Scoggins. The game shines through its well-balanced set of puzzles and the quirky dialogs with the quirky people of Scoggins.

The game is beautifully drawn in cartoon style, far from the shiny ray-tracing world, but this particularly adds a lot of charm to the game.

A simple but very enjoyable pair of games. Unfortunately there is not much replay value. Still, they are worth getting when they are on sale.

CryptogramSmart Watches and Cheating on Tests

The Independent Commission on Examination Malpractice in the UK has recommended that all watches be banned from exam rooms, basically because it's becoming very difficult to tell regular watches from smart watches.

Worse Than FailureError'd: Many Languages, One WTF

"It's as if IntelliJ IDEA just gave up trying to parse my code," writes John F.

Henry D. writes, "If you have a phone in English but have it configured to recognize two different languages, simple requests sometimes morph into the weirdest things."

Carl C. wrote, "Maybe Best Buy's page is referring to a store near Nulltown, Indiana, but really, I think their site is on drugs."

"Yeah, Thanks Cisco, but I'm not sure I really want to learn more," writes Matt P.

"Ebay is alerting me to something. No idea what it is, but I can tell you what they named their variables," Lincoln K. wrote.

"Not quite sure what secrets the Inner Circle holds, I guess knowing Latin?" writes Matt S.

,

LongNowShort film of Comet 67P made from 400,000 Rosetta images is released

On August 6, 02014, the European Space Agency’s Rosetta probe successfully reached Comet 67P. In addition to studying the comet, Rosetta was able to place one of Long Now’s Rosetta Disks on its surface via its Philae lander.

In 02017, ESA released over 400,000 images from the Rosetta mission. Now, motion designer Christian Stangl has made a short film out of the images. The Comet offers a remarkable, beautiful, and haunting look at this alien body from the Kuiper belt. Watch it below:

the Comet from Christian Stangl on Vimeo.

Planet DebianJonas Meurer: debian lts report 2019.08

Debian LTS report for August 2019

This month I was allocated 10 hours. Unfortunately, I didn't find much time to work on LTS issues, so I only spent 0.5 hours on the task listed below. That means that I carry over 9.5 hours to September.

  • Triaged CVE-2019-13640/qbittorrent: After digging through the code, it became obvious that qbittorrent 3.1.10 in Debian Jessie is not affected by this vulnerability as the affected code is not present yet.

Planet DebianBen Hutchings: Debian LTS work, August 2019

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I prepared and, after review, released Linux 3.16.72, including various security and other fixes. I then rebased the Debian package onto that. I uploaded that with a small number of other fixes and issued DLA-1884-1. I also prepared and released Linux 3.16.73 with another small set of fixes.

I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1885-1 for that.

CryptogramFabricated Voice Used in Financial Fraud

This seems to be an identity theft first:

Criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

Another news article.

CryptogramNotPetya

Wired has a long article on NotPetya.

EDITED TO ADD (9/12): Another good article on NotPetya.

CryptogramDefault Password for GPS Trackers

Many GPS trackers are shipped with the default password 123456, and many users never change it.

We just need to eliminate default passwords. This is an easy win.

EDITED TO ADD (9/12): A California law bans default passwords starting in 2020.

CryptogramMore on Law Enforcement Backdoor Demands

The Carnegie Endowment for International Peace and Princeton University's Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the "going dark" debate. They have released their report: "Moving the Encryption Policy Conversation Forward."

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn't also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don't believe that backdoor access to encrypted data at rest offers "an achievable balance of risk vs. benefit" either, but I agree that the two aspects should be treated independently.

EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger. For example: no key escrow, and no use of software updates to break into devices.

Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate "session key" for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message's secrecy independent of other messages (consistent with a concept known as "forward secrecy"). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.
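
To make the quoted point concrete, here is a minimal sketch of deriving a fresh session key from an ephemeral key exchange, using the Python cryptography library (the function name and HKDF parameters are illustrative, not taken from any particular protocol):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def fresh_session_key(peer_public_key):
    # Generate a brand-new ephemeral key pair for this session only.
    ephemeral = X25519PrivateKey.generate()
    shared_secret = ephemeral.exchange(peer_public_key)
    # Derive a 32-byte session key from the shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"session").derive(shared_secret)
    # Discarding the ephemeral private key afterwards is what makes the
    # session forward-secret: there is nothing left to escrow or seize.
    return ephemeral.public_key(), session_key

Escrowing or logging session_key, or linking it to a long-term key, is exactly the "fundamental weakening" the report describes.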

These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker -- former general counsel of the FBI -- and Chris Inglis, former deputy director of the NSA.

Planet DebianThomas Lange: FAI.me service now support backports for Debian 10 (buster)

The FAI.me service for creating customized installation and cloud images now supports a backports kernel for the stable release Debian 10 (aka buster). If you enable the backports option, you will currently get kernel 5.2. This will help you if you have newer hardware that is not supported by the default kernel 4.19. The backports option is also still available for images using the old Debian 9 (stretch) release.

The URL of the FAI.me service is

https://fai-project.org/FAIme/

FAI.me

Worse Than FailureCodeSOD: Time to Wait

When dealing with customers -- and here, we mean “off the street” customers -- they often want to know, “how long am I going to have to wait?” Whether we’re talking about a restaurant, a mechanic, a doctor’s office, or a computer/phone repair shop, it’s important to know (and share with our customers) reasonable expectations about how much time they’re about to spend waiting.

Russell F works on an application which facilitates this sort of customer-facing management. It does much more, too, obviously, but one of its lesser features is to estimate how long a customer is about to spend waiting.

This is how that’s calculated:

TimeSpan tsDifference = dtWorkTime - DateTime.Now;
string strEstWaitHM = ((tsDifference.Hours * 60) + tsDifference.Minutes).ToString();
if (Convert.ToInt32(strEstWaitHM) >= 60)
{
	decimal decWrkH = Math.Floor(Convert.ToDecimal(strEstWaitHM) / 60);
	int intH = Convert.ToInt32(decWrkH);
	txtEstWaitHours.Value = Convert.ToString(intH);
	int intM = Convert.ToInt32(strEstWaitHM) - (60 * intH);
	txtEstWaitMinutes.Value = Convert.ToString(intM);
}
else
{
	txtEstWaitHours.Value = "";
	txtEstWaitMinutes.Value = strEstWaitHM;
}

Hungarian Notation is always a great sign of bad code. It really is, and I think that’s because it’s easy to do, easy to enforce as a standard, and provides the most benefit when you have messy variable scoping and when keeping track of what type a given variable is might actually be a challenge.

Or, as we see in this case, it’s useful when you’re passing the same data through a method with different types. We calculate the difference between the WorkTime and Now. That’s the last thing in this code which makes sense.

The key goal here is that, if we’re going to be waiting for more than an hour, we want to display both the hours and minutes, but if it’s just minutes, we want to display just that.

We have that TimeSpan object which, as you can see, has convenient Hours and Minutes properties. Instead of using those, though, we convert the hours to minutes and add them together; if the number is more than 60, we know we’ll be waiting for over an hour, so we want to populate both the hours box and the minutes box, which means we have to convert back to hours and minutes.

In that context, the fact that we have to convert from strings to numbers and back almost seems logical. Almost. I especially like that they Convert.ToDecimal (to avoid rounding errors) and Math.Floor the result (to round off). If only there were some numeric type that never rounded off, and always had an integer value. If only…
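
For contrast, here is a minimal sketch of the same logic using the TimeSpan properties directly (assuming the same dtWorkTime and text fields as the original, and ignoring day-long waits, just as the original does):

TimeSpan tsDifference = dtWorkTime - DateTime.Now;
if (tsDifference.Hours >= 1)
{
	// Over an hour: populate both boxes from the components directly
	txtEstWaitHours.Value = tsDifference.Hours.ToString();
	txtEstWaitMinutes.Value = tsDifference.Minutes.ToString();
}
else
{
	// Under an hour: leave the hours box empty
	txtEstWaitHours.Value = "";
	txtEstWaitMinutes.Value = tsDifference.Minutes.ToString();
}

No strings, no decimals, no flooring required.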

Planet DebianNorbert Preining: TeX Services at texlive.info

I have been working over the last weeks to provide four more services for the TeX (Live) community: an archive of TeX Live’s network installation directory tlnet, a git repository of CTAN, a mirror of the TeX Live historic archives, and a new tlpretest mirror. In addition to the services that have already been provided before on my server, this makes a considerable list, and I thought it is a good idea to summarize all of the services.

Overview of the services

New services added recently are marked with an asterisk (*) at the end.

For the git services, anonymous checkouts are supported. If a developer wants to have push rights, please contact me.

tlnet archive

TeX Live is distributed via the CTAN network in CTAN/systems/texlive/tlnet. The packages there are updated on a daily basis according to updates on CTAN that make it into the TeX Live repository. This has created some problems for distributions requiring specific versions, as well as problems with rollbacks in case of buggy packages.

Starting on 2019/08/30, rsync backups of the tlnet directory are taken daily, and they are available at https://www.texlive.info/tlnet-archive/YYYY/MM/DD/tlnet.

CTAN git repository

The second big item is putting CTAN into a git repository. In a perfect world I could get git commits for each single package update, but that would require a lot of collaboration with the CTAN team (maybe this will happen in the future). For now, there is one rsync of CTAN per day, committed after the sync.

Considering the total size of CTAN (currently around 40G), we decided to ignore file types that provide no useful information when put into git, mostly large binary files. The concrete list is tar, zip, pkg, cab, jar, dmg, rpm, deb, tgz, iso, and exe, as well as files containing one of these extensions (that means a file foobar.iso.gz will be ignored, too). This keeps the size of the .git directory at a reasonable level for now (a few GB).
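
Expressed as git exclude patterns, that filter might look something like this sketch (the actual patterns used on the server may differ):

# Skip large binary archives -- including files that merely
# contain one of these extensions, e.g. foobar.iso.gz
*.tar*
*.zip*
*.pkg*
*.cab*
*.jar*
*.dmg*
*.rpm*
*.deb*
*.tgz*
*.iso*
*.exe*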

We will see how the git repository grows over time, and whether we can support this over the long term.

While we exclude the above files from being recorded in the git repository, the actual CTAN directory is complete and contains all files, meaning that an rsync checkout contains everything.

Access to these services is provided as follows:

TeX Live historic archives

The TeX Live historic archives hierarchy contains various items of interest in TeX history, from individual files to entire systems. See the article by Ulrik Vieth at https://tug.org/TUGboat/tb29-1/tb91vieth.pdf for an overview.

We provide a mirror available via rsync://texlive.info/historic/.

tlpretest mirror

During preparation of a new TeX Live release (the pretest phase) we are distributing preliminary builds via a few tlpretest mirrors. The current server will provide access to tlpretest, too:

TeX Live svn/git mirror

Since I prefer to work with git, and developing new features with git on separate branches is so much more convenient than working with subversion, I am running a git-svn mirror of the whole TeX Live subversion repository. This repo is updated every 15 minutes with the latest changes. There are also git branches matching the subversion branches, and some dev/ branches where I am working on new features. The git repository carries, like the subversion one, the full history back to our switch from Perforce to Subversion in 2005. This repository is quite big, so don’t do a casual checkout (checked-out size is currently close to 40GB):

TeX Live contrib

The TeX Live Contrib repository is a companion to the core TeX Live (tlnet) distribution in much the same way as Debian’s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. The TeX Live Contrib is simply trying to fill in a gap in the current distribution system by providing ready made packages for software that is not distributed in TeX Live proper due to license reasons, support for non-free software, etc.:

TeX Live GnuPG

Starting with release 2016, TeX Live provides facilities to verify the authenticity of the TeX Live database using cryptographic signatures. For this to work, a working GnuPG program needs to be available: either gpg (version 1) or gpg2 (version 2). To ease adoption of verification, this repository provides a TeX Live package tlgpg that ships GnuPG binaries for Windows and MacOS (universal and x86_64). On other systems we expect GnuPG to be installed.

Supporting these services

We will try to keep these services up and running as long as server space, connectivity, and bandwidth allow. If you find them useful, I happily accept donations via PayPal or Patreon to support the server as well as my time and energy!

,

Sociological ImagesNormal Distributions in the Wild

Social scientists rely on the normal distribution all the time. This classic “bell curve” shape is so important because it fits all kinds of patterns in human behavior, from measures of public opinion to scores on standardized tests.

But it can be difficult to teach the normal distribution in social statistics, because at its core it is a theory about patterns we see in data. If you’re interested in studying people in their social worlds, it can be more helpful to see how the bell curve emerges from real-world examples.

One of the best ways to illustrate this is the “Galton Board,” a desk toy that lets you watch the normal distribution emerge from a random drop of ball-bearings. Check out the video below or a slow motion gif here.
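
The mechanism is simple enough to simulate in a few lines. In this sketch (Python; the ball and row counts are arbitrary), each ball makes a 50/50 left-or-right bounce at every peg, and the number of rightward bounces picks its bin -- a binomial distribution that approximates the normal:

import random
from collections import Counter

ROWS, BALLS = 12, 10000

# Each ball bounces left (0) or right (1) at each of ROWS pegs;
# its final bin is the number of rightward bounces.
bins = Counter(sum(random.randint(0, 1) for _ in range(ROWS))
               for _ in range(BALLS))

# Crude text histogram: the bell shape emerges in the middle bins.
for k in range(ROWS + 1):
    print(f"{k:2d} {'#' * (bins[k] // 50)}")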

The Galton Board is cool, but I’m also always on the lookout for normal distributions “in the wild.” There are places where you can see the distribution in real patterns of social behavior, rather than simulating them in a controlled environment. My absolute favorite example comes from Ed Burmila:

The wear patterns here show exactly what we would expect a normal distribution to tell us about weightlifting: more people use the machine at the middle weight settings, matching average strength, and the extreme settings are chosen less often. Not all social behavior follows this pattern, but when we find cases that do, our techniques for analyzing that behavior are fairly simple.

Another cool example is grocery shelves. Because stores like to keep popular products together and right in front of your face (the maxim is “eye level is buy level”), they tend to stock in a normally-distributed pattern with popular stuff right in the middle. We don’t necessarily see this in action until there is a big sale or a rush in an emergency. When stores can’t restock in time, you can see a kind of bell curve emerge on the empty shelves. Products that are high up or off to the side are a little less likely to be picked over.

Paul Swansen, Flickr CC

Have you seen normal distributions out in the wild? Send them my way and I might feature them in a future post!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityNY Payroll Company Vanishes With $35 Million

MyPayrollHR, a now defunct cloud-based payroll processing firm based in upstate New York, abruptly ceased operations this past week after stiffing employees at thousands of companies. The ongoing debacle, which allegedly involves malfeasance on the part of the payroll company’s CEO, resulted in countless people having money drained from their bank accounts and has left nearly $35 million worth of payroll and tax payments in legal limbo.

Unlike many stories here about cloud service providers being extorted by hackers for ransomware payouts, this snafu appears to have been something of an inside job. Nevertheless, it is a story worth telling, in part because much of the media coverage of this incident so far has been somewhat disjointed, but also because it should serve as a warning to other payroll providers about how quickly and massively things can go wrong when a trusted partner unexpectedly turns rogue.

Clifton Park, NY-based MyPayrollHR — a subsidiary of ValueWise Corp. — disclosed last week in a rather unceremonious message to some 4,000 clients that it would be shutting its virtual doors and that companies which relied upon it to process payroll payments should kindly look elsewhere for such services going forward.

This communique came after employees at companies that depend on MyPayrollHR to receive direct deposits of their bi-weekly payroll payments discovered their bank accounts were instead debited for the amounts they would normally expect to accrue in a given pay period.

To make matters worse, many of those employees found their accounts had been dinged for two payroll periods — a month’s worth of wages — leaving their bank accounts dangerously in the red.

The remainder of this post is a deep-dive into what we know so far about what transpired, and how such an occurrence might be prevented in the future for other payroll processing firms.

A $26 MILLION TEXT FILE

Understanding what’s at stake here requires a basic primer on how most of us get paid, which is a surprisingly convoluted process. In a typical scenario, our employer works with at least one third-party company to make sure that on every other Friday what we’re owed gets deposited into our bank account.

The company that handled that process for MyPayrollHR is a California firm called Cachet Financial Services. Every other week for more than 12 years, MyPayrollHR has submitted a file to Cachet that told it which employee accounts at which banks should be credited and by how much.

According to interviews with Cachet, the process worked something like this: MyPayrollHR would send a digital file documenting the deposits made by each of its client companies, laying out the amounts owed to each client’s employees. In turn, those funds from MyPayrollHR client firms would be deposited into a settlement or holding account maintained by Cachet.

From there, Cachet would take those sums and disburse them into the bank accounts of people whose employers used MyPayrollHR to manage their bi-weekly payroll payments.

But according to Cachet, something odd happened with the instructions file MyPayrollHR submitted on the afternoon of Wednesday, Sept. 4 that had never before transpired: MyPayrollHR requested that all of its clients’ payroll dollars be sent not to Cachet’s holding account but instead to an account at Pioneer Savings Bank that was operated and controlled by MyPayrollHR.

The total amount of this mass payroll deposit was approximately $26 million. Wendy Slavkin, general counsel for Cachet, told KrebsOnSecurity that her client then inquired with Pioneer Savings about the wayward deposit and was told MyPayrollHR’s bank account had been frozen.

Nevertheless, the payroll file submitted by MyPayrollHR instructed financial institutions for its various clients to pull $26 million from Cachet’s holding account — even though the usual deposits from MyPayrollHR’s client banks had not been made.

REVERSING THE REVERSAL

In response, Cachet submitted a request to reverse that transaction. But according to Slavkin, that initial reversal request was improperly formatted, and so Cachet soon after submitted a correctly coded reversal request.

Financial institutions are supposed to ignore or reject payment instructions that don’t comport with precise formatting required by the National Automated Clearinghouse Association (NACHA), the not-for-profit organization that provides the backbone for the electronic movement of money in the United States. But Slavkin said a number of financial institutions ended up processing both reversal requests, meaning a fair number of employees at companies that use MyPayrollHR suddenly saw a month’s worth of payroll payments withdrawn from their bank accounts.

Dan L’Abbe, CEO of the San Francisco-based consultancy Granite Solutions Groupe, said the mix-up has been massively disruptive for his 250 employees.

“This caused a lot of chaos for employers, but employees were the ones really affected,” L’Abbe said. “This is all very unusual because we don’t even have the ability to take money out of our employee accounts.”

Slavkin said Cachet managed to reach the CEO of MyPayrollHR — Michael T. Mann — via phone on the evening of Sept. 4, and that Mann said he would call back in a few minutes. According to Slavkin, Mann never returned the call. Not long after that, MyPayrollHR told clients that it was going out of business and that they should find someone else to handle their payroll.

In short order, many people hit by one or both payroll reversals took to Twitter and Facebook to vent their anger and bewilderment at Cachet and at MyPayrollHR. But Slavkin said Cachet ultimately decided to cancel the previous payment reversals, leaving Cachet on the hook for $26 million.

“What we have since done is reached out to 100+ receiving banks to have them reject both reversals,” Slavkin said. “So most — if not all — employees affected by this will in the next day or two have all their money back.”

THE VANISHING MANN

Cachet has since been in touch with the FBI and with federal prosecutors in New York, and Slavkin said both are now investigating MyPayrollHR and its CEO. On Monday, New York Governor Andrew Cuomo called on the state’s Department of Financial Services to investigate the company’s “sudden and disturbing shutdown.”

A tweet sent Sept. 11 by the FBI’s Albany field office.

The $26 million hit against Cachet wasn’t the only fraud apparently perpetrated by MyPayrollHR and/or its parent firm: According to Slavkin, the now defunct New York company also stiffed National Payment Corporation (NatPay) — the Florida-based firm which handles tax withholdings for MyPayrollHR clients — to the tune of more than $9 million.

In a statement provided to KrebsOnSecurity, NatPay said it was alerted late last week that the bank accounts of MyPayrollHR and one of its affiliated companies were frozen, and that the notification came after payment files were processed.

“NatPay was provided information that MyPayrollHR and Cloud Payroll may have been the victims of fraud committed by their holding company ValueWise, whose CEO and owner is Michael Mann,” NatPay said. “NatPay immediately put in place steps to manage the orderly process of recovering funds [and] has more than sufficient insurance to cover actions of attempted or real fraud.”

Requests for comment from different executives at both MyPayrollHR and its parent firm ValueWise Corp. went unanswered, and the latter’s Web site is now offline. Several erstwhile MyPayrollHR employees reached via LinkedIn said none of them had seen or heard from Mr. Mann in days.

Meanwhile, Granite Solutions Groupe CEO L’Abbe said some of his employees have seen their bank accounts credited back the money that was taken, while others are still waiting for those reversals to come through.

“It varies widely,” L’Abbe said. “Every bank processes differently, and everyone’s relationship with the bank is different. Others have absolutely no money right now and are having a helluva time with their bank believing this is all the result of fraud. Things are starting to settle down now, but a lot of employees are still in limbo with their bank.”

For its part, Cachet Financial says it will be looking at solutions to better detect when and if instructions from clients for funding its settlement accounts suddenly change.

“Our system is excellent at protecting against outside hackers,” Slavkin said. “But when it comes to something like this it takes everyone by complete surprise.”

LongNowLong-term Building in Japan

The Ise Shrine in Japan, which has been rebuilt every 20 years for over 1,400 years. 

When I started working with Stewart Brand over two decades ago, he told me about the ideas behind Long Now, and how we might build the seed for a very long-lived institution. One of the first examples he mentioned to me was Ise Shrine in Japan, which has been rebuilt every 20 years in adjacent sites for over 1,400 years. This shrine is made of ephemeral materials like wood and thatch, but its symbiotic relationship with the Shinto belief and craftsmen has kept a version of the temple standing since 692 CE. Over these past decades many of us at Long Now have conjured with these temples as an example of long-term thinking, but it had not occurred to me that I might some day visit them.

That is, until a few years ago, when I came across a news piece about the temples. It announced that the shrine’s foresters were harvesting the trees for the next rebuild, and I decided to do some research to find out how and when visitors could go see the one temple being replaced by the next. This research turned out to be very difficult, in part because of the language barrier, but also because the last rebuild took place well before the world wide web was anything close to ubiquitous. I kept my ear out and asked people who might know about the shrines, but did not get very far.

Then, one morning in late September, Danny Hillis called to tell me that Daniel Erasmus, a Long Now member in Holland, had learned that the shrine transfer ceremony would be taking place the following Saturday. Danny said he was going to try and meet Daniel in Ise, and wanted to know if he should document it. I told him he wouldn’t need to, because I was going to get on a plane and meet them there.

Ise Shrine

The next few days were a blur of difficult travel arrangements to a rural Japanese town where little English was spoken and lodging was already way over-booked. I was greatly aided by a colleague’s Japanese wife, who was able to find us a room in a traditional ryokan home-stay very close to the temples. I also put the word out about the trip, and Ping Fu from the Long Now Board decided to join us, as well.

Streets of Osaka.

A few days later I met Ping at SFO for our flight to Osaka. Danny Hillis and Daniel Erasmus would be coming in from Tokyo a day later. We would stay the night in Osaka and then take the train to Ise. I found out that one of the other sites in Japan I had always wanted to visit was also close by: the Buddhist temples of Nara, considered to be some of the oldest continuously standing wooden structures in the world. We would be visiting Nara after our visit to Ise.

After landing, Ping and I spent a jet-lagged evening wandering around the Blade Runner streets of Osaka to find a restaurant. In Japan the best local food and drink are often tiny neighborhood affairs that only seat 5–10 people. Ping’s ability to read Kanji characters, which transfer over from Chinese, proved to be very helpful in at least figuring out if a sign was for a restaurant or a bathhouse.

“Fast food” in Osaka.

The next morning we headed east on a train to Ise eating “fast food” — morsels of fish and rice wrapped in beautiful origami of leaves. This was not one of the bullet trains; Ise is a small city whose economy has been largely driven by Shinto pilgrims for the last two millennia. A few decades before the birth of Christ, a Japanese princess is said to have spent over twenty years wandering Japan, looking for the perfect place to worship. Around year 4 of the current era she found Ise, where she heard the spirits whisper that this “is a secluded and pleasant land. In this land I wish to dwell.” And thus Ise was established as the Shinto spiritual center of Japan.

This is probably a good time to say a bit more about Shinto. While it is referred to often as a religion with priests and temples, there is actually a much deeper explanation, as with most things in Japan. Shinto is the indigenous belief system that goes back to at least 6 centuries BCE and pre-dates any religions in Japan — including Buddhism, which did not arrive until a millennium or so later. Shinto is an animist world view, which believes that spirits, or Kami, are a part of all things. It is said that nearly all Japanese are Shinto, even though many would self-describe as non-religious, or Buddhist. There are no doctrines or prophets in Shinto; people give reverence to various Kami for different reasons throughout their day, week, or life.

Shinto Priest at Ise gates.

There are over 80,000 Shinto temples, or Jinja, in Japan, and hundreds of thousands of Shinto “priests” who administer them. Of all of these temples, the structures at Ise, collectively referred to as Jingū, are considered the most important and the most highly revered. And of these, the Naikū shrine, which we were there to see, tops them all, and only members of the Japanese imperial family or the senior priests are allowed near or in the shrine. The simple yet stunningly beautiful Kofun-era architecture of the temples dates back over 2500 years, and the traditional construction methods have been refined to an unbelievably high art — even when compared to other Japanese craft.

Roof detail at shrine at Ise.

My understanding of how this twenty-year cycle became a tradition is that these shrines were originally used as seed banks. Since these were made of wood, they would need to be replaced and the seed stock transferred from one to the other. The design of the buildings and even the thatch roof are highly evolved for this. When there are rains, the thatch roof gets heavier, weighing down the wood joinery and making it water-tight. In the dry season, it gets lighter and the gaps between the wood are allowed to breathe again, avoiding mold.

The streets of Ise.

On Friday afternoon we arrived at Ise and, within a short walk, had checked in at our very basic ryokan hotel. The location was perfect, however, as we were directly across from the Naikū shrine area entrance. The town of Ise lies in a mainly flat lowland area across the bay from Nagoya (to the North). Its temples are the end destination of a pilgrimage route which people used to traverse largely by foot, and over the last 2,000 years various food and accommodation services have evolved to cater to those visitors.

Arriving at the temple area.

Ping and I wandered toward the entry and met up with Danny, Daniel, and Maholo Uchida, a friend of Daniel’s who is a curator at the National Museum of Emerging Science and Innovation in Tokyo. Maholo would prove to be an absolutely amazing guide through the next 24 hours, and most of what I now understand about Ise and its customs comes from her.

Danny Hillis and Maholo Uchida purifying at the Temizuya.

We traversed a small bridge and passed a low pool of water with a small roof over it. These Temizuya basins, found at the entry to all Shinto shrines, are a place to purify yourself before entry. As with all things in Japan — especially visits to shrines — there is an order and ceremony to washing your hands and mouth at the Temizuya. After this purification, we headed into the forest on a wide path of light grey gravel that crunched underfoot.

Just where the forest begins, we approached a large and beautifully crafted Shinto arch. These are apparently made from the timbers of an earlier shrine after it has been deconstructed. Visitors generally pass through three consecutive arches to enter a Shinto shrine area. Maholo quickly educated us on how to bow as we passed under the first arch (it is different for entering versus leaving) and on proper path walking etiquette. It is apparently too prideful to walk in the middle of the path: you should walk to one side, which is generally — but not always — the left side. As with everything here, there was etiquette to follow which was steeped in tradition and rules that would take a lifetime to understand fully.

Danny Hillis bowing under the first arch.

As we walked from arch to arch, Maholo explained that the forest here had historically been used exclusively to harvest timbers for all the shrines, but over the last millennia they had been harvested too heavily for various war efforts, or lost in fire. Since the beginning of this century the shrines’ caretakers have been bringing these forests back, and expect them to be self-sustaining again within the next two or three rebuilding periods — 40 to 60 years from now.

Third arch approaching the grand shrine.

Passing through a sequence of arches, we arrived at the Naikū shrine sanctuary area. This area includes a place that sells commemorative gifts. At this point you might be thinking “tourist trap gift shop,” but this adjacent structure is at least centuries old and of course perfectly fits the aesthetic. Instead of cheap plastic trinkets and coffee mugs, it offered hand-screened prints on wood from the last temple deconstruction, as well as calligraphic stamps for your shrine ‘passport’.

The 2,000 year-old gift shop.

Adjacent to the gift shop is the walled-off section of the Naikū shrine. Visitors are allowed to approach one spot, where there is a gap in the wall, and see a glimpse of the main temples. On the left, the one completed in 01993 has begun to grey (pictured below), and on the right gleams the newly finished temple, a dual view only seen once every 20 years. After this event, they will begin disassembly of the old shrine, and will leave just a little doghouse-sized structure in its place for the next two decades.

The old shrine, grey with age.

The audience for this event consisted of only a few hundred people. Maholo explained that this rebuilding has been going on for eight years, and that many people come for different parts of the process, including the harvesting of the trees, the blessing of the tools, the milling of the timbers, the placement of the white river foundation stones, and so on.

As we stood there, crowds were gathering, and we noticed behind us a series of chests that were roped off in the courtyard area. Some of these were plain wood and some of them were lacquered. These chests contained the temple “treasures” that are moved from the old temple to the new. Some are re-created every 20 years by the greatest craftspeople in Japan, some have been moved from temple to temple for 14 centuries, and some are totally secret to all but the priests. The treasures are what the Kami spirits follow from one temple to the next as they are rebuilt. So the Shinto priests move the treasures when the new temple is ready, and the Kami spirits move sometime in the night to follow them in to their new home.

Treasure change ceremony at Ise.

As we took photos, a large group of priests and press started lining up. We were ushered over to the gift building area and held back by white gloved security personnel. It was a bit comical as they did not seem to know exactly what to do with us. Since this ceremony happens only every 20 years, it is unlikely that any of the staff were present at the last occasion: while this is one of the oldest events in the world, it is simultaneously brand new. It was very apparent that none of the ritual acts were performed for the audience. All of this ceremony was designed for the benefit of the Kami spirits, not for people’s entertainment, and much of what we saw were glimpses through trees from a distance. While it was hard to see everything, we all agreed that this perspective made the tradition much more magical and interesting than if it had all been laid bare.

Without fanfare, the princess of Japan led a march of hundreds of Ise priests down the path that we had just walked, and they all lined up in rows next to the chests. After a ceremony with nearly 30 minutes of bowing, the chests were carried into the sanctuary and placed into the new shrine (though this was out of view).

Then they came back out, lined up again, and went through a series of wave like bows before being led away by the princess.

All very calm, very simple, and without any hurrah. The Kami would soon follow the treasures into their new home.

What was a real surprise was to learn that there are 125 shrines in Ise: all are rebuilt every 20 years, but on different schedules. This is also done at other Shinto shrine sites, but not always every 20 years; some have cycles as long as 60 years. Once we were allowed to wander around again, we hiked up the hill to some of the other temples, all built for different Kami. Some recently-built shrines stood next to the ones awaiting deconstruction, and some stood alone. These are all made with similar design and unerring construction, and unlike the main temple, we were allowed to walk right up to these and take pictures.

A recently-built shrine stands next to an old one.

We left the forest on a different path as the sun set, bowing our exit bows twice after each of the three arches. We wandered through the town a bit and I suggested we find a local bar that offered the traditional Japanese “bottle keep” so we could drink half of a bottle and leave it on the shelf to return in 20 years for the other half.

Hopefully we’ll drink from this bottle again in 02033.

Maholo took us to a tiny alley where she peeked into a few shoji screens, eventually finding us the right place. It had only eight or so seats, and the proprietor was a lovely Japanese grandmother. We ordered a bottle of Suntory whiskey and began to pour.

The barkeep was amazed to find out how far we had traveled to see the ceremony, and put our dated Long Now bottle on the highest shelf in a place of honor.

Afterwards, Maholo had arranged for us to have dinner at a beautiful ryokan with one of the Shinto priests, who had come in from Tokyo to help with the events in Ise. We were served course after course of incredible seafood while he gracefully answered our questions, all translated by Maholo.

We learned that the priests who run Ise are their own special group within the Shinto organization, and don’t really follow the line of the main organization. For instance, when several of the Shinto temples were offered UNESCO world heritage site status, they politely declined. I can just imagine them wondering why they would need an organization like UNESCO, that is not even half a century old, to tell them that they had achieved “historic” status. I suspect that maybe in a millennium or two, if UNESCO is still around, they might reconsider.

The priests bringing the Kami their first meal.

The next morning we returned to Naikū to catch a glimpse through the trees of the priests bringing the Kami their first meal. The Kami are fed in the morning and evening of each day from a kitchen building behind the temple sanctuary. We watched priests and their assistants bringing in chests of food as we chatted with an American who works for the Shinto central office in Tokyo. He had put together a beautiful book about the shrines at Ise, The Soul of Japan, to which he later sent me a link to share in this report.

Afterwards, we also visited the small but amazing museum at Ise that displays some of the “treasures” from past shrines, a temple simulacrum, and a display documenting the 1400-year reconstruction history along with the beautiful Japanese tools used for building the shrines.

Bridge to the Gekū shrines.

Then Maholo took us to the Gekū shrine areas, a few kilometers away, which allow much more access. These shrines, and the bridge that leads to them, are also built on the alternating-site, 20-year cycle. But here you walk on the right, and there are four arches — I could not find out why. Most interesting, however, is that in World War II the Japanese emperor ordered a rare temporary delay in shrine rebuilding. While the people of Ise could not defy him, they realized that he had only mentioned the shrines, so they went ahead and rebuilt the bridge as scheduled in the middle of a war-torn year.

Finally, we headed to the train station, from where Danny and Daniel would travel to Kyoto for their flights, and Maholo would return to Tokyo. Ping and I later boarded the train to Osaka to stay the night, and then headed to Nara prefecture the next day.

Entering Hōryū-ji

Hōryū-ji at Nara

Only 45 minutes by train from Osaka is the stop at Hōryū-ji, a bit before you get to Nara center. Almost concurrently with the building of the first shrine at Ise in the 7th century, a complex of Buddhist temples was built here beginning in 607 CE.

The tall pagoda at Hōryū-ji is one of the oldest continuously standing structures in the world. And while there is controversy over which parts of this temple complex are original, the central vertical pillar of wood in the pagoda was definitively felled in 594.

The architecture has a strong Chinese influence, reflecting the route Buddhism traveled before arriving in Japan, and came with a tradition of continual maintenance rather than periodic rebuilding. 

Roof detail at Hōryū-ji

I suspect one of the main reasons these buildings have survived so long is their ceramic roofs. The roof tiles can last centuries and are vastly less susceptible to fire than wood or thatch. Like the Shinto shrines, though, no one resides in these buildings, so the chance of human error starting a blaze is vastly diminished. I was amused to see the “no smoking” sign as we entered one of the temples.

No smoking sign at Hōryū-ji

As you walk through these temples there are many beautiful little maintenance details. Places where water would have wicked into the bottom of a pillar or around the edge of a metal detail have been carefully removed, with new wood spliced back in over the centuries.

It is striking that this part of Japan houses two sets of structures, both of nearly equal age, and both made of largely ephemeral materials that have lasted over 14 centuries through totally different mechanisms and religions. Both require a continuous, diligent and respectful civilization to sustain them, yet one is punctuated and episodic, while the other is gradual. Both are great models for how to make a building, or an institution, last through millennia.


Learn More

  • Read Alexander Rose’s recent essay in BBC Future, “How to Build Something that Lasts 10,000 Years.”
  • See more photos from Alexander Rose’s trip to Japan here.
  • Read Soul of Japan: An Introduction to Shinto and Ise Jingu (02013) in full here.

CryptogramOn Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart's law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.

Worse Than FailureCodeSOD: ImAlNumb?

I think it’s fair to say that C, as a language, has never had a particularly great story for working with text. Individual characters are okay, but strings are a nightmare. The need to support unicode has only made that story a little more fraught, especially as older code now suddenly needs to support extended characters. And by “older” I mean, “wchar was added in 1995, which is practically yesterday in C time”.

Lexie inherited some older code. It was not designed to support unicode, which is certainly a problem in 2019, and it’s the problem Lexie was tasked with fixing. But it had an… interesting approach to deciding if a character was alphanumeric.

Now, if we limit ourselves to ASCII, there are a variety of ways we could do this check. We could convert the character to a number and do a simple check: characters 48–57 are numeric, and 65–90 and 97–122 cover the alphabetic characters. But that’s a conditional expression -- six comparison operations! So maybe we should be more clever. There is a built-in library function, isalnum, which might be more optimized, and is available on Lexie’s platform. But we’re dedicated to really doing some serious premature optimization, so there has to be a better way.

bool isalnumCache[256] =
{false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, true, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false};

This is a lookup table: convert your character to an integer, and then use it to index the array. This is fast. It’s also error-prone, and this particular block incorrectly identifies at least one non-alphanumeric character -- “{”, at index 123 -- as alphanumeric. It also 100% fails if you are dealing with wchar_t, which is how Lexie ended up looking at this block in the first place.
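
For what it’s worth, the standard library already provides this check, wide characters included. A minimal sketch, assuming the goal is simply “is this character alphanumeric in the current locale”:

#include <ctype.h>
#include <wchar.h>
#include <wctype.h>
#include <stdbool.h>

/* Let the C library do the check: there is no table to get wrong,
   and the wide variant keeps working once wchar_t enters the picture. */
bool is_alnum_narrow(char c)
{
    /* isalnum expects an unsigned char value (or EOF) */
    return isalnum((unsigned char)c) != 0;
}

bool is_alnum_wide(wchar_t wc)
{
    /* iswalnum is the wchar_t-aware equivalent from <wctype.h> */
    return iswalnum((wint_t)wc) != 0;
}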

Planet DebianBenjamin Mako Hill: How Discord moderators build innovative solutions to problems of scale with the past as a guide

Both this blog post and the paper it describes are collaborative work led by Charles Kiene with Jialun “Aaron” Jiang.

Introducing new technology into a workplace is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each had recently adopted the chat platform Discord to support real-time conversation in their community. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

What Challenges Emerged in Discord?

Discord’s text channels allow for more natural, in the moment conversations compared to Reddit. In Discord, this social aspect also made moderation work much more difficult. One moderator explained:

“It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

“On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

How Did Moderation Teams Overcome These Challenges?

The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

Guided by their experience with tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would search for regular expressions and URLs that go against the community’s rules:

“It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”

Bots were also used to replace Discord’s Audit Log feature with what moderators often referred to as “Mod logs”—another term borrowed from Reddit. Moderators send a bot a command like “!warn username” to record that a member of their community has been warned for breaking a rule; the bot automatically stores this information in a private text channel in Discord. This information helps organize records about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions against other community members.
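
As a rough illustration, such a bot might look like the following sketch, written with the discord.py library (the command name, the “mod-logs” channel, and the permission check are invented for illustration):

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True  # needed to resolve Member arguments
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(kick_members=True)  # moderators only
async def warn(ctx, member: discord.Member, *, reason: str = "no reason given"):
    """Record a warning in a private mod-log text channel."""
    log_channel = discord.utils.get(ctx.guild.text_channels, name="mod-logs")
    await log_channel.send(f"{member} warned by {ctx.author}: {reason}")
    await ctx.send(f"{member.mention} has been warned.")

bot.run("BOT-TOKEN")  # placeholder token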

Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like the “Modmail” helps moderation teams manage this problem by mediating communication to community members with a bot:

“So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams essentially created and used bots to transform aspects of Discord, like text channels into Mod logs and Mod Mail, to resemble the same tools they were using to moderate their communities on Reddit. 

What Does This Mean for Online Communities?

We think that the experience of the moderators we interviewed points to a potentially important, overlooked source of value for groups navigating technological change: the potent combination of users’ past experience and their ability to redesign and reconfigure their technological environments. Our work suggests the value of innovation platforms like APIs and bots is not only that they allow the discovery of “new” things. Their value also flows from the fact that they allow the re-creation of the things that communities already know can solve their problems and that they already know how to use.


For more details, check out the full 23-page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.

,

Planet DebianMarkus Koschany: My Free Software Activities in August 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Misc

  • I fixed two minor CVE in binaryen, a compiler and toolchain infrastructure library for WebAssembly, by packaging the latest upstream release.

Debian LTS

This was my 42nd month as a paid contributor and I have been paid to work 21.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 12.08.2019 until 18.08.2019 and from 09.09.2019 until 10.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in kde4libs, apache2, nodejs-mysql, pdfresurrect, nginx, mongodb, nova, radare2, flask, bundler, giflib, ansible, zabbix, salt, imapfilter, opensc and sqlite3.
  • DLA-1886-2. Issued a regression update for openjdk-7. The regression was caused by the removal of several classes in rt.jar by upstream. Since Debian never shipped the SunEC security provider, SSL connections based on elliptic curve algorithms could no longer be established. The problem was solved by building sunec.jar and its native library libsunec.so from source. An update of the nss source package was required too, which resolved a five-year-old bug (#750400).
  • DLA-1900-1. Issued a security update for apache2 fixing 2 CVE; three more CVE did not affect the version in Jessie.
  • DLA-1914-1. Issued a security update for icedtea-web fixing 3 CVE.
  • I have been working on a backport of opensc, a set of libraries and utilities to access smart cards that support cryptographic operations, from Stretch, which will fix more than a dozen CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my fifteenth month, and I have been assigned to work 15 hours on ELTS, of which I used 10.

  • I was in charge of our ELTS frontdesk from 26.08.2019 until 01.09.2019 and triaged CVE in dovecot, libcommons-compress-java, clamav, ghostscript and gosa as end-of-life because security support for them has ended in Wheezy. There were no new issues for supported packages. All in all, this was a rather unspectacular week.
  • ELA-156-1. Issued a security update for linux fixing 9 CVE.
  • ELA-154-2. Issued a regression update for openjdk-7 and nss because the removed classes in rt.jar caused the same issues in Wheezy too.

Thanks for reading and see you next time.

Krebs on SecurityPatch Tuesday, September 2019 Edition

Microsoft today issued security updates to plug some 80 security holes in various flavors of its Windows operating systems and related software. The software giant assigned a “critical” rating to almost a quarter of those vulnerabilities, meaning they could be used by malware or miscreants to hijack vulnerable systems with little or no interaction on the part of the user.

Two of the bugs quashed in this month’s patch batch (CVE-2019-1214 and CVE-2019-1215) involve vulnerabilities in all supported versions of Windows that have already been exploited in the wild. Both are known as “privilege escalation” flaws in that they allow an attacker to assume the all-powerful administrator status on a targeted system. Exploits for these types of weaknesses are often deployed along with other attacks that don’t require administrative rights.

September also marks the fourth time this year Microsoft has fixed critical bugs in its Remote Desktop Protocol (RDP) feature, with four critical flaws being patched in the service. According to security vendor Qualys, these Remote Desktop flaws were discovered in a code review by Microsoft, and in order to exploit them an attacker would have to trick a user into connecting to a malicious or hacked RDP server.

Microsoft also fixed another critical vulnerability in the way Windows handles link files ending in “.lnk” that could be used to launch malware on a vulnerable system if a user were to open a removable drive or access a shared folder with a booby-trapped .lnk file on it.

Shortcut files — or those ending in the “.lnk” extension — are Windows files that link easy-to-recognize icons to specific executable programs, and are typically placed on the user’s Desktop or Start Menu. It’s perhaps worth noting that poisoned .lnk files were one of the four known exploits bundled with Stuxnet, a multi-million dollar cyber weapon that American and Israeli intelligence services used to derail Iran’s nuclear enrichment plans roughly a decade ago.

In last month’s Microsoft patch dispatch, I ruefully lamented the utter hose job inflicted on my Windows 10 system by the July round of security updates from Redmond. Many readers responded by saying that one or another of the updates released by Microsoft in August similarly caused reboot loops or issues with Windows repeatedly crashing.

As there do not appear to be any patch-now-or-be-compromised-tomorrow flaws in the September patch rollup, it’s probably safe to say most Windows end-users would benefit from waiting a few days to apply these fixes. 

Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

The trouble is, Windows 10 by default will install patches and reboot your computer whenever it likes. Here’s a tutorial on how to undo that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Most importantly, please have some kind of system for backing up your files before applying any updates. You can use third-party software to do this, or just rely on the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule.

Finally, Adobe fixed two critical bugs in its Flash Player browser plugin, which is bundled in Microsoft’s IE/Edge and Chrome (although now hobbled by default in Chrome). Firefox forces users with the Flash add-on installed to click in order to play Flash content; instructions for disabling or removing Flash from Firefox are here. Adobe will stop supporting Flash at the end of 2020.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Cory DoctorowCharles de Lint on Radicalized

I’ve been a Charles de Lint fan since I was a kid (see photographic evidence, above, of a 13-year-old me attending one of Charles’s signings at Bakka Books in 1984!), and so I was absolutely delighted to read his kind words in his books column in Fantasy and Science Fiction for my latest book, Radicalized. This book has received a lot of critical acclaim (“among my favorite things I’ve read so far this year”), but to get such a positive notice from Charles is wonderful on a whole different level.

The stories, like “The Masque of the Red Death,” are all set in a very near future. They tackle immigration and poverty, police corruption and brutality, the U.S. health care system and the big pharma companies. None of this is particularly cheerful fodder. The difference is that each of the other three stories gives us characters we can really care about, and allows for at least the presence of some hopefulness.

“Unauthorized Bread” takes something we already have and projects it into the future. You’ve heard of Juicero? It’s a Wi-Fi juicer that only lets you use the proprietary pre-chopped produce packs that you have to buy from the company. Produce you already have at home? It doesn’t work because it doesn’t carry the required codes that will let the machine do its work.

In the story, a young woman named Salima discovers that her toaster won’t work, so she goes through the usual steps one does when electronics stop working. Unplug. Reset to factory settings. Finally…

“There was a touchscreen option on the toaster to call support but that wasn’t working, so she used the fridge to look up the number and call it.”

I loved that line.

Books To Look For [Charles de Lint/F&SF]

Planet DebianErich Schubert: Altmetrics of a Retraction Notice

As pointed out by RetractionWatch, AltMetrics even tracks the metrics of retraction notices.

This retraction notice has an AltMetric score of 9 as I write, and it will grow with every mention on blogs (such as this) and Twitter. Even worse, just one blog post and one tweet by RetractionWatch were enough to put the retraction notice “In the top 25% of all research outputs”.

In my opinion, this shows how unreliable these altmetrics are. They are based on the false assumption that Twitter and blogs are central to (or at least representative of) academic importance and attention. But given the very low usage rates of these media by academics, this does not appear to work well, except for a few high-profile papers.

Existing citation indexes, with all their drawbacks, may still be more useful.

Planet DebianJonathan McDowell: Making xinput set-button-map permanent

Since 2006 I’ve been buying a Logitech Trackman Marble (or, as Amazon calls it, a USB Marble Mouse) for both my home and work setups (they don’t die, I just seem to lose them somehow). It’s got a solid feel to it, helps me avoid RSI twinges and when I’m thinking I can take the ball out and play with it. It has 4 buttons, but I find the small one on the right inconvenient to use so I treat it as a 3-button device (the lack of scroll wheel functionality doesn’t generally annoy me). The problem is that the small leftmost button defaults to “Back”, rather than “Middle button”. You can fix this with xinput:

xinput set-button-map "Logitech USB Trackball" 1 8 3 4 5 6 7 2 9

but remembering to do that every boot is annoying. I could put it in a script, but a better approach is to drop the following into /usr/share/X11/xorg.conf.d/50-marblemouse.conf (the fact that it’s in /usr/share instead of /etc or ~ is why it took me so long to figure out how I’d done it on my laptop when setting up my new machine):

Section "InputClass"
    Identifier      "Marble Mouse"
    MatchProduct    "Logitech USB Trackball"
    MatchIsPointer  "on"
    MatchDevicePath "/dev/input/event*"
    Driver          "evdev"
    Option          "SendCoreEvents" "true"

    #  Physical buttons come from the mouse as:
    #     Big:   1 3
    #     Small: 8 9
    #
    # This makes left small button (8) into the middle, and puts
    #  scrolling on the right small button (9).
    #
    Option "Buttons"            "9"
    Option "ButtonMapping"      "1 8 3 4 5 6 7 2 9"
    Option "EmulateWheel"       "true"
    Option "EmulateWheelButton" "9"

EndSection
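
If the MatchProduct string doesn’t match the name X sees for the device, the snippet silently does nothing. A quick sanity check (a minimal sketch, assuming the stock xinput utility is installed):

# Confirm the device name (it should match MatchProduct above)
xinput list --name-only | grep -i trackball

# After restarting X, verify the new mapping actually took effect
xinput get-button-map "Logitech USB Trackball"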

This post exists solely for the purpose of reminding future me how I did this on my Debian setup (given that it’s taken me way too long to figure out how I did it 2+ years ago) and apparently original credit goes to Ubuntu for their Logitech Marblemouse USB page.

Worse Than FailureDeath by Consumption

Tryton Party Module Address Database Diagram

The task was simple: change an AMQ consumer to insert data into a new Oracle database instead of an old MS-SQL database. It sounded like the perfect task for the new intern, Rodger; Rodger was fresh out of a boot camp and ready for the real world, if he could only get a little experience under his belt. The kid was as bright as they come, but boot camp only does so much, after all.

But there are always complications. The existing service was installed on the old app servers that weren't set up to work with the new corporate app deployment tool. The fix? To uninstall the service on the old app servers and install it on the new ones. Okay, simple enough, if not well suited to the intern.

Rodger got permissions to set up the service on his local machine so he could test his install scripts, and a senior engineer got an uninstall script working as well, so they could seamlessly switch over to the new machines. They flipped the service; deployment day came, and everything went smoothly. The business kicked off their process, the consumer service picked up their message and inserted data correctly to the new database.

The next week, the business kicked off their process again. After the weekend, the owners of the old database realized that the data was inserted into the old database and not the new database. They promptly asked how this had happened. Rodger and his senior engineer friend checked the queue; it correctly had two consumers set up, pointing at the new database. Just to be sure, they also checked the old servers to make sure the service was correctly uninstalled and removed by tech services. All clear.

Hours later, the senior engineer refreshed the queue monitor and saw the queue now had three consumers despite the new setup having only two servers. But how? They checked all three servers—two new and one old—and found no sign of a rogue process.

By that point, Rodger was online for his shift, so the senior engineer headed over to talk to him. "Say, Rodger, any chance one of your installs duplicated itself or inserted itself twice into the consumer list?"

"No way!" Rodger replied. "Here, look, you can see my script, I'll run it again locally to show you."

Running it locally ... with dawning horror, the senior engineer realized what had happened. Rodger had the install script, but not the uninstall—meaning he had a copy still running on his local developer laptop, connected to the production queue, but with the old config for some reason. Every time he turned on his computer, hey presto, the service started up.

The moral of the story: always give the intern the destructive task, not the constructive one. That can't go wrong, right?

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Cory DoctorowPodcast: DRM Broke Its Promise

In my latest podcast (MP3), I read my new Locus column, DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

MP3

,

Planet DebianIain R. Learmonth: Spoofing commits to repositories on GitHub

The following has already been reported to GitHub via HackerOne. Someone from GitHub has closed the report as “informative” but told me that it’s a known low-risk issue. As such, while they haven’t explicitly said so, I figure they don’t mind me blogging about it.

Check out this commit in torvalds’ linux.git on GitHub. In case this is fixed, here’s a screenshot of what I see when I look at this link:

GitHub page showing a commit in torvalds/linux with the commit message add super evil code

How did this get past review? It didn’t. You can spoof commits in any repo on GitHub due to the way they handle forks of repositories internally. Instead of copying repositories when forks occur, the objects in the git repository are shared and only the refs are stored per-repository. (GitHub tell me that private repositories are handled differently to avoid private objects leaking out this way. I didn’t verify this but I have no reason to suspect it is not true.)

To reproduce this:

  1. Fork a repository
  2. Push a commit to your fork
  3. Put your commit ref on the end of:
https://github.com/[parent]/[repo]/commit/

That’s all there is to it. You can also add .diff or .patch to the end of the URL and those URLs work too, in the namespace of the parent.
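
Concretely, the reproduction looks something like this (a sketch; "you" stands in for your GitHub account, and the final URL is built from whatever hash your push produces):

$ git clone https://github.com/you/linux.git && cd linux
$ git commit --allow-empty -m "add super evil code"
$ git push origin HEAD

# Take the hash of the new commit...
$ git rev-parse HEAD
# ...and view it in the parent's namespace:
#   https://github.com/torvalds/linux/commit/<hash>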

The situation that worries me relates to distribution packaging. Debian has a policy that deltas to packages in the stable repository should be as small as possible, targeting fixes by backporting patches from newer releases.

If you get a bug report on your Debian package with a link to a commit on GitHub, you had better double check that this commit really did come from the upstream author and hasn’t been spoofed in this way. Even if it shows it was authored by the upstream’s GitHub account or email address, this still isn’t proof because this is easily spoofed in git too.

The best defence against being caught out by this is probably signed commits, but if the upstream is not doing that, you can clone the repository from GitHub and check to see that the commit is on a branch that exists in the upstream repository. If the commit is in another fork, the upstream repo won’t have a ref for a branch that contains that commit.
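
Something like this performs that check (a minimal sketch; the commit hash is hypothetical):

$ git clone https://github.com/[parent]/[repo].git && cd [repo]
$ git branch -r --contains abc1234
# Any output lists the upstream branches containing the commit.
# No output (or an unknown-object error) means the commit only
# exists in someone's fork.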

Krebs on SecuritySecret Service Investigates Breach at U.S. Govt IT Contractor

The U.S. Secret Service is investigating a breach at a Virginia-based government technology contractor that saw access to several of its systems put up for sale in the cybercrime underground, KrebsOnSecurity has learned. The contractor claims the access being auctioned off was to old test systems that do not have direct connections to its government partner networks.

In mid-August, a member of a popular Russian-language cybercrime forum offered to sell access to the internal network of a U.S. government IT contractor that does business with more than 20 federal agencies, including several branches of the military. The seller bragged that he had access to email correspondence and credentials needed to view databases of the client agencies, and set the opening price at six bitcoins (~USD $60,000).

A review of the screenshots posted to the cybercrime forum as evidence of the unauthorized access revealed several Internet addresses tied to systems at the U.S. Department of Transportation, the National Institutes of Health (NIH), and U.S. Citizenship and Immigration Services (USCIS), a component of the U.S. Department of Homeland Security that manages the nation’s naturalization and immigration system.

Other domains and Internet addresses included in those screenshots pointed to Miracle Systems LLC, an Arlington, Va. based IT contractor that states on its site that it serves 20+ federal agencies as a prime contractor, including the aforementioned agencies.

In an interview with KrebsOnSecurity, Miracle Systems CEO Sandesh Sharda confirmed that the credentials and databases being auctioned were indeed managed by his company, and that an investigating agent from the Secret Service was in his firm’s offices at that very moment looking into the matter.

But he maintained that the purloined data shown in the screenshots was years-old and mapped only to internal test systems that were never connected to its government agency clients.

“The Secret Service came to us and said they’re looking into the issue,” Sharda said. “But it was all old stuff [that was] in our own internal test environment, and it is no longer valid.”

Still, Sharda did acknowledge information shared by Wisconsin-based security firm Hold Security, which alerted KrebsOnSecurity to this incident, indicating that at least eight of its internal systems had been compromised on three separate occasions between November 2018 and July 2019 by Emotet, a malware strain usually distributed via malware-laced email attachments that typically is used to deploy other malicious software.

The Department of Homeland Security did not respond to requests for comment, nor did the Department of Transportation. A spokesperson for the NIH said the agency had investigated the activity and found it was not compromised by the incident.

“As is the case for all agencies of the Federal Government, the NIH is constantly under threat of cyber-attack,” NIH spokesperson Julius Patterson said. “The NIH has a comprehensive security program that is continuously monitoring and responding to security events, and cyber-related incidents are reported to the Department of Homeland Security through the HHS Computer Security Incident Response Center.”

One of several screenshots offered by the dark web seller as proof of access to a federal IT contractor later identified as Arlington, Va. based Miracle Systems. Image: Hold Security.

The dust-up involving Miracle Systems comes amid much hand-wringing among U.S. federal agencies about how best to beef up and ensure security at a slew of private companies that manage federal IT contracts and handle government data.

For years, federal agencies had few options to hold private contractors to the same security standards to which they must adhere — beyond perhaps restricting how federal dollars are spent. But recent updates to federal acquisition regulations allow agencies to extend those same rules to vendors, enforce specific security requirements, and even kill contracts that are found to be in violation of specific security clauses.

In July, DHS’s Customs and Border Protection (CBP) suspended all federal contracts with Perceptics, a contractor which sells license-plate scanners and other border control equipment, after data collected by the company was made available for download on the dark web. CBP later said the breach was the result of a federal contractor copying data on its corporate network, which was subsequently compromised.

For its part, the Department of Defense recently issued long-awaited cybersecurity standards for contractors who work with the Pentagon’s sensitive data.

“This problem is not necessarily a tier-one supply level,” DOD Chief Information Officer Dana Deasy told the Senate Armed Services Committee earlier this year. “It’s down when you get to the tier-three and the tier-four” subcontractors.

Planet DebianBen Hutchings: Distribution kernels at Linux Plumbers Conference 2019

I'm attending the Linux Plumbers Conference in Lisbon from Monday to Wednesday this week. This morning I followed the "Distribution kernels" track, organised by Laura Abbott.

I took notes, included below, mostly with a view to what could be relevant to Debian. Other people took notes in Etherpad. There should also be video recordings available at some point.

Upstream 1st: Tools and workflows for multi kernel version juggling of short term fixes, long term support, board enablement and features with the upstream kernel

Speaker: Bruce Ashfield, working on Yocto at Xilinx.

Details: https://linuxplumbersconf.org/event/4/contributions/467/

Yocto's kernel build recipes need to support multiple active kernel versions (3+ supported streams), multiple architectures, and many different boards. Many patches are required for hardware and other feature support including -rt and aufs.

Goals for maintenance:

  • Changes w.r.t. upstream are visible as discrete patches, so rebased rather than merged
  • Common feature set and configuration
  • Different feature enablements
  • Use as few custom tools as possible

Other distributions have similar goals but very few tools in common. So there is a lot of duplicated effort.

Supporting developers, distro builds and end users is challenging. E.g. developers complained about Yocto having separate git repos for different kernel versions, as this led to them needing more disk space.

Yocto solution:

  • Config fragments, patch tracking repo, generated tree(s)
  • Branched repository with all patches applied
  • Custom change management tools

Using Yocto to build a distro and maintain a kernel tree

Speaker: Senthil Rajaram & Anatol Belski from Microsoft

Details: https://linuxplumbersconf.org/event/4/contributions/469/

Microsoft chose Yocto as build tool for maintaining Linux distros for different internal customers. Wanted to use a single kernel branch for different products but it was difficult to support all hardware this way.

Maintaining config fragments and a sensible inheritance tree is difficult (?). It might be helpful to put config fragments upstream.

Laura Abbott said that the upstream kconfig system had some support for fragments now, and asked what sort of config fragments would be useful. There seemed to be consensus on adding fragments for specific applications and use cases like "what Docker needs".
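
For reference, kernel config fragments are merged with scripts/kconfig/merge_config.sh; a fragment along the lines of "what Docker needs" might look something like this (an illustrative sketch, not from the talk; the fragment name and option list are mine):

# docker.config: an illustrative fragment
CONFIG_NAMESPACES=y
CONFIG_CGROUPS=y
CONFIG_OVERLAY_FS=y
CONFIG_VETH=y
CONFIG_BRIDGE=y

# Merge it on top of an existing .config
scripts/kconfig/merge_config.sh -m .config docker.config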

Kernel build should be decoupled from image build, to reduce unnecessary rebuilding.

Initramfs is unpacked from cpio, which doesn't support SELinux. So they build an initramfs into the kernel, and add a separate initramfs containing a squashfs image which the initramfs code will switch to.

Making it easier for distros to package kernel source

Speaker: Don Zickus, working on RHEL at Red Hat.

Details: https://linuxplumbersconf.org/event/4/contributions/466/

Fedora/RHEL approach:

  • Makefile includes Makefile.distro
  • Other distro stuff goes under distro sub-directory (merge or copy)
  • Add targets like fedora-configs, fedora-srpm

Lots of discussion about whether config can be shared upstream, but no agreement on that.

Kyle McMartin(?): Everyone does the hierarchical config layout - like generic, x86, x86-64 - can we at least put this upstream?

Monitoring and Stabilizing the In-Kernel ABI

Speaker: Matthias Männich, working on Android kernel at Google.

Details: https://linuxplumbersconf.org/event/4/contributions/468/

Why does Android need it?

  • Decouple kernel vs module development
  • Provide single ABI/API for vendor modules
  • Reduce fragmentation (multiple kernel versions for same Android version; one kernel per device)

Project Treble made most of Android user-space independent of device. Now they want to make the kernel and in-tree modules independent too. For each kernel version and architecture there should be a single ABI. Currently they accept one ABI bump per year. Requires single kernel configuration and toolchain. (Vendors would still be allowed to change configuration so long as it didn't change ABI - presumably to enable additional drivers.)

ABI stability is scoped - i.e. they include/exclude which symbols need to be stable.

ABI is compared using libabigail, not genksyms. (Looks like they were using it for libraries already, so now using it for kernel too.)

Q: How can we ignore compatible struct extensions with libabigail?

A: (from Dodji Seketeli, main author) You can add specific "suppressions" for such additions.
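
For reference, a minimal libabigail comparison along these lines might look like the following (a sketch, not from the talk; the file names are hypothetical):

# Extract ABI representations from two kernel builds
abidw --out-file abi-base.xml vmlinux-base
abidw --out-file abi-new.xml vmlinux-new

# Compare them, with a suppression file for deliberate changes
abidiff --suppressions suppressions.txt abi-base.xml abi-new.xml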

KernelCI applied to distributions

Speaker: Guillaume Tucker from Collabora.

Details: https://linuxplumbersconf.org/event/4/contributions/470/

Can KernelCI be used to build distro kernels?

KernelCI currently builds arbitrary branch with in-tree defconfig or small config fragment.

Improvements needed:

  • Preparation steps to apply patches, generate config
  • Package result
  • Track OS image version that kernel should be installed in

Some in audience questioned whether building a package was necessary.

Possible further improvements:

  • Enable testing based on user-space changes
  • Product-oriented features, like running installer

Should KernelCI be used to build distro kernels?

Seems like a pretty close match. Adding support for different use-cases is healthy for the KernelCI project. It will help distro kernels stay close to upstream, and distro vendors will then want to contribute to KernelCI.

Discussion

Someone pointed out that this is not only useful for distributions. Distro kernels are sometimes used in embedded systems, and the system builders also want to check for regressions on their specific hardware.

Q: (from Takashi Iwai) How long does testing typically take? SUSE's full automated tests take ~1 week.

A: A few hours to build, depending on system load, and up to 12 hours to complete boot tests.

Automatically testing distribution kernel packages

Speaker: Alice Ferrazzi of Gentoo.

Details: https://linuxplumbersconf.org/event/4/contributions/471/

Gentoo wants to provide safe, tested kernel packages. Currently testing gentoo-sources and derived packages. gentoo-sources combines upstream kernel source and "genpatches", which contains patches for bug fixes and target-specific features.

Testing multiple kernel configurations - allyesconfig, defconfig, other reasonable configurations. Building with different toolchains.

Tests are implemented using buildbot. Kernel is installed on top of a Gentoo image and then booted in QEMU.

Generalising for discussion:

  • Jenkins vs buildbot vs other
  • Beyond boot testing, like LTP and kselftest
  • LAVA integration
  • Supporting other configurations
  • Any other Gentoo or meta-distro topic

Don Zickus talked briefly about Red Hat's experience. They eventually settled on Gitlab CI for RHEL.

Some discussion of what test suites to run, and whether they are reliable. Varying opinions on LTP.

There is some useful scripting for different test suites at https://github.com/linaro/test-definitions.

Tim Bird talked about his experience testing with Fuego. A lot of the test definitions there aren't reusable. kselftest currently is hard to integrate because tests are supposed to follow TAP13 protocol for reporting but not all of them do!

Distros and Syzkaller - Why bother?

Speaker: George Kennedy, working on virtualisation at Oracle.

Details: https://linuxplumbersconf.org/event/4/contributions/473/

Which distros are using syzkaller? Apparently Google uses it for Android, ChromeOS, and internal kernels.

Oracle is using syzkaller as part of CI for Oracle Linux. "syz-manager" schedules jobs on dedicated servers. There is a cron job that automatically creates bug reports based on crashes triggered by syzkaller.

Google's syzbot currently runs syzkaller on GCE. Planning to also run on QEMU with a wider range of emulated devices.

How to make syzkaller part of distro release process? Need to rebuild the distro kernel with config changes to make syzkaller work better (KASAN, KCOV, etc.) and then install kernel in test VM image.
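
For reference, the config changes implied here amount to a fragment along these lines (a sketch; see the syzkaller documentation for the authoritative list for your kernel version):

# Coverage collection used by syzkaller
CONFIG_KCOV=y
CONFIG_KCOV_INSTRUMENT_ALL=y
# Detect memory-safety bugs as they happen
CONFIG_KASAN=y
CONFIG_KASAN_INLINE=y
# Symbolized crash reports
CONFIG_DEBUG_INFO=y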

How to correlate crashes detected on distro kernel with those known and fixed upstream?

Example of benefit: syzkaller found regression in rds_sendmsg, fixed upstream and backported into the distro, but then regressed in Oracle Linux. It turned out that patches to upgrade rds had undone the fix.

syzkaller can generate test cases that fail to build on old kernel versions due to symbols missing from UAPI headers. How to avoid this?

Q: How often does this catch bugs in the distro kernel?

A: It doesn't often catch new bugs but does catch missing fixes and regressions.

Q: Is anyone checking the syzkaller test cases against backported fixes?

A: Yes [but it wasn't clear who or when]

Google has public database of reproducers for all the crashes found by syzbot.

Wish list:

  • Syzkaller repo tag indicating which version is suitable for a given kernel version's UAPI
  • tarball of syzbot reproducers

Other possible types of fuzzing (mostly concentrated on KVM):

  • They fuzz MSRs, control & debug regs with "nano-VM"
  • Missing QEMU and PCI fuzzing
  • Intel and AMD virtualisation work differently, and AMD may not be covered well
  • Missing support for other architectures than x86

Worse Than FailureCodeSOD: Making a Nest

Tiffany started the code review with an apology. "I only did this to stay in style with the existing code, because it's either that or we rewrite the whole thing from scratch."

Jim J, who was running the code review, nodded. Before Tiffany, this application had been designed from the ground up by Armando. Armando had gone to a tech conference, and learned about F#, and how all those exciting functional features were available in C#, and returned jabbering about "immutable data" and "functors" and "metaprogramming" and decided that he was now a functional programmer, who just happened to work in C#.

Some struggling object-oriented developers use dictionaries for everything. As a struggling functional programmer, Armando used tuples for everything. And these tuples would get deeply nested. Sometimes, you needed to flatten them back out.

Tiffany had contributed this method to do that:

public static Result<Tuple<T1, T2, T3, T4, T5>> FlatternTupleResult<T1, T2, T3, T4, T5>(
    Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple)
{
    return tuple.Map(x => new Tuple<T1, T2, T3, T4, T5>(
        x.Item1.Item1.Item1.Item1,
        x.Item1.Item1.Item1.Item2,
        x.Item1.Item1.Item2,
        x.Item1.Item2,
        x.Item2));
}

It's safe to say that deeply nested generics are a super clear code smell, and this line: Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple downright reeks. Tuples in tuples in tuples.

Tiffany cringed at the code she had written, but this method lived in the TaskResultHelper class, and lived alongside methods with these signatures:

public static Result<Tuple<T1, T2, T3, T4>> FlatternTupleResult<T1, T2, T3, T4>(
    Result<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>> tuple)

public static Result<Tuple<T1, T2, T3>> FlatternTupleResult<T1, T2, T3>(
    Result<Tuple<Tuple<T1, T2>, T3>> tuple)

"This does fit in with the way the application currently works," Jim admitted. "I'm sorry."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cory DoctorowCome see me in Santa Cruz, San Francisco, Toronto and Maine!

I’m about to leave for a couple of weeks’ worth of lectures, public events and teaching, and you can catch me in many places: Santa Cruz (in conversation with XKCD’s Randall Munroe); San Francisco (for EFF’s Pioneer Awards); Toronto (for Word on the Street, Seeding Utopias and Resisting Dystopias and 6 Degrees); Newry, ME (Maine Library Association) and Portland, ME (in conversation with James Patrick Kelly).

Here’s the full itinerary:

Santa Cruz, September 11, 7PM: Bookshop Santa Cruz Presents an Evening with Randall Munroe, Santa Cruz Bible Church, 440 Frederick St, Santa Cruz, CA 95062

San Francisco, September 12, 6PM: EFF Pioneer Awards, with Adam Savage, William Gibson, danah boyd, and Oakland Privacy; Delancey Street Town Hall, 600 Embarcadero St., San Francisco, California, 94107

Houston and beyond, September 13-22: The Writing Excuses Cruise (sorry, sold out!)

Toronto, September 22: Word on the Street.

Toronto, September 23, 6PM-8PM: Cory Doctorow in Discussion: Seeding Utopias & Resisting Dystopias , with Jim Munroe, Madeline Ashby and Emily Macrae; Oakwood Village Library & Arts Centre, 341 Oakwood Avenue, Toronto, ON M6E 2W1

Toronto, September 24: 360: How to Make Sense at the 6 Degrees Conference, with Aude Favre, Ryan McMahon and Nanjala Nyabola, Art Gallery of Ontario.

Newry, ME, September 30: Keynote for the Maine Library Association Annual Conference, Sunday River Resort, Newry, ME

Portland, ME, September 30, 6:30PM-8PM: In Conversation With James Patrick Kelly, Main Library, Rines Auditorium.

I hope you can make it!

,

Sam VargheseSerena Williams loses another Grand Slam final

Serena Williams has fallen flat on her face again in her bid to equal Margaret Court’s record of 24 Grand Slam titles. This time Williams’ loss was to Canadian teenager Bianca Andreescu – and what makes it better is that she lost in straight sets, 6-3, 7-5.

Andreescu, 19, is a raw hand at the game; she has never played in the main draw of the US Open before. Last year, ranked 208, she was beaten in the first round by Olga Danilovic.

Williams has now lost four Grand Slam finals in pursuit of 24 wins: Angelique Kerber defeated her at Wimbledon in 2018, Naomi Osaka defeated her in the last US Open and Simona Halep accounted for Williams at Wimbledon this year. In all those finals, Williams was unable to win more than four games in any set. And now Andreescu has sent her packing.

Williams appears to be obsessed with being the winner of the most Grand Slams before she quits the game. But since returning from maternity leave, she has shown an inability to cope with the pressure of a final. Her last win was at the Australian Open in 2017, when she beat her sister, Venus, 6-4, 6-4.

Unlike many other players, Williams is obsessed with herself. Not for her the low-profile attitude cultivated by the likes of Roger Federer or Steffi Graf. The German woman, who dominated tennis for many years, was a great example for others.

In 1988, Graf thrashed Russian Natasha Zvereva 6-0, 6-0 in the final of the French Open in 34 minutes – the shortest and most one-sided Grand Slam final on record. And Zvereva had beaten the great Martina Navratilova en route to the final!

Yet Graf was low-key at the presentation. She did not lord it over Zvereva, who was in tears, and she did not indulge in triumphalism. One shudders to think of the way Williams would have carried on in such a situation. Graf was graciousness personified.

Williams is precisely the opposite. When she wins, it is because she played well. And when she loses, it is all because she did not play well. Her opponent only gets some reluctant praise.

It is time for Williams to do some serious soul-searching and consider whether it is time to bow out. This constant search for a 24th title — and I’m sure she will look for a 25th after that to be atop the winners’ list — is getting a little tiresome.

There is a time in life for everything as it says in the Biblical book of Ecclesiastes. Williams has had a good run but now her obsession with another win is getting on people’s nerves. There is much more to women’s tennis than Serena Williams – and it is time that she realised it as well and retired.

Planet DebianDirk Eddelbuettel: pinp 0.0.8: Bugfix

A new release of our pinp package is now on CRAN. pinp allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette

This release was spurred by one of those "CRAN package xyz" emails I received yesterday: processing of pinp-using vignettes was breaking at CRAN under the newest TeX Live release present on Debian testing as well as recent Fedora. The rticles package (which uses the PNAS style directly) apparently has a similar issue with PNAS.

Kurt was as usual extremely helpful in debugging, and we narrowed this down to an interaction with newer versions of the titlesec LaTeX package. So for now we did two things: upgrade our code reusing the PNAS class to the newest version of the PNAS class (as suggested by Norbert, whom I also roped in), but also copying in an older version of titlesec.sty (plus a support file). In the meantime, we are also looking into titlesec directly as Javier offered help—all this was a really decent example of open source firing on all cylinders. It is refreshing.

Because of the move to a newer PNAS version (which seems to clearly help with the occasionally odd formatting of floating blocks near the document end) I may have trampled on earlier extension pull requests. I will reach out to the authors of the PRs to work towards a better process with cleaner diffs, a process I should probably have set up earlier.

The NEWS entry for this release follows.

Changes in pinp version 0.0.8 (2019-09-08)

  • Two erroneous 'Provides' were removed from the pinp class.

  • The upquote package is now used to get actual (non-fancy) quotes in verbatim mode (Dirk fixing #75)

  • The underlying PNAS style was updated to the most recent v1.44 version of 2018-05-06 to avoid issues with newer TeXLive (Dirk in #79 fixing #77 and #78)

  • The new PNAS code brings some changes, e.g. watermark is no longer an option, but typesetting of paragraphs seems greatly improved. We may have stomped on an existing behavior; if you see something, please file an issue.

  • However, it also conflicts with the current TeX Live version of titlesec, so for now we copy in titlesec.sty (and a support file) from a prior version, just like we do for pinp.cls and jss.bst.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianShirish Agarwal: Depression, Harappa, Space program

I had been depressed, mostly since the election results came out. I was expecting, like many others, that Congress would come into power, but it didn't. With that came one bad thing after another. On politics (Jammu and Kashmir, Assam), both moves are from my POV inhumane, not just on citizenship but simply on a humane level. How people can behave like this with each other is beyond me. On the economic front, the less said the better. We are in the midst of a prolonged recession and don't see things turning out for the better any time soon. But as we have to come to terms with it and somehow live day-to-day, we are living. Because of the web, I came to know there are so many countries where similar things are happening right now, whether it is Britain (Brexit), South Africa or Brazil. In fact, the West Papua situation is similar in many ways to what happened in Kashmir. Of course each region has its own complexities, but it can safely be said that such events are happening all over. In every incident, in one way or another, 'The Other' is demonized.

One question I have often asked and have had no clear answer to: if Germany had known that Israel would be as big and strong as it is now, would they have done what they did? Had they known that Einstein, a Jew, would go on to change the face of science? Would America have been great to such a degree without Einstein? I was flabbergasted when I saw 'The Red Sea Diving Resort', which is based on a real-life operation by Mossad, as shared in the pictures after the movie.

Even among such blackness, I do see some hope. One thing which has been good is the rise of independent media. While the mainstream media has become completely ridiculous and, instead of questioning the Government, is toeing its line, independent media is trying to do what mainstream media should have been doing all along. I wouldn't say much about this, otherwise the whole blog post would be about independent media in India only. Maybe some other day 🙂

Harappan Civilization

One of the more interesting things, in videos, has been the gamification of evolution. There is a game called 'Ancestors, the Humankind Odyssey'. While sadly the game is only on the Epic Games Store, I have been following the gameplay as shared by GameRiotArmy. While almost all the people who are playing the game have divorced it from their personal beliefs because of the whole evolution and natural selection vs. creationism debate, the game itself feeds on the evolution and natural selection bits. The game is open-world in nature. The only quibble I have is that it should have started with the big bang, but then it probably would have been too long a game. I am sure that for many people, the complete gameplay, once the game is finished, would run to at least 20-30 episodes.

The Harappan bit comes in with the following, which came onto Twitter. While looking into it, I saw this as well. I think most of the libraries for it are already in Debian. The papers they are presenting can be found at this link for those interested. What is interesting is that the ancient DNA they found is supposed to be Dravidian. As can be seen from the Atlantic piece, it is pretty political in nature, hence the researchers are trying to just do their job. It does make for some interesting reading though.

Space, Chandrayaan 2 and Mars-sim

As far as space is concerned, it has been an eventful week. India crash-landed Chandrayaan 2's lander. While it is too early to say exactly what went wrong, and we are waiting for the scientists to confirm it, the mission came to the fore for the wrong reasons. The images of Mr. Modi and how he reacted before and after became the story rather than what Chandrayaan 2 will be doing. It also came to the fore that ISRO scientists' salaries have been cut, which is a saddening affair. I had already written before about how I had spoken to some ISRO scientists about merchandise, and how they had shared that merchandising only happens in Gujarat. It really seems sad.

The only thing we know as of now is that we lost communications when the lander was two and a half kilometres above the surface of the moon. I do hope there are lots of sensors which have captured data, but I also understand they can't put in many, probably due to problems like cross-talk as well as power constraints. I do hope that the lander is able to communicate with the orbiter and that the rover soon starts on its wheels. Even if it does not, there is a lot the orbiter will be able to do, as shared by this twitter thread. I shared the unroll from threadreaderapp. Although I do hope it does start talking and takes baby steps.

As far as mars-sim is concerned, a game I am helping with in my spare time, it is going to take a lot of time. We are hoping kotlin comes soon. I am thankful to the Java team, and hopefully the packages which are in NEW come to the Debian archive soonish and we have kotlin in Debian. I know this will help with the update to gradle as well, which is the reason that kotlin is coming in.

Planet DebianAndrew Cater: Chasing around installing CD images for Buster 10.1 ...

and having great fun, as ever, making a few mistakes and contributing mayhem and entropy to the CD release process. The Buster 10.1 point update was just released, thanks to RattusRattus, Sledge, Isy and Schweer (amongst others).

Waiting on the Stretch point release to try it all over again... I'd much rather be in Cambridge, but hey, you can't have everything.

Planet DebianDebian GSoC Kotlin project blog: Beginning of the end.

Work done.

Hey all, since the last post we have come a long way in packaging Kotlin 1.3.30. I am glad to announce that Kotlin 1.3.30's dependencies are completely packaged, and only refining work on intellij-community-java (which is the source package of the IntelliJ-related jars that Kotlin depends on) and on Kotlin itself remains.

I have roughly packaged Kotlin (the debian folder is pretty much done) and have pushed it here. The bootstrap package can be found here.

The links to all the dependencies of Kotlin 1.3.30 can be found in my previous blog pages, but I'll list them here for the convenience of the reader.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps (DONE: here)
3.->intellij-core -> https://github.com/JetBrains/intellij-community/tree/183.5153 (DONE: here)
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 (DONE: Used guava-19 from libguava-java)
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml(DONE:here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.->platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
11.->util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
12.->platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
13.->extensions -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
14.->jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE)
15.->trove4j 1.x -> https://github.com/JetBrains/intellij-deps-trove4j (DONE)
16.->proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)
17.->io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE)
18.->jline 3.0.3 --> https://github.com/jline/jline3/tree/jline-3.3.1 (DONE)
19.->protobuf-2.6.1 in jdk8 (DONE)
20.->com.jcabi:jcabi-aether:1.0 -> the file that requires this is commented out;can be seen here and here
21.->org.sonatype.aether:aether-api:1.13.1 -> the file that requires this is commented out;can be seen here and here

Important Notes.

It should be noted that at this point in time, 8th September 2019, the kotlin package only aims to package the jars generated by the ":dist" task of the Kotlin build scripts. This task builds the Kotlin home. So that's all we have; we don't have the kotlin-gradle-plugins or kotlinx or anything that isn't part of the Kotlin home.

It can be noted that the kotlin bootstrap package has the kotlin-gradle-plugin, kotlinx and kotlin-dsl jars. The eventual plan is to build the kotlin-gradle-plugins and kotlinx from the Kotlin source itself, and to build the Kotlin DSL from the Gradle source using Kotlin as a dependency for Gradle. After we do that we can get rid of the kotlin bootstrap package.

It should also be noted that this kotlin package, as of 8th September 2019, may not be perfect and might contain a ton of bugs. This is for two reasons: partly because I have ignored some code that depended on jcabi-aether (mentioned above with links to the commits), and mostly because the platform-api.jar and platform-impl.jar from intellij-community-idea are not the same as their upstream counterparts but contain only the minimum files required to make Kotlin compile without errors. I did this because the full jars needed packaging of new dependencies, and at this time it didn't look like it was worth it.

Work left to be done.

Now I believe most of the building blocks for packaging Kotlin are done, and what's left is to remove this pesky bootstrap. I believe this can be counted as the completion of my GSoC (which officially ended on August 26). The tasks left are as follows:

Major Tasks.

  1. Make kotlin build using just openjdk-11-jdk; right now it builds with openjdk-8-jdk and openjdk-11-jdk.
  2. Build kotlin-gradle-plugins.
  3. Build kotlinx.
  4. Build kotlindsl from gradle.
  5. Do 2,3 and 4 and make kotlin build without bootstrap.

Things that will help the kotlin effort.

  1. refine intellij-community-idea and do its copyright file properly.
  2. import kotlin 1.3.30 into a new debian-java-maintainers repository.
  3. move the kotlin changes (now maintained as git commits) to quilt patches. Link to kotlin -> here.
  4. do kotlin's copyright file.
  5. refine kotlin.

Author's Notes.

Hey guys, it's been a wonderful ride so far. I hope to keep doing this and maintain Kotlin in Debian. I am only a final-year student and my career fair starts this October 17th, 2019, so I have to prepare for coding interviews and start searching for jobs. So until late November 2019 I'll only be taking on the smaller tasks and doing them. Please note that I won't be doing it as fast as I used to up until now, since I am going to be a little busy during this period. I hope I can land a job that lets me keep doing this :).

I would love to take this section to thank _hc, ebourg, andrewsh and seamlik for helping and mentoring me through all this.

So if any of you want to help please kindly take on any of these tasks.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates.

,

Planet DebianDima Kogan: Are planes crashing any less than they used to?

Recently, I've been spending more of my hiking time looking for old plane crashes in the mountains. And I've been looking for data that helps me do that, for instance the last post. A question that came up in conversation is: "are crashes getting more rare?" And since I now have several datasets at my disposal, I can very easily come up with a crude answer.

The last post describes how to map the available NTSB reports describing aviation incidents. I was only using the post-1982 reports in that project, but here let's also look at the older reports. Today I can download both from their site:

$ wget https://app.ntsb.gov/avdata/Access/avall.zip
$ unzip avall.zip    # <------- Post 1982

$ wget https://app.ntsb.gov/avdata/PRE1982.zip
$ unzip PRE1982.zip  # <------- Pre 1982

I import the relevant parts of each of these into sqlite:

$ ( mdb-schema avall.mdb sqlite -T events;
    echo "BEGIN;";
    mdb-export -I sqlite avall.mdb events;
    echo "COMMIT;";
  ) | sqlite3 post1982.sqlite

$ ( mdb-schema PRE1982.MDB sqlite -T tblFirstHalf;
    echo "BEGIN;";
    mdb-export -I sqlite PRE1982.MDB tblFirstHalf;
    echo "COMMIT;";
  ) | sqlite3 pre1982.sqlite

And then I pull out the incident dates, and make a histogram:

$ cat <(sqlite3 pre1982.sqlite 'select DATE_OCCURRENCE from tblFirstHalf') \
      <(sqlite3 post1982.sqlite 'select ev_date from events') |
  perl -pe 's{^../../(..) .*}{$1 + (($1<40)? 2000: 1900)}e'   |
  feedgnuplot --histo 0 --binwidth 1 --xmin 1960 --xlabel Year \
              --title 'NTSB-reported incident counts by year'

ntsb-histogram-by-year.svg

I guess by that metric everything is getting safer. This clearly just counts NTSB incidents, and I don't do any filtering by the severity of the incident (not all reports describe crashes), but close-enough. The NTSB only deals with civilian incidents in the USA, and only after the early 1960s, it looks like. Any info about the military?

At one point I went through "Historic Aircraft Wrecks of Los Angeles County" by G. Pat Macha, and listed all the described incidents in that book. This histogram of that dataset looks like this:

macha-la-histogram-by-year.svg

Aaand there're a few internet resources that list out significant incidents in Southern California. For instance:

I visualize that dataset:

$ < [abc].htm perl -nE '/^ \s* 19(\d\d) | \d\d \s*(?:\s|-|\/)\s* \d\d \s*(?:\s|-|\/)\s* (\d\d)[^\d]/x || next; $y = 1900+($1 or $2); say $y unless $y==1910' |
  feedgnuplot --histo 0 --binwidth 5

carcomm-by-year.svg

So what did we learn? I guess overall crashes are becoming more rare. And there was a glut of military incidents in the 1940s and 1950s in Southern California (not surprising given all the military bases and aircraft construction facilities here at that time). And by one metric there were lots of incidents in the late 1970s/early 1980s, but they were much more interesting to this "carcomm" person than they were to Pat Macha.

CryptogramMassive iPhone Hack Targets Uyghurs

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google's Project Zero found a series of websites that had been using zero-day vulnerabilities to indiscriminately install malware on any iPhone that visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google's announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple's pushbacks.

Valerie AuroraWhy you shouldn’t trust people who support sexual predators

[CW: mention of child sexual abuse]

Should you trust people who support sexual predators? My answer is no. Here’s why:

Anyone who is ethically flexible enough to justify knowingly supporting a sexual predator is ethically flexible enough to justify harming the people who trust and support them.

This week’s news provides a useful case study.

After writing about how to avoid supporting sexual predators, I talked to some of the 250 people who signed a letter of support for Joi Ito to remain as head of MIT Media Lab. They signed this letter between August 26th and September 6th, when they were aware of the initial revelations that Ito and the ML had taken about $2 million from Jeffrey Epstein after his 2008 conviction for child sex offenses.

Here’s the dilemma these signatories were facing: Ito was powerful, and charming, and had inspired loyalty and support in them. The letter says, “We have experienced first-hand Joi’s integrity, and stand in testament to his overwhelmingly positive influence on our lives—and sincerely hope he remains our visionary director for many years to come.” When given evidence that Ito had knowingly supported a convicted serial child rapist, they chose to believe that there was some as-yet unknown explanation which would square with their image of Ito as a person of integrity and ethics. Others viewed taking Epstein’s money as some kind of moral imperative: the money was available, they could do good with it, no one was preventing them from taking it. They denied that Epstein accrued any advantage from the donations. Finally, many of the signatories also depend on Ito for a living; after all, as Upton Sinclair says, it is difficult to get a person to understand something when their salary depends upon their not understanding it.

These 250 people expected their public pledge of loyalty to be rewarded. Instead, on September 6th, we all learned that Ito and other ML staff had been deliberately covering up Epstein’s role in about $8 million in donations to the ML, in contravention of MIT’s explicit disqualification of Epstein as a donor. The article is filled with horrifying details, but most damning of all: Epstein visited the ML in 2015 to meet with Ito in person (a privilege accorded to him for his financial support). The women on the ML staff offered to help two extremely young women accompanying Epstein escape, fearing they were trafficked.

Ito knew Epstein was almost certainly still committing rape after 2008.

Needless to say, this is not what the signatories of the letter of support expected. Less than 24 hours after this news broke, the number of signatories had dropped from 250 to 228, and this disclaimer was added: “This petition was drafted by students on August 26th, 2019, and signed by members of the broader Media Lab community in the days that followed, to show their support for Joi and his apology. Given when community members added their names to this petition, their signatures should not be read as continued support of Joi staying on as Media Lab Director following the most recent revelations in the September 6th New Yorker article by Ronan Farrow.”

What happened? This is a phenomenon I’ve seen before, from my time working in the Linux kernel community. It’s this: Every nasty horror show of an abuser is surrounded by a ring of charming enablers who mediate between the abuser and the rest of the world. They make the abuser’s actions more palatable, smooth over the disagreements, invent explanations: the abuser can’t help it, the abuser needs help, the abuser is doing more good than harm, the abuse isn’t real abuse, we’ll always have an abuser so might as well stick with the abuser we know, etc. And around the immediate circle of enablers is a wider circle of dozens and hundreds of kind, trusting, supportive people who believe, in spite of all the evidence, that keeping the abuser and their enablers in power is ethically justified, in some way they aren’t privileged to understand. They don’t fully understand why, but they trust the people in power and keep working on faith.

That first level of charming enabler surrounding the abuser is doing that work with full knowledge of how terrible the abuser is, and they are rationalizing their decision in some way. It might be pure self-interest, it might be in service of some supposed greater goal, it might be a deep psychological need to believe that the abuser can be reformed. Whatever it is, it is a rationalization, and they are daily acting in a way that the surrounding circle of kind, trusting people would consider wildly unethical.

Here’s the key: you can’t trust anyone in that inner circle of enablers. They are people who are ethically flexible enough to rationalize supporting an abuser. They can easily rationalize screwing over the kind people who trust them, as Ito did with the 250 signatories of a letter that said, “We are here for you, we support you, we will forever be grateful for your impact on our lives.” His supporters are finding out the hard way that this kind of devotion and love is only one-way.

I am lucky enough to be in a position where I can refuse to knowingly support sexual predators. I also refuse to associate with people who support sexual predators because I know I can’t trust them to act ethically. I encourage you to join me.

Planet DebianAndreas Metzler: exim update

Testing users might want to manually pull the latest (4.92.1-3) upload of Exim from sid instead of waiting for regular migration to testing. It fixes a nasty vulnerability.

,

CryptogramFriday Squid Blogging: Squid Perfume

It's not perfume for squids. Nor is it perfume made from squids. It's a perfume called Squid, "inspired by life in the sea."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cory DoctorowTalking RADICALIZED and MAKERS on Writers Voice

The Writers Voice podcast just published their interview with me about Radicalized; as a bonus, they include my decade-old interview about Makers in the recording!

MP3

CryptogramThe Doghouse: Crown Sterling

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious -- and amusing -- examples of cryptographic "snake oil."

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells "TIME AI," "the world's first dynamic 'non-factor' based quantum AI encryption software," "utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs." Those sentence fragments tick three of my snake-oil warning signs -- from 1999! -- right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: "In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University's www.arXiv.org titled: 'Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.' The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today's current encryption framework by enabling the accurate prediction of prime numbers." Note the attempt to leverage Cornell's reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what's right isn't new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. "The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys." Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I'm too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: "That screams 'snake oil.' Bet you a gazillion dollars they have absolutely nothing of value -- and that none of their tech people have any cryptography expertise." But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back -- LinkedIn suggests he left in April -- and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: "The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?" I didn't see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here's a short video from the company, if you want to laugh along with everyone else at terms like "infinite wave conjugations" and "quantum AI encryption." Or you can read the company's press release about what happened at Black Hat, or Grant's Twitter feed.)

Grant has no cryptographic credentials. His bio -- on the website of something called the "Resonance Science Foundation" -- is all over the place: "He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry."

Grant's bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: "He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world's first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi." (Quasi-primes are real, and they're not new. They're numbers with only large prime factors, like RSA moduli.)

Near as I can tell, Grant's coauthor is the mathematician of the company: "Talal Ghannam -- a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine." Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: "Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green -- a musician who was 'musical director for Davy Jones of The Monkees' -- was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare."

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn't count.)

After the Black Hat talk, Grant -- and maybe some of those others -- sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allow them to break RSA. They're not publishing their results "because Crown Sterling's team felt it would be irresponsible to disclose discoveries that would break encryption." (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have "some very, very strong advisors to the company" who are "experts in the field of cryptography, truly experts." The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run, far, far, away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it's litigious as well. It has sued ten unnamed "John Doe" defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I'm sure it would think it'd result in all sorts of positive press coverage. (Although any press is good press, so maybe it's right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Worse Than FailureError'd: Does Your Child Say "WTF" at Home?

Abby wrote, "I'm tempted to tell the school that my child mostly speaks Sanskrit."


"First of all, I have 58,199 rewards points, so I'm a little bit past joining, second, I'm pretty sure Bing Rewards was rebranded as Microsoft Rewards a while ago, and third...SERPBubbleXL...wat?" writes Zander.


"I guess, for T-Mobile, time really is money," Greg writes.


Hans K. wrote, "I guess it's sort of fitting, but in a quiz about Generics in Java, I was left a little bit confused."


"Wait, so if I do, um, nothing, am I allowed to make further changes or any new appointment?" Jeff K. writes.


Soumya wrote, "Yeah...I'm not a big fan of Starbucks' reward program..."


[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

LongNowWhat a Prehistoric Monument Reveals about the Value of Maintenance

Members of Long Now London chalking the White Horse of Uffington, a 3000-year-old prehistoric hill figure in England. Photo by Peter Landers.

Imagine, if you will, that you could travel back in time three thousand years to the late Bronze Age, with a bird’s eye view of a hill near the present-day village of Uffington, in Oxfordshire, England. From that vantage, you’d see the unmistakable outlines of a white horse etched into the hillside. It is enormous — roughly the size of a football field — and visible from 20 miles away.

Now, fast forward. Bounding through the millennia, you’d see groups of people arrive from nearby villages at regular intervals, making their way up the hill to partake in good old fashioned maintenance. Using hammers and buckets of chalk, they scour the hillside to ensure the giant pictogram is not obscured. Without this regular maintenance, the hill figure would not last more than twenty years before becoming entirely eroded and overgrown. After the work is done, a festival is held.

Entire civilizations rise and fall. The White Horse of Uffington remains. Scribes and historians make occasional note of the hill figure, such as in the Welsh Red Book of Hergest in 01382 (“Near to the town of Abinton there is a mountain with a figure of a stallion upon it, and it is white. Nothing grows upon it.”) or by the Oxford archivist Francis Wise in 01736 (“The ceremony of scouring the Horse, from time immemorial, has been solemnized by a numerous concourse of people from all the villages roundabout.”). Easily recognizable by air, the horse is temporarily hidden by turf during World War II to confuse Luftwaffe pilots during bombing raids. Today, the National Trust preserves the site, overseeing a regular act of maintenance 3,000 years in the making.

Long Now London chalking the White Horse. Photo by Peter Landers.

Earlier this summer, members of Long Now London took a field trip to Uffington to participate in the time-honored ceremony. Christopher Daniel, the lead organizer of Long Now London, says the idea to chalk the White Horse came from a conversation with Sarah Davis of Longplayer about the maintenance of art, places and meaning across generations and millennia.

“Sitting there, performing the same task as people in 01819, 00819 and around 800 BCE, it is hard not to consider the types and quantities of meaning and ceremony that may have been attached to those actions in those times,” Daniel says.

The White Horse of Uffington in 01937. Photo by Paul Nash.

Researchers still do not know why the horse was made. Archaeologist David Miles, who was able to date the horse to the late Bronze Age using a technique called optical stimulated luminescence, told The Smithsonian that the figure of the horse might be related to early Celtic art, where horses are depicted pulling the chariot of the sun across the sky. From the bottom of the Uffington hill, the sun appears to rise behind the horse.

“From the start the horse would have required regular upkeep to stay visible,” Emily Cleaver writes in The Smithsonian. “It might seem strange that the horse’s creators chose such an unstable form for their monument, but archaeologists believe this could have been intentional. A chalk hill figure requires a social group to maintain it, and it could be that today’s cleaning is an echo of an early ritual gathering that was part of the horse’s original function.”

In her lecture at Long Now earlier this summer, Monica L. Smith, an archaeologist at UCLA, highlighted the importance of ritual sites like Stonehenge and Göbekli Tepe in the eventual formation of cities.

“The first move towards getting people into larger and larger groups was probably something that was a ritual impetus,” she said. “The idea of coming together and gathering with a bunch of strangers was something that is evident in the earliest physical ritual structures that we have in the world today.”

Photo by Peter Landers.

For Christopher Daniel, the visit to Uffington underscored that there are different approaches to making things last. “The White Horse requires rather more regular maintenance than somewhere like Stonehenge,” he said. “But thankfully the required techniques and materials are smaller, simpler and much closer to hand.”

Though it requires considerably fewer resources to maintain, and is more symbolic than functional, the Uffington White Horse nonetheless offers a lesson in maintaining the infrastructure of cities today. “As humans, we are historically biased against maintenance,” Smith said in her Long Now lecture. “And yet that is exactly what infrastructure needs.”

The Golden Gate Bridge in San Francisco. Photo by Rich Niewiroski Jr.

When infrastructure becomes symbolic to a built environment, it is more likely to be maintained. Smith gave the example of San Francisco’s Golden Gate Bridge to illustrate this point. Much like the White Horse, the Golden Gate Bridge undergoes a willing and regular form of maintenance. “Somewhere between five to ten thousand gallons of paint a year, and thirty painters, are dedicated to keeping the Golden Gate Bridge golden,” Smith said.

Photos by Peter Landers.

For members of Long Now London, chalking the White Horse revealed that participating in acts of maintenance can be deeply meaningful. “It felt at once both quite ordinary and utterly sublime,” Daniel said. “The physical activity itself is in many ways straightforward. It is the context and history that elevate those actions into what we found to be a profound experience. It was also interesting to realize that on some level it does not matter why we do this. What matters most is that it is done.”

Daniel hopes Long Now London will carry out this “secular pilgrimage” every year. 

“Many of the oldest protected routes across Europe are routes of pilgrimage,” he says. “They were stamped out over centuries by people carrying or searching for meaning. I want the horse chalking to carry meaning across both time and space. If even just a few of us go to the horse each year with this intent, it becomes a tradition. Once something becomes a tradition, it attracts meaning, year by year, generation by generation. On this first visit to the horse, one member brought his kids. A couple of other members said they want to bring theirs in the future. This relatively simple act becomes something we do together—something we remember as much for the communal spirit as for the activity itself. In so doing, we layer new meaning onto old as we bash new chalk into old.”


Learn More

Worse Than FailureCodeSOD: Give Your Date a Workout

Bob E inherited a site which helps amateur sports clubs plan their recurring workouts/practices during the season. To do this, given the start date of the season, and the number of weeks, it needs to figure out all of the days in that range.

function GenWorkoutDates()
{
   global $SeasonStartDate, $WorkoutDate, $WeeksInSeason;

   $TempDate = explode("/", $SeasonStartDate);

   for ($i = 1; $i <= $WeeksInSeason; $i++)
   {
     for ($j = 1; $j <= 7; $j++)
     {
       $MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

       $WorkoutDate[$i][$j] = $MonthName . " " . $TempDate[1] . "  ";
       $TempDate[1] += 1;

       switch ( $TempDate[0] )
       {
         case 9:
         case 4:
         case 6:
         case 11:
           $DaysInMonth = 30;
           break;

         case 2:
           $DaysInMonth = 28;

           switch ( $TempDate[2] )
           {
             case 2012:
             case 2016:
             case 2020:
               $DaysInMonth = 29;
               break;

             default:
               $DaysInMonth = 28;
               break;
           }

           break;

         default:
           $DaysInMonth = 31;
           break;
       }

       if ($TempDate[1] > $DaysInMonth)
       {
         $TempDate[1] = 1;
         $TempDate[0] += 1;
         if ($TempDate[0] > 12)
         {
           $TempDate[0] = 1;
           $TempDate[2] += 1;
         }
       }
     }
   }
}

I do enjoy that PHP’s string-splitting function is called explode. That’s not a WTF. More functions should be called explode.

This method of figuring out the month name, though:

$MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

I want to hate it, but I’m impressed with it.

From there, we have lovely hard-coded leap years, the “Thirty days has September…” poem implemented as a switch statement, and then that lovely rollover calculation for the end of a month (and the end of the year).

“I’m not a PHP developer,” Bob writes. “But I know how to use Google.” After some googling, he replaced this block of code with a 6-line version that uses built-in date handling functions.
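The article doesn't reproduce Bob's replacement, but a minimal sketch of the idea, using PHP's built-in DateTime class and the variable names from the original (and assuming $SeasonStartDate looks like "9/1/2019"), might look like this:

// Parse the season start date, then just walk forward one day at a time;
// DateTime handles month lengths and leap years for us.
$date = DateTime::createFromFormat("m/d/Y", $SeasonStartDate);
for ($i = 1; $i <= $WeeksInSeason; $i++) {
    for ($j = 1; $j <= 7; $j++) {
        $WorkoutDate[$i][$j] = $date->format("M j") . "  ";
        $date->modify("+1 day");
    }
}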

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Cory DoctorowCritical essays (including mine) discuss Toronto’s plan to let Google build a surveillance-based “smart city” along its waterfront

Sidewalk Labs is Google’s sister company that sells “smart city” technology; its showcase partner is Toronto, my hometown, where it has made a creepy shitshow out of its freshman outing, from the mass resignations of its privacy advisors to the underhanded way it snuck in the right to take over most of the lakeshore without further consultations (something the company straight up lied about after they were outed). Unsurprisingly, the city, the province, the country, and the company are all being sued over the plan.

Toronto Life has run a great, large package of short essays by proponents and critics of the project, from Sidewalk Labs CEO Dan Doctoroff (no, really, that’s his name) to former privacy commissioner Ann Cavoukian (who evinces an unfortunate belief in data-deidentification) to city councillor and former Greenpeace campaigner Gord Perks to urban guru Richard Florida to me.

I wrote about the prospect that a city could be organized around the principle that people are sensors, not things to be sensed — that is, imagine an internet of things that doesn’t relegate the humans it notionally serves to the status of “thing.”

Our cities are necessarily complex, and they benefit from sensing and control. From census tracts to John Snow’s 19th-century map of central London cholera infections, we have been gathering telemetry on the performance of our cities in order to tune and optimize them for hundreds of years. As cities advance, they demand ever-higher degrees of sensing and actuating. But smart cities have to be built by cities themselves, democratically controlled and publicly owned. Reinventing company towns with high-tech fillips is not a path to a brighter future. It’s a new form of digital feudalism.

Humans are excellent sensors. We’re spectacular at deciding what we want for dinner, which seat on the subway we prefer, which restaurants we’re likely to enjoy and which strangers we want to talk to at parties. What if people were the things that smart cities were designed to serve, rather than the data that smart cities lived to process? Here’s how that could work. Imagine someone ripped all the surveillance out of Android and all the anti-user controls out of iOS and left behind nothing on your phone but code that serves you, not manufacturers or advertisers. It could still collect data—where you are, who you talk to, what you say—but it would be a roach motel for that data, which would check in to your device but not check out. It wouldn’t be available to third parties without your ongoing consent.

A phone that knows about you—but doesn’t tell anyone what it knows about you—would be your interface to a better smart city. The city’s systems could stream data to your device, which could pick the relevant elements out of the torrent: the nearest public restroom, whether the next bus has a seat for you, where to get a great sandwich.


A smart city should serve its users, not mine their data
[Cory Doctorow/Toronto Life]

The Sidewalk Wars [Toronto Life]

(Image: Cryteria, CC-BY, modified)

CryptogramCredit Card Privacy

Good article in the Washington Post on all the surveillance associated with credit card use.

Worse Than FailureCodeSOD: UnINTentional Errors

Data type conversions are one of those areas where we have rich, well-supported, well-documented features built into most languages. Thus, we also have endless attempts for people to re-implement them. Or worse, wrap a built-in method in a way which makes everything less clear.

Mindy encountered this.

/* For converting (KEY_MSG_INPUT) to int format. */
public static int numberToIntFormat(String value) {
  int returnValue = -1;    	
  if (!StringUtil.isNullOrEmpty(value)) {
    try {
      int temp = Integer.parseInt(value);
      if (temp > 0) {
        returnValue = temp;
      }
    } catch (NumberFormatException e) {}
  }    	
  return returnValue;
}

The isNullOrEmpty check is arguably pointless, here. Any invalid input string, including null or empty ones, would cause parseInt to throw a NumberFormatException, which we’re already catching. Of course, we’re catching and ignoring it.

That’s assuming that StringUtil.isNullOrEmpty does what we think it does, since while there are third party Java libraries that offer that functionality, it’s not a built-in class (and do we really think the culprit here was using libraries?). Who knows what it actually does.

And, another useful highlight: note how we check if (temp > 0)? Well, this is a problem. Not only does the downstream code handle negative numbers, -1 is a perfectly reasonable value, which means that when this method takes -10 and returns -1, what it has actually done is pass incorrect but valid data back up the chain. And since any errors were swallowed, no one knows if this was intentional or not.

This method wasn’t called in any context relating to KEY_MSG_INPUT, but it was called everywhere, as it’s one of those utility methods that finds new uses any time someone wants to convert a string into a number. Due to its use in pretty much every module, fixing this is considered a "high risk" change, and has been scheduled for sometime in the 2020s.
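For contrast, a version that reports failure explicitly instead of overloading -1 as a sentinel could return an OptionalInt. This is a sketch of the idea (the method name here is invented), not the fix that has been scheduled:

import java.util.OptionalInt;

public static OptionalInt tryParseInt(String value) {
  try {
    // parseInt already throws NumberFormatException for null,
    // empty, and non-numeric input, so no pre-check is needed.
    return OptionalInt.of(Integer.parseInt(value));
  } catch (NumberFormatException e) {
    // Let the caller decide what an unparseable value means.
    return OptionalInt.empty();
  }
}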

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Krebs on Security‘Satori’ IoT Botnet Operator Pleads Guilty

A 21-year-old man from Vancouver, Wash. has pleaded guilty to federal hacking charges tied to his role in operating the “Satori” botnet, a crime machine powered by hacked Internet of Things (IoT) devices that was built to conduct massive denial-of-service attacks targeting Internet service providers, online gaming platforms and Web hosting companies.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

Kenneth Currin Schuchman pleaded guilty to one count of aiding and abetting computer intrusions. Between July 2017 and October 2018, Schuchman was part of a conspiracy with at least two other unnamed individuals to develop and use Satori in large scale online attacks designed to flood their targets with so much junk Internet traffic that the targets became unreachable by legitimate visitors.

According to his plea agreement, Schuchman — who went by the online aliases “Nexus” and “Nexus-Zeta” — worked with at least two other individuals to build and use the Satori botnet, which harnessed the collective bandwidth of approximately 100,000 hacked IoT devices by exploiting vulnerabilities in various wireless routers, digital video recorders, Internet-connected security cameras, and fiber-optic networking devices.

Satori was originally based on the leaked source code for Mirai, a powerful IoT botnet that first appeared in the summer of 2016 and was responsible for some of the largest denial-of-service attacks ever recorded (including a 620 Gbps attack that took KrebsOnSecurity offline for almost four days).

Throughout 2017 and into 2018, Schuchman worked with his co-conspirators — who used the nicknames “Vamp” and “Drake” — to further develop Satori by identifying and exploiting additional security flaws in other IoT systems.

Schuchman and his accomplices gave new monikers to their IoT botnets with almost each new improvement, rechristening their creations with names including “Okiru” and “Masuta,” and infecting up to 700,000 compromised systems.

The plea agreement states that the object of the conspiracy was to sell access to their botnets to those who wished to rent them for launching attacks against others, although it’s not clear to what extent Schuchman and his alleged co-conspirators succeeded in this regard.

Even after he was indicted in connection with his activities in August 2018, Schuchman created a new botnet variant while on supervised release. At the time, Schuchman and Drake had something of a falling out, and Schuchman later acknowledged using information gleaned by prosecutors to identify Drake’s home address for the purposes of “swatting” him.

Swatting involves making false reports of a potentially violent incident — usually a phony hostage situation, bomb threat or murder — to prompt a heavily-armed police response to the target’s location. According to his plea agreement, the swatting that Schuchman set in motion in October 2018 resulted in “a substantial law enforcement response at Drake’s residence.”

As noted in a September 2018 story, Schuchman was not exactly skilled in the art of obscuring his real identity online. For one thing, the domain name used as a control server to synchronize the activities of the Satori botnet was registered to the email address nexuczeta1337@gmail.com. That domain name was originally registered to a “ZetaSec Inc.” and to a “Kenny Schuchman” in Vancouver, Wash.

People who operate IoT-based botnets maintain and build up their pool of infected IoT systems by constantly scanning the Internet for other vulnerable systems. Schuchman’s plea agreement states that when he received abuse complaints related to his scanning activities, he responded in his father’s identity.

“Schuchman frequently used identification devices belonging to his father to further the criminal scheme,” the plea agreement explains.

While Schuchman may be the first person to plead guilty in connection with Satori and its progeny, he appears to be hardly the most culpable. Multiple sources tell KrebsOnSecurity that Schuchman’s co-conspirator Vamp is a U.K. resident who was principally responsible for coding the Satori botnet, and as a minor was involved in the 2015 hack against U.K. phone and broadband provider TalkTalk.

Multiple sources also say Vamp was principally responsible for the 2016 massive denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

The investigation into Schuchman and his alleged co-conspirators is being run out of the FBI field office in Alaska, spearheaded by some of the same agents who helped track down and ultimately secure guilty pleas from the original co-authors of the Mirai botnet.

It remains to be seen what kind of punishment a federal judge will hand down for Schuchman, who reportedly has been diagnosed with Asperger Syndrome and autism. The maximum penalty for the single criminal count to which he’s pleaded guilty is 10 years in prison and fines of up to $250,000.

However, it seems likely his sentencing will fall well short of that maximum: Schuchman’s plea deal states that he agreed to a recommended sentence “at the low end of the guideline range as calculated and adopted by the court.”

Cory DoctorowPodcast: Barlow’s Legacy

Even though I’m at Burning Man, I’ve snuck out an extra scheduled podcast episode (MP3): Barlow’s Legacy is my contribution to the Duke Law and Tech Review’s special edition, THE PAST AND FUTURE OF THE INTERNET: Symposium for John Perry Barlow:

“Who controls the past controls the future; who controls the present controls the past.”1

And now we are come to the great techlash, long overdue and desperately needed. With the techlash comes the political contest to assemble the narrative of What Just Happened and How We Got Here, because “Who controls the past controls the future. Who controls the present controls the past.” Barlow is a key figure in that narrative, and so defining his legacy is key to the project of seizing the future.

As we contest over that legacy, I will here set out my view on it. It’s an insider’s view: I met Barlow first through his writing, and then as a teenager on The WELL, and then at a dinner in London with Electronic Frontier Foundation (EFF) attorney Cindy Cohn (now the executive director of EFF), and then I worked with him, on and off, for more than a decade, through my work with EFF. He lectured to my students at USC, and wrote the introduction to one of my essay collections, and hung out with me at Burning Man, and we spoke on so many bills together, and I wrote him into one of my novels as a character, an act that he blessed. I emceed events where he spoke and sat with him in his hospital room as he lay dying. I make no claim to being Barlow’s best or closest friend, but I count myself mightily privileged to have been a friend, a colleague, and a protege of his.

There is a story today about “cyber-utopians” told as a part of the techlash: Once, there were people who believed that the internet would automatically be a force for good. They told us all to connect to one another and fended off anyone who sought to rein in the power of the technology industry, naively ushering in an era of mass surveillance, monopolism, manipulation, even genocide. These people may have been well-intentioned, but they were smart enough that they should have known better, and if they hadn’t been so unforgivably naive (and, possibly, secretly in the pay of the future monopolists) we might not be in such dire shape today.

MP3

Cory DoctorowThey told us DRM would give us more for less, but they lied

My latest Locus Magazine column is DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

For 40 years, University of Chicago-style market orthodoxy has promised widespread prosperity as a natural consequence of turning everything into unfettered, unregulated, monopolistic businesses. For 40 years, everyone except the paymasters who bankrolled the University of Chicago’s priesthood has gotten poorer.

Today, DRM stands as a perfect example of everything terrible about monopolies, surveillance, and shareholder capitalism.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the custom­ers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, noth­ing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any lon­ger, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM serv­ers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittle­ness.

DRM Broke Its Promise [Locus/Cory Doctorow]

(Image: Cryteria, CC-BY, modified)

,

Krebs on SecuritySpam In your Calendar? Here’s What to Do.

Many spam trends are cyclical: Spammers tend to switch tactics when one method of hijacking your time and attention stops working. But periodically they circle back to old tricks, and few spam trends are as perennial as calendar spam, in which invitations to click on dodgy links show up unbidden in your digital calendar application from Apple, Google and Microsoft. Here’s a brief primer on what you can do about it.

Image: Reddit

Over the past few weeks, a good number of readers have written in to say they feared their calendar app or email account was hacked after noticing a spammy event had been added to their calendars.

The truth is, all that a spammer needs to add an unwelcome appointment to your calendar is the email address tied to your calendar account. That’s because the calendar applications from Apple, Google and Microsoft are set by default to accept calendar invites from anyone.

Calendar invites from spammers run the gamut from ads for porn or pharmacy sites, to claims of an unexpected financial windfall or “free” items of value, to outright phishing attacks and malware lures. The important thing is that you don’t click on any links embedded in these appointments. And resist the temptation to respond to such invitations by selecting “yes,” “no,” or “maybe,” as doing so may only serve to guarantee you more calendar spam.

Fortunately, there are a few simple steps you can take that should help minimize this nuisance. To stop events from being automatically added to your Google calendar:

-Open the Calendar application, and click the gear icon to get to the Calendar Settings page.
-Under “Event Settings,” change the default setting to “No, only show invitations to which I have responded.”

To prevent events from automatically being added to your Microsoft Outlook calendar, click the gear icon in the upper right corner of Outlook to open the settings menu, and then scroll down and select “View all Outlook settings.” From there:

-Click “Calendar,” then “Events from email.”
-Change the default setting for each type of reservation settings to “Only show event summaries in email.”

For Apple calendar users, log in to your iCloud.com account, and select Calendar.

-Click the gear icon in the lower left corner of the Calendar application, and select “Preferences.”
-Click the “Advanced” tab at the top of the box that appears.
-Change the default setting to “Email to [your email here].”

Making these changes will mean that any events your email provider previously added to your calendar automatically by scanning your inbox for certain types of messages from common events — such as making hotel, dining, plane or train reservations, or paying recurring bills — may no longer be added for you. Spammy calendar invitations may still show up via email; in the event they do, make sure to mark the missives as spam.

Have you experienced a spike in calendar spam of late? Or maybe you have another suggestion for blocking it? If so, sound off in the comments below.

Worse Than FailureCodeSOD: Boxing with the InTern

A few years ago, Naomi did an internship with Initech. Before her first day, she was very clear on what her responsibilities would be: she'd be on a team modernizing an older product called "Gem" (no relation to Ruby libraries).

By the time her first day rolled around, however, Initech had new priorities. There were a collection of fires on some hyperspecific internal enterprise tool, and everyone was running around and screaming about the apocalypse while dealing with that. Except Naomi, because nobody had any time to bring the intern up to speed on this disaster. Instead, she was given a new priority: just maintain Gem. And no, she wouldn't have a mentor. For the next six months, Naomi was the Gem support team.

"Start by looking at the code quality metrics," was the advice she was given.

It was bad advice. First, while Initech had installed an automated code review tool in their source control system, they weren't using the tool. It had started crashing instead of outputting a report six years ago. Nobody had noticed, or perhaps nobody had cared. Or maybe they just didn't like getting bad news, because once Naomi had the tool running again, the report was full of bad news.

A huge mass of the code was reimplemented copies of the standard library, "tuned for performance", which meant instead of a sensible implementation it was a pile of 4,000 line functions wrapping around massive switch statements. The linter didn't catch that they were parsing XML using regular expressions, but Naomi spotted that and wisely decided not to touch that bit.

What she did find, and fix, was this pattern:

private Boolean isSided;
// dozens more properties

public GemGeometryEntryPoint(GemGeometryEntryPoint gemGeometryEntryPoint) {
    this.isSided = gemGeometryEntryPoint.isSided == null
        ? null
        : new Boolean(gemGeometryEntryPoint.isSided);
    // and so on, for those dozens of properties
}

Java has two boolean types: the Boolean reference type and the boolean primitive type. The boolean is not a full-fledged object, and thus is smaller in memory and faster to allocate. The Boolean is a full class implementation, with all the overhead that entails. A Java developer will generally need to use both; if you want a list of boolean values, for example, you need to "box" the primitives into Boolean objects.

I say generally need both, because Naomi's predecessors decided that worrying about boxing was complicated, so they only used the reference types. There wasn't a boolean or an int to be found, just Booleans and Integers. Maybe they just thought "primitive" meant "legacy"?

You can't unbox a null: passing a null Boolean to the Boolean(boolean) constructor throws a NullPointerException when the argument is unboxed. Thus, the ternary check in the code above. At no point did anyone think that "hey, we're doing a null check on pretty much every variable access" meant that there was something wrong in the code.

The bright side to this whole thing was that the unit tests were exemplary. A few hours with sed meant that Naomi was able to switch most everything to primitive types, confirm that she hadn't introduced any regressions in the process, and even demonstrated that using primitives greatly improved performance, as it cut down on heap memory allocations. The downside was replacing all those ternaries with lines like this.isSided = other.gemGeometryEntryPoint.isSided didn't look nearly as impressive.
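After the switch to primitives, each copy collapses to a plain assignment. A sketch of the shape of the result, with the class reduced to one field for illustration:

private boolean isSided;
// dozens more properties, now primitives

public GemGeometryEntryPoint(GemGeometryEntryPoint other) {
    // Primitive copies: no null checks, no heap allocation.
    this.isSided = other.isSided;
    // and so on, for those dozens of properties
}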

Of course, changing that many lines of code in a single commit triggered some alarms, which precipitated a mini-crisis as no one knew what to do when the intern submitted a 15,000 line commit.

Naomi adds: "Maybe null was supposed to represent FILE_NOT_FOUND?"

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Krebs on SecurityFeds Allege Adconion Employees Hijacked IP Addresses for Spamming

Federal prosecutors in California have filed criminal charges against four employees of Adconion Direct, an email advertising firm, alleging they unlawfully hijacked vast swaths of Internet addresses and used them in large-scale spam campaigns. KrebsOnSecurity has learned that the charges are likely just the opening salvo in a much larger, ongoing federal investigation into the company’s commercial email practices.

Prior to its acquisition, Adconion offered digital advertising solutions to some of the world’s biggest companies, including Adidas, AT&T, Fidelity, Honda, Kohl’s and T-Mobile. Amobee, the Redwood City, Calif. online ad firm that acquired Adconion in 2014, bills itself as the world’s leading independent advertising platform. The CEO of Amobee is Kim Perell, formerly CEO of Adconion.

In October 2018, prosecutors in the Southern District of California named four Adconion employees — Jacob Bychak, Mark Manoogian, Petr Pacas, and Mohammed Abdul Qayyum — in a ten-count indictment on charges of conspiracy, wire fraud, and electronic mail fraud. All four men have pleaded not guilty to the charges, which stem from a grand jury indictment handed down in June 2017.

‘COMPANY A’

The indictment and other court filings in this case refer to the employer of the four men only as “Company A.” However, LinkedIn profiles under the names of three of the accused show they each work(ed) for Adconion and/or Amobee.

Mark Manoogian is an attorney whose LinkedIn profile states that he is director of legal and business affairs at Amobee, and formerly was senior business development manager at Adconion Direct; Bychak is listed as director of operations at Adconion Direct; Qayyum’s LinkedIn page lists him as manager of technical operations at Adconion. A statement of facts filed by the government indicates Petr Pacas was at one point director of operations at Company A (Adconion).

According to the indictment, between December 2010 and September 2014 the defendants engaged in a conspiracy to identify or pay to identify blocks of Internet Protocol (IP) addresses that were registered to others but which were otherwise inactive.

The government alleges the men sent forged letters to an Internet hosting firm claiming they had been authorized by the registrants of the inactive IP addresses to use that space for their own purposes.

“Members of the conspiracy would use the fraudulently acquired IP addresses to send commercial email (‘spam’) messages,” the government charged.

HOSTING IN THE WIND

Prosecutors say the accused were able to spam from the purloined IP address blocks after tricking the owner of Hostwinds, an Oklahoma-based Internet hosting firm, into routing the fraudulently obtained IP addresses on their behalf.

Hostwinds owner Peter Holden was the subject of a 2015 KrebsOnSecurity story titled, “Like Cutting Off a Limb to Save the Body,” which described how he’d initially built a lucrative business catering mainly to spammers, only to later have a change of heart and aggressively work to keep spammers off of his network.

That a case of such potential import for the digital marketing industry has escaped any media attention for so long is unusual but not surprising given what’s at stake for the companies involved and for the government’s ongoing investigations.

Adconion’s parent Amobee manages ad campaigns for some of the world’s top brands, and has every reason not to call attention to charges that some of its key employees may have been involved in criminal activity.

Meanwhile, prosecutors are busy following up on evidence supplied by several cooperating witnesses in this and a related grand jury investigation, including a confidential informant who received information from an Adconion employee about the company’s internal operations.

THE BIGGER PICTURE

According to a memo jointly filed by the defendants, “this case spun off from a larger ongoing investigation into the commercial email practices of Company A.” Ironically, this memo appears to be the only one of several dozen documents related to the indictment that mentions Adconion by name (albeit only in a series of footnote references).

Prosecutors allege the four men bought hijacked IP address blocks from another man tied to this case who was charged separately. This individual, Daniel Dye, has a history of working with others to hijack IP addresses for use by spammers.

For many years, Dye was a system administrator for Optinrealbig, a Colorado company that relentlessly pimped all manner of junk email, from mortgage leads and adult-related services to counterfeit products and Viagra.

Optinrealbig’s CEO was the spam king Scott Richter, who later changed the name of the company to Media Breakaway after being successfully sued for spamming by AOL, Microsoft, MySpace, and the New York Attorney General’s Office, among others. In 2008, this author penned a column for The Washington Post detailing how Media Breakaway had hijacked tens of thousands of IP addresses from a defunct San Francisco company for use in its spamming operations.

Dye has been charged with violations of the CAN-SPAM Act. A review of the documents in his case suggest Dye accepted a guilty plea agreement in connection with the IP address thefts and is cooperating with the government’s ongoing investigation into Adconion’s email marketing practices, although the plea agreement itself remains under seal.

Lawyers for the four defendants in this case have asserted in court filings that the government’s confidential informant is an employee of Spamhaus.org, an organization that many Internet service providers around the world rely upon to help identify and block sources of malware and spam.

Interestingly, in 2014 Spamhaus was sued by Blackstar Media LLC, a bulk email marketing company and subsidiary of Adconion. Blackstar's owners sued Spamhaus for defamation after Spamhaus placed them at the top of its list of the world's Top 10 worst spammers. Blackstar later dropped the lawsuit and agreed to pay Spamhaus' legal costs.

Representatives for Spamhaus declined to comment for this story. Responding to questions about the indictment of Adconion employees, Amobee's parent company SingTel referred inquiries to Amobee, which issued a brief statement saying, "Amobee has fully cooperated with the government's investigation of this 2017 matter which pertains to alleged activities that occurred years prior to Amobee's acquisition of the company."

ONE OF THE LARGEST SPAMMERS IN HISTORY?

It appears the government has been investigating Adconion's email practices since at least 2015, and possibly as early as 2013. The very first result in an online search for the words "Adconion" and "spam" returns a Microsoft Powerpoint document that was presented alongside this talk at an ARIN meeting in October 2016. ARIN stands for the American Registry for Internet Numbers, and it handles IP address allocations for entities in the United States, Canada and parts of the Caribbean.

As the screenshot above shows, that Powerpoint deck was originally named “Adconion – Arin,” but the file has since been renamed. That is, unless one downloads the file and looks at the metadata attached to it, which shows the original filename and that it was created in 2015 by someone at the U.S. Department of Justice.

Slide #8 in that Powerpoint document references a case example of an unnamed company (again, “Company A”), which the presenter said was “alleged to be one of the largest spammers in history,” that had hijacked “hundreds of thousands of IP addresses.”

A slide from an ARIN presentation in 2016 that referenced Adconion.

There are fewer than four billion IPv4 addresses available for use, and the vast majority of them have already been allocated. In recent years, this global shortage has turned IP addresses into a commodity, with a single address fetching between $15 and $25 on the open market.

The dearth of available IP addresses has created boom times for those engaged in the acquisition and sale of IP address blocks. It also has emboldened scammers and spammers who specialize in absconding with and spamming from dormant IP address blocks without permission from the rightful owners.

In May, KrebsOnSecurity broke the news that Amir Golestan — the owner of a prominent Charleston, S.C. tech company called Micfo LLC — had been indicted on criminal charges of fraudulently obtaining more than 735,000 IP addresses from ARIN and reselling the space to others.

KrebsOnSecurity has since learned that for several years prior to 2014, Adconion was one of Golestan’s biggest clients. More on that in an upcoming story.

Worse Than FailureClassic WTF: Hyperlink 2.0

It's Labor Day in the US, where we celebrate the workers of the world by having a barbecue. Speaking of work, in these days of web frameworks and miles of unnecessary JavaScript to do basic things on the web, let's look back at a simpler time, where we still used server-side code and miles of unnecessary JavaScript to do basic things on the web. Original. --Remy

For those of you who haven't upgraded to Web 2.0 yet, today's submission from Daniel is a perfect example of what you're missing out on. Since the beginning of the Web (the "1.0 days"), website owners have always wanted to know who was visiting their website, how often, and when. Back then, this was accomplished by recording each website "hit" in a log file and running a report on the log later.

But the problem with this method in Web 2.0 is that people don't use logs anymore; they use blogs, and everyone knows that blogs are a pretty stupid way of tracking web traffic. Fortunately, Daniel's colleagues developed an elegant, clever, and -- most importantly -- "AJAX" way of solving this problem. Instead of being coded in HTML pages, all hyperlinks are assigned a numeric identifier and kept in a database table. This identifier is then used on the HTML pages within an anchor tag:

<a href="Javascript: followLink(124);">View Products</a>

When the user clicks on the hyperlink, the followLink() Javascript function is executed and the following occur (a reconstruction of the code follows the list):

  • a translucent layer (DIV) is placed over the entire page, causing it to appear "grayed out", and ...
  • a "please wait" layer is placed on top of that, with an animated pendulum swinging back and forth, then ...
  • the XmlHttpRequest object is used to call the "GetHyperlink" web service which, in turn ...
  • opens a connection to the database server to ...
  • log the request in the RequestedHyperlinks table and ...
  • retrieves the URL from the Hyperlinks table, then ...
  • returns it to the client script, which then ...
  • sets the window.location property to the URL retrieved, which causes ...
  • the user to be redirected to the appropriate page
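
For the morbidly curious, here is a minimal sketch of what that client-side dance plausibly looks like. Everything in it is assumed from the description above -- the GetHyperlink endpoint, the overlay helper, even the parameter name -- so treat it as a reconstruction, not the original code.

// Hypothetical reconstruction of followLink(), for illustration only.
// Assumes a "GetHyperlink" endpoint that logs the click and returns the URL.
function showBusyOverlay(): void {
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;background:rgba(128,128,128,0.5);text-align:center";
  overlay.textContent = "Please wait\u2026"; // pendulum animation not included
  document.body.appendChild(overlay);
}

async function followLink(linkId: number): Promise<void> {
  showBusyOverlay(); // gray out the page before doing... anything at all
  // One server round trip -- and a database write and read -- per click.
  const response = await fetch(`/GetHyperlink?id=${linkId}`);
  const url = await response.text();
  // Only now does the browser do what a plain <a href> would have done.
  window.location.href = url;
}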

Now that's two-point-ohey.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

LongNowThe Amazon is not the Earth’s Lungs

An aerial view of forest fire of the Amazon taken with a drone is seen from an Indigenous territory in the state of Mato Grosso, in Brazil, August 23, 2019, obtained by Reuters on August 25, 2019. Marizilda Cruppe/Amnesty International/Handout via REUTERS.

In the wake of the troubling reports about fires in Brazil’s Amazon rainforest, much misinformation spread across social media. On Facebook posts and news reports, the Amazon was described as being the “lungs of the Earth.” Peter Brannen, writing in The Atlantic, details why that isn’t the case—not to downplay the impact of the fires, but to educate audiences on how the various systems of our planet interact:

The Amazon is a vast, ineffable, vital, living wonder. It does not, however, supply the planet with 20 percent of its oxygen.

As the biochemist Nick Lane wrote in his 2003 book Oxygen, “Even the most foolhardy destruction of world forests could hardly dint our oxygen supply, though in other respects such short-sighted idiocy is an unspeakable tragedy.”

The Amazon produces about 6 percent of the oxygen currently being made by photosynthetic organisms alive on the planet today. But surprisingly, this is not where most of our oxygen comes from. In fact, from a broader Earth-system perspective, in which the biosphere not only creates but also consumes free oxygen, the Amazon’s contribution to our planet’s unusual abundance of the stuff is more or less zero. This is not a pedantic detail. Geology provides a strange picture of how the world works that helps illuminate just how bizarre and unprecedented the ongoing human experiment on the planet really is. Contrary to almost every popular account, Earth maintains an unusual surfeit of free oxygen—an incredibly reactive gas that does not want to be in the atmosphere—largely due not to living, breathing trees, but to the existence, underground, of fossil fuels.

Read Brannen’s piece in full here.

,

Sam VargheseAustralian politicians are in it for the money

Australian politicians are in the game for one thing: money. Most of them are so incompetent that they would not be paid even half of what they earn were they to try for jobs in the private sector.

That’s why former members of the Victorian state parliament, who were voted out at the last election in 2018, are struggling to find jobs.

Apparently, some have been told by recruitment agencies that they “don’t know where to fit you”, according to a news report from the Melbourne tabloid Herald Sun.

People who enter politics in Australia are paid well, far above what the private sector pays unless one is very high up in the hierarchy.

Politicians get where they are by doing favours for people in high places and moving up the greasy pole.

They get all kinds of fancy allowances and benefits. They have no scruples about taking from the public purse whenever they can without getting caught.

They are the worst kind of scum.

Australia is a highly over-governed place, with three levels of government: the national parliament, the parliaments in the different states and territories and the local governments.

At each level there is plenty of scope for fattening one’s own lamb. There are a handful of people who have some kind of vocation for public service; the rest are out to grab whatever they can before they are voted out.

Nobody should have any pity for people of this kind given what they do when they are in office. About the only thing they do is to prepare things so that they will have a job here, there or anywhere when they finally get thrown out of politics.

Some get lanced so early in their political lives that they are unprepared. Perhaps they should be put to work as garbage collectors. But one doubts they would have the physical and mental fortitude to get through such a job.

,

CryptogramFriday Squid Blogging: Why Mexican Jumbo Squid Populations Have Declined

A group of scientists conclude that it's shifting weather patterns and ocean conditions.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDWhat does it mean to become a TED Fellow?

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Every year, TED begins a new search looking for the brightest thinkers and innovators to be part of the TED Fellows program. With nearly 500 visionaries representing 300 different disciplines, these extraordinary individuals are making waves, disrupting the status quo and creating real impact.

Through a rigorous application process, we narrow down our candidate pool of thousands to just 20 exceptional people. (Trust us, this is not easy to do.) You may be wondering what makes for a good application (read more about that here), but just as importantly: What exactly does it mean to be a TED Fellow? Yes, you’ll work hand-in-hand with the Fellows team to give a TED Talk on stage, but being a Fellow is so much more than that. Here’s what happens once you get that call.

1. You instantly have a built-in support system.

Once selected, Fellows become part of our active global community. They are connected to a diverse network of other Fellows who they can lean on for support, resources and more. To get a better sense of who these people are (fishing cat conservationists! space environmentalists! police captains!), take a closer look at our class of 2019 Fellows, who represent 12 countries across four continents. Their common denominator? They are looking to address today’s most complex challenges and collaborate with others — which could include you.

2. You can participate in TED’s coaching and mentorship program.

To help Fellows achieve an even greater impact with their work, they are given the opportunity to participate in a one-of-a-kind coaching and mentoring initiative. Collaboration with a world-class coach or mentor helps Fellows maximize effectiveness in their professional and personal lives and make the most of the fellowship.

The coaches and mentors who support the program are some of the world’s most effective and intuitive individuals, each inspired by the TED mission. Fellows have reported breakthroughs in financial planning, organizational effectiveness, confidence and interpersonal relationships thanks to coaches and mentors. Head here to learn more about this initiative. 

3. You’ll receive public relations guidance and professional development opportunities, curated through workshops and webinars. 

Have you published exciting new research or launched a groundbreaking project? We partner with a dedicated PR agency to provide PR training and valuable media opportunities with top tier publications to help spread your ideas beyond the TED stage. The TED Fellows program has been recognized by PR News for our “PR for Fellows” program.

In addition, there are vast opportunities for Fellows to hone their skills and build new ones through invigorating workshops and webinars that we arrange throughout the year. We also maintain a Fellows Blog, where we continue to spotlight Fellows long after they give their talks.

***

Over the last decade, our program has helped Fellows impact the lives of more than 180 million people. Success and innovation like this doesn’t happen in a vacuum — it’s sparked by bringing Fellows together and giving them this kind of support. If this sounds like a community you want to join, apply to become a TED Fellow by August 27, 2019 11:59pm UTC.

Sociological ImagesSurviving Student Debt

Recent estimates indicate that roughly 45 million students in the United States have incurred student loans during college. Democratic candidates like Senators Elizabeth Warren and Bernie Sanders have proposed legislation to relieve or cancel this debt burden. Sociologist Tressie McMillan Cottom's congressional testimony on behalf of Warren's student loan relief plan last April reveals the importance of sociological perspectives on the debt crisis. Sociologists have recently documented the conditions driving student loan debt and its impacts across race and gender.

College debt is the new black.
Photo Credit: Mike Rastiello, Flickr CC

In recent decades, students have enrolled in universities at increasing rates due to the "education gospel," where college credentials are touted as public goods and career necessities, encouraging students to seek credit. At the same time, student loan debt has rapidly increased, prompting students to ask whether the risks of loan debt during early adulthood outweigh the reward of a college degree. Student loan risks include economic hardship, mental health problems, and delayed adult transitions such as starting a family.

Individual debt has also led to disparate impacts among students of color, who are more likely to hail from low-income families. Recent evidence suggests that Black students are more likely to drop out of college due to debt and return home after incurring more debt than their white peers. Racial disparities in student loan debt continue into their mid-thirties and impact the white-Black racial wealth gap.

365.75
Photo Credit: Kirstie Warner, Flickr CC

Other work reveals gendered disparities in student debt. One survey found that while women were more likely to incur debt than their male peers, men with higher levels of student debt were more likely to drop out of college than women with similar amounts of debt. The authors suggest that women's labor market opportunities — often more likely to require college degrees than men's — may account for these differences. McMillan Cottom's interviews with 109 students from for-profit colleges uncover how Black, low-income women in particular bear the burden of student loans. For many of these women, the rewards of college credentials outweigh the risks of high student loan debt.

Amber Joy is a PhD candidate in the Department of Sociology at the University of Minnesota. Her current research interests include punishment, policing, victimization, youth, and the intersections of race, gender, and sexuality. Her dissertation explores youth responses to sexual violence within youth correctional facilities.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPhishers are Angling for Your Cloud Providers

Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals.

Stamford, Ct.-based United Rentals [NYSE:URI] is the world's largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rentals customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked.

While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site (unitedrentals.com).

A screen shot of the malicious email that spoofed United Rentals.

In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner.

“Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read.

“The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.”

United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems.

“At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer.

United Rentals would not name the third-party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014 and visible in the screenshot above as "wVw.unitedrentals.com") point to Pardot, an email marketing division of cloud CRM giant Salesforce.

Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems.
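
That delegation is visible in plain DNS: the dedicated subdomain is typically just a CNAME record pointing at the CRM vendor. Here is a minimal sketch using Node's resolver -- the hostname below is made up for illustration and is not United Rentals' actual record:

// Sketch: checking where a marketing subdomain actually points.
// The hostname below is a hypothetical example, not a real record.
import { promises as dns } from "node:dns";

async function whereDoesItPoint(host: string): Promise<void> {
  try {
    // A CNAME aliasing the host to the CRM vendor is the telltale sign.
    const cnames = await dns.resolveCname(host);
    console.log(`${host} is an alias for: ${cnames.join(", ")}`);
  } catch {
    console.log(`${host} has no CNAME record (or does not resolve).`);
  }
}

whereDoesItPoint("go.example-marketing.example");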

Salesforce told KrebsOnSecurity that this was not a compromise of Pardot, but of a Pardot customer account that was not using multi-factor authentication.

“UR uses a third party marketing agency that utilizes the Pardot platform,” said Salesforce spokesman Bradford Burns. “The third party marketing agency is who was compromised, not a Pardot employee.”

This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud.

“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.”

Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks.

Image: APWG

Update, 2:55 p.m. ET: Added comments and responses from Salesforce.

CryptogramAttacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, and Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that's not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn't imagine that they would be necessary. The results are predictable.

The paper: "Practical Enclave Malware with Intel SGX."

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel's threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user's behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

Worse Than FailureError'd: Resistant to Change

Tom H. writes, "They got rid of their old, outdated fax machine, but updating their website? Yeah, that might take a while."

 

"In casinos, they say the house always wins. In this case, when I wanted to cash in my winnings, I gambled and lost against Windows 7 Professional," Michelle M. wrote.

 

Martin writes, "Wow! It's great to see Apple is going the extra mile by protecting my own privacy from myself!"

 

"Yes, Amazon Photos, with my mouse clicks, I will fix you," wrote Amos B.

 

"When searches go wrong at AliExpress they want you to know these three things," Erwan R. wrote.

 

Chris A. writes, "It's like Authy is saying 'I have no idea what you just did, but, on the bright side, there weren't any errors!'"

 

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Krebs on SecurityRansomware Bites Dental Data Backup Firm

PerCSoft, a Wisconsin-based company that manages a remote data backup service relied upon by hundreds of dental offices across the country, is struggling to restore access to client systems after falling victim to a ransomware attack.

West Allis, Wis.-based PerCSoft is a cloud management provider for Digital Dental Record (DDR), which operates an online data backup service called DDS Safe that archives medical records, charts, insurance documents and other personal information for various dental offices across the United States.

The ransomware attack hit PerCSoft on the morning of Monday, Aug. 26, and encrypted dental records for some — but not all — of the practices that rely on DDS Safe.

PerCSoft did not respond to requests for comment. But Brenna Sadler, director of communications for the Wisconsin Dental Association, said the ransomware encrypted files for approximately 400 dental practices, and that somewhere between 80 and 100 of those clients have now had their files restored.

Sadler said she did not know whether PerCSoft and/or DDR had paid the ransom demand, what ransomware strain was involved, or how much the attackers had demanded.

But updates to PerCSoft’s Facebook page and statements published by both PerCSoft and DDR suggest someone may have paid up: The statements note that both companies worked with a third party software company and were able to obtain a decryptor to help clients regain access to files that were locked by the ransomware.

Update: Several sources are now reporting that PerCSoft did pay the ransom, although it is not clear how much was paid. One member of a private Facebook group dedicated to IT professionals serving the dental industry shared the following screenshot, which is purportedly from a conversation between PerCSoft and an affected dental office, indicating the cloud provider was planning to pay the ransom:

Another image shared by members of that Facebook group indicates the ransomware that attacked PerCSoft is an extremely advanced and fairly recent strain known variously as REvil and Sodinokibi.

Original story:

However, some affected dental offices have reported that the decryptor did not work to unlock at least some of the files encrypted by the ransomware. Meanwhile, several affected dentistry practices said they feared they might be unable to process payroll payments this week as a result of the attack.

Cloud data and backup services are a prime target of cybercriminals who deploy ransomware. In July, attackers hit QuickBooks cloud hosting firm iNSYNQ, holding data hostage for many of the company’s clients. In February, cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly urging their customers that paying up is the fastest route back to business-as-usual.

It remains unclear whether PerCSoft or DDR — or perhaps their insurance provider — paid the ransom demand in this attack. But new reporting from independent news outlet ProPublica this week sheds light on another possible explanation why so many victims are simply coughing up the money: Their insurance providers will cover the cost — minus a deductible that is usually far less than the total ransom demanded by the attackers.

More to the point, ProPublica found, such attacks may be great for business if you’re in the insurance industry.

“More often than not, paying the ransom is a lot cheaper for insurers than the loss of revenue they have to cover otherwise,” said Minhee Cho, public relations director of ProPublica, in an email to KrebsOnSecurity. “But, by rewarding hackers, these companies have created a perverted cycle that encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”

“In fact, it seems hackers are specifically extorting American companies that they know have cyber insurance,” Cho continued. “After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware.”

Read the full ProPublica piece here. And if you haven’t already done so, check out this outstanding related reporting by ProPublica from earlier this year on how security firms that help companies respond to ransomware attacks also may be enabling and emboldening attackers.

CryptogramAI Emotion-Detection Arms Race

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI's outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.

Academic paper.
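
The researchers' system is a trained model operating on raw speech, but the core idea of "flattening" an emotional cue can be shown with a toy example. This sketch just pulls a pitch contour toward its mean -- a stand-in for what the paper does with learned representations, not the paper's actual method:

// Toy illustration of "flattening" an emotional cue -- NOT the paper's
// method. We squeeze a pitch contour (Hz per frame) toward its mean,
// erasing the swings that emotion-detection models key on.
function flattenContour(pitchHz: number[], strength = 0.8): number[] {
  const mean = pitchHz.reduce((sum, f) => sum + f, 0) / pitchHz.length;
  // strength = 0 leaves the contour untouched; 1 removes all variation.
  return pitchHz.map((f) => f + strength * (mean - f));
}

const excited = [180, 220, 260, 240, 300, 210]; // wide swings: emotional
console.log(flattenContour(excited).map(Math.round));
// -> [224, 232, 240, 236, 248, 230]: nearly flat, prosody largely erased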

Worse Than FailureCodeSOD: Bassackwards Compatibility

A long time ago, you built a web service. It was long enough ago that you chose XML as your serialization format. It worked fine, but before long, customers started saying that they’d really like to use JSON, so now you need to expose a slightly different, JSON-powered version of your API. To make it easy, you release a JSON client developers can drop into their front-ends.

Conor is one of those developers, and while examining the requests the client sent, he discovered a unique way of making your XML web-service JSON-friendly.

{"fetch":"<fetch version='1.0'><entity><entityDescriptor id='10'/>…<loadsMoreXML/></entity></fetch>"}

Simplicity itself!
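
If you're wondering how a payload like that comes to exist, here's a hedged guess at the client's "serializer" -- the names and structure are assumed, since we only ever saw the output:

// Hypothetical reconstruction of the "JSON-friendly" client, for
// illustration. The query is still XML, built by string concatenation;
// JSON's only job is to escape it and wrap it in a one-key object.
function buildFetchPayload(entityId: number): string {
  const xml =
    `<fetch version='1.0'><entity>` +
    `<entityDescriptor id='${entityId}'/>` +
    `</entity></fetch>`;
  return JSON.stringify({ fetch: xml }); // ta-da: "JSON" support
}

console.log(buildFetchPayload(10));
// {"fetch":"<fetch version='1.0'><entity><entityDescriptor id='10'/></entity></fetch>"}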

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Google AdsenseSimplifying our content policies for publishers

One of our top priorities is to sustain a healthy digital advertising ecosystem, one that works for everyone: users, advertisers and publishers. On a daily basis, teams of Google engineers, policy experts, and product managers combat and stop bad actors. Just last year, we removed 734,000 publishers and app developers from our ad network and ads from nearly 28 million pages that violated our publisher policies.

But we’re not just stopping bad actors. Just as critical to our mission is the work we do every day to help good publishers in our network succeed. One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so they are easier to understand and follow. That’s why we'll be simplifying the way our content policies are presented to publishers, and standardizing content policies across our publisher products.

A simplified publisher experience
In September, we’ll update the way our publisher content policies are presented with a clear outline of the types of content where advertising is not allowed or will be restricted.

Our Google Publisher Policies will outline the types of content that are not allowed to show ads through any of our publisher products. This includes policies against illegal content, dangerous or derogatory content, and sexually explicit content, among others.

Our Google Publisher Restrictions will detail the types of content, such as alcohol or tobacco, that don’t violate policy, but that may not be appealing for all advertisers. Publishers will not receive a policy violation for trying to monetize this content, but only some advertisers and advertising products—the ones that choose this kind of content—will bid on it. As a result, Google Ads will not appear on this content and this content will receive less advertising than non-restricted content will. 

The Google Publisher Policies and Google Publisher Restrictions will apply to all publishers, regardless of the products they use—AdSense, AdMob or Ad Manager.

These changes are the next step in our ongoing efforts to make it easier for publishers to navigate our policies so their businesses can continue to thrive with the help of our publisher products.


Posted by:
Scott Spencer, Director of Sustainable Ads


CryptogramThe Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: "After all, we are not talking about protecting the nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications."

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

This wasn't true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn't have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications -- Advanced Encryption Standard (AES) -- is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense's classified analogs of the Internet­ -- Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren't yet public -- use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.
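
To make the point concrete: the AES that secures Top Secret material is the very same primitive that ships in every consumer runtime. A minimal sketch using nothing but Node's standard library -- there is no separate "military" API to call:

// The same AES-256-GCM for everyone -- consumer, corporate, or classified.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = randomBytes(32); // 256-bit key: NSA-approved up to Top Secret
const iv = randomBytes(12);  // 96-bit nonce, the standard size for GCM

const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([
  cipher.update("meet at the usual place", "utf8"),
  cipher.final(),
]);
const tag = cipher.getAuthTag(); // tamper detection, not just secrecy

const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(),
]).toString("utf8");

console.log(plaintext); // "meet at the usual place"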

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example -- and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn't see active combat, it's modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it's nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.

LongNowThe Vineyard Gazette on Revive & Restore’s Heath Hen De-extinction Efforts

The world's last heath hen went extinct on Martha's Vineyard in 01932. The Revive & Restore team recently paid a visit there to discuss their efforts to bring the species back.

Members of the Revive & Restore team next to a statue of Booming Ben, the last heath hen.

From the Vineyard Gazette:

Buried deep within the woods of the Manuel Correllus State Forest is a statue of Booming Ben, the world’s final heath hen. Once common all along the eastern seaboard, the species was hunted to near-extinction in the 1870s. Although a small number of the birds found refuge on Martha’s Vineyard, they officially disappeared in 1932 — with Booming Ben, the last of their kind, calling for female mates who were no longer there to hear him.

“There is no survivor, there is no future, there is no life to be recreated in this form again,” Gazette editor Henry Beetle Hough wrote. “We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light. We are in touch with the reality of extinction.”

The statue memorializes that reality.

Since 2013, however, a group of cutting-edge researchers with the group Revive and Restore have been hard at work to bring back the heath hen as part of an ambitious avian de-extinction project. The project got started when Ryan Phelan, who co-founded Revive and Restore with her husband, scientist and publisher of the Whole Earth Catalogue, Stewart Brand, began to think broadly about the goals for their organization.

“We started by saying what’s the most wild idea possible?” Ms. Phelan said. “What’s the most audacious? That would be bringing back an extinct species.”

Read the piece in full here.

Worse Than FailureTeleported Release

Matt works at an accounting firm, as a data engineer. He makes reports for people who don’t read said reports. Accounting firms specialize in different areas of accountancy, and Matt’s firm is a general firm with mid-size clients.

The CEO of the firm is a legacy from the last century. The most advanced technology on his desk is a business calculator and a pencil sharpener. He still doesn’t use a cellphone. But he does have a son, who is “tech savvy”, which gives the CEO a horrible idea of how things work.

Usually, the CEO's requests are pretty light: sorting Excel files or sorting the output of an existing report. Sometimes the requests are bizarre or utter nonsense. And, because the boss doesn't know what the technical folks are doing, some of the IT staff may be a bit lazy about following best practices.

This means that most of Matt's morning is spent doing what is essentially Tier 1 support before he gets into doing his real job. Recently, there was a worse crunch, as actual support person Lucinda was out on maternity leave, and Jackie, the one other developer, was off on vacation on a foreign island with no Internet. Matt was in the middle of eating a delicious lunch of take-out lo mein when his phone rang. He sighed when he saw the number.

“Matt!” the CEO exclaimed. “Matt! We need to do a build of the flagship app! And a deploy!”

The app was rather large, and a build could take upwards of 45 minutes, depending on the day and how the IT gods were feeling. But the process was automated, the latest changes all got built and deployed each night. Anything approved was released within 24 hours. With everyone out of the office, there hadn’t been any approved changes for a few weeks.

Matt checked the Github to see if something went wrong with the automated build. Everything was fine.

“Okay, so I’m seeing that everything built on GitHub and everything is available in production,” Matt said.

“I want you to do a manual build, like you used to.”

"If I were to compile right now, it could take quite a while, and redeploying runs the risk of taking our clients offline, and nothing would be any different."

“Yes, but I want a build that has the changes which Jackie was working on before she left for vacation.”

Matt checked the commit history, and sure enough, Jackie hadn't committed any changes since two weeks before leaving on vacation. "It doesn't look like she pushed those changes to Github."

“Githoob? I thought everything was automated. You told me the process was automated,” the CEO said.

“It’s kind of like…” Matt paused to think of an analogy that could explain this to a golden retriever. “Your dishwasher, you could put a timer on it to run it every night, but if you don’t load the dishwasher first, nothing gets cleaned.”

There was a long pause as the CEO failed to understand this. “I want Jackie’s front-page changes to be in the demo I’m about to do. This is for Initech, and there’s millions of dollars riding on their account.”

“Well,” Matt said, “Jackie hasn’t pushed- hasn’t loaded her metaphorical dishes into the dishwasher, so I can’t really build them.”

“I don’t understand, it’s on her computer. I thought these computers were on the cloud. Why am I spending all this money on clouds?”

“If Jackie doesn’t put it on the cloud, it’s not there. It’s uh… like a fax machine, and she hasn’t sent us the fax.”

“Can’t you get it off her laptop?”

“I think she took it home with her,” Matt said.

“So?”

“Have you ever seen Star Trek? Unless Scotty can teleport us to Jackie’s laptop, we can’t get at her files.”

The CEO locked up on that metaphor. “Can’t you just hack into it? I thought the NSA could do that.”

“No-” Matt paused. Maybe Matt could try and recreate the changes quickly? “How long before this meeting?” he asked.

“Twenty minutes.”

“Just to be clear, you want me to do a local build with files I don’t have by hacking them from a computer which may or may not be on and connected to the Internet, and then complete a build process which usually takes 45 minutes- at least- deploy to production, so you can do a demo in twenty minutes?”

“Why is that so difficult?” the CEO demanded.

“I can call Jackie, and if she answers, maybe we can figure something out.”

The CEO sighed. “Fine.”

Matt called Jackie. She didn’t answer. Matt left a voicemail and then went back to eating his now-cold lo mein.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Krebs on SecurityCybersecurity Firm Imperva Discloses Breach

Imperva, a leading provider of Internet firewall services that help Web sites block malicious cyberattacks, alerted customers on Tuesday that a recent data breach exposed email addresses, scrambled passwords, API keys and SSL certificates for a subset of its firewall users.

Redwood Shores, Calif.-based Imperva sells technology and services designed to detect and block various types of malicious Web traffic, from denial-of-service attacks to digital probes aimed at undermining the security of Web-based software applications.

Image: Imperva

Earlier today, Imperva told customers that it learned on Aug. 20 about a security incident that exposed sensitive information for some users of Incapsula, the company’s cloud-based Web Application Firewall (WAF) product.

“On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017,” wrote Heli Erickson, director of analyst relations at Imperva.

“We want to be very clear that this data exposure is limited to our Cloud WAF product,” Erickson’s message continued. “While the situation remains under investigation, what we know today is that elements of our Incapsula customer database from 2017, including email addresses and hashed and salted passwords, and, for a subset of the Incapsula customers from 2017, API keys and customer-provided SSL certificates, were exposed.”

Companies that use the Incapsula WAF route all of their Web site traffic through the service, which scrubs the communications for any suspicious activity or attacks and then forwards the benign traffic on to its intended destination.
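
Conceptually, a cloud WAF is a reverse proxy with opinions: all traffic enters the provider's edge, gets inspected, and only benign requests continue on to the origin. A toy sketch of that flow -- in no way Imperva's implementation, and the "scrubbing" here is a single naive check:

// Toy cloud-WAF flow: inspect, then forward. Not Imperva's implementation.
import http from "node:http";

const ORIGIN_HOST = "origin.example.com"; // the customer's real server

http.createServer((req, res) => {
  // Stand-in "scrubbing": block one crude SQL-injection pattern.
  if (/union\s+select/i.test(req.url ?? "")) {
    res.writeHead(403);
    res.end("Blocked by WAF\n");
    return;
  }
  // Benign traffic is forwarded on to its intended destination.
  const upstream = http.request(
    { host: ORIGIN_HOST, path: req.url, method: req.method, headers: req.headers },
    (originRes) => {
      res.writeHead(originRes.statusCode ?? 502, originRes.headers);
      originRes.pipe(res);
    },
  );
  req.pipe(upstream);
}).listen(8080);

An attacker holding the customer's API keys could simply reconfigure that inspection step away, which is the scenario Mogull describes below.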

Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers in business today.

According to Mogull, an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites.

At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings and exempt or “whitelist” from the WAF’s scrubbing technology any traffic coming from the attacker. A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker.

"Attackers could whitelist themselves and begin attacking the site without the WAF's protection," Mogull told KrebsOnSecurity. "They could modify any of the Incapsula security settings, and if they got [the target's SSL] certificate, that can potentially expose traffic. For a security-as-a-service provider like Imperva, this is the kind of mistake that's up there with their worst nightmare."

Imperva urged all of its customers to take several steps that might mitigate the threat from the data exposure, such as changing passwords for user accounts at Incapsula, enabling multi-factor authentication, resetting API keys, and generating/uploading new SSL certificates.

Alissa Knight, a senior analyst at Aite Group, said the exposure of Incapsula users’ scrambled passwords and email addresses was almost incidental given that the intruders also made off with customer API keys and SSL certificates.

Knight said although we don’t yet know the cause of this incident, such breaches at cloud-based firms often come down to small but ultimately significant security failures on the part of the provider.

“The moral of the story here is that people need to be asking tough questions of software-as-a-service firms they rely upon, because those vendors are being trusted with the keys to the kingdom,” Knight said. “Even if the vendor in question is a cybersecurity company, it doesn’t necessarily mean they’re eating their own dog food.”

CryptogramThe Threat of Fake Academic Research

Interesting analysis of the possibility, feasibility, and efficacy of deliberately fake scientific research, something I had previously speculated about.

Worse Than FailureCodeSOD: Yesterday's Enterprise

I bumped into a few co-workers (and a few readers- that was a treat!) at Abstractions last week. My old co-workers informed me that the mainframe system, which had been “going away any day now” since about 1999, had finally gone away, as of this year.

A big part of my work at that job had been about running systems in parallel with the mainframe in some fashion, which meant I made a bunch of “datapump” applications which fed data into or pulled data out of the mainframe. Enterprise organizations often don’t know what their business processes are: the software which encodes the process is older than most anyone working in the organization, and it must work that way, because that’s the process (even though no one knows why).

Robert used to work for a company which offers an “enterprise” product, and since they know that their customers don’t actually know what they’re doing, this product can run in parallel with their existing systems. Of course, running in parallel means that you need to synchronize with the existing system.

So, for example, there were two systems. One we’ll call CQ and one we’ll call FP. Let’s say FP has the latest data. We need a method which updates CQ based on the state of FP. This is that method.

private boolean updateCQAttrFromFPAttrValue(CQRecordAttribute cqAttr, String cqtype,
        Attribute fpAttr)
        throws Exception
    {
        AppLogService.debug("Invoking " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");

        String csAttrName = cqAttr.getName();
        String csAttrtype = cqAttr.getAttrType();
        String str = avt.getFPAttributeValueAsString(fpAttr);
        if (str == null)
            return false;

        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) != 0
            || csAttrtype.compareTo(CQConstants.CQ_MULTILINE_STRING_TYPE) != 0)
        {
            String csStrValue = cqAttr.getStringValue();
            if (str == null) {
                return false;
            }
            if (csStrValue != null) {
                if (str.compareTo(csStrValue) == 0) // No need to update. Still
                // same values
                {
                    return false;
                }
            }
            cqAttr.setStringValue(str);
            AppLogService.debug("CQ Attribute Name- " + csAttrName + ", Type- "
                + csAttrtype + ", Value- " + str);
            AppLogService.debug("Exiting " + this.getClass().getName()
                + "->updateCSAttrFromFPAttrValue()");
            return true;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getCheckBox() != null) {
                boolean val = choicetype.getCheckBox().getValue();

                if (val) {
                    str = "1";
                }

                if (str.equals(cqAttr.getStringValue())) {
                    return false;
                }

                cqAttr.setStringValue(str);

                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCQAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_DATE_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getDate() != null) {
                Calendar cald = choicetype.getDate().getValue();
                if (cald == null) {
                    return false;
                } else {
                    SimpleDateFormat fmt = new SimpleDateFormat(template
                        .getAdapterdateformat());
                    cqAttr.setStringValue(fmt.format(cald.getTime()));
                }
                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCSAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }

        AppLogService.debug("Exiting " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");
        return false;
    }

For starters, I have to say that the method name is a thing of beauty: updateCQAttrFromFPAttrValue. It’s meaningful if you know the organizational jargon, but utterly opaque to everyone else in the world. Of course, this is the last time the code is clear even to those folks, as the very first line is a log message which outputs the wrong method name: updateCSAttrFromFPAttrValue. After that, all of our cqAttr properties get stuffed into csAttr variables.

And the fifth line: String str = avt.getFPAttributeValueAsString(fpAttr);

avt stands for “attribute value translator”, and yes, everything is string-ly typed, because of course it is.

That gets us five lines in, and it's all downhill from there. Judging from all the getCheckBox() calls, we're interacting with UI components directly, and pretty much every logging message outputs the wrong method name, except in the rare case where it doesn't.

And as ugly and awful as this code is, it’s strangely familiar. Oh, I’ve never seen this particular bit of code before. But I have seen the code my old job wrote to keep the mainframe in sync with the Oracle ERP and the home-grown Access databases and internally developed dashboards and… it all looked pretty much like this.

The code you see here? This is the code that runs the world. This is what gets invoices processed, credit cards billed, inventory shipped, factories staffed, and hazardous materials accounted for.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Chaotic IdealismHow I Live Now

Years ago, when I was a biomedical engineering major and I thought I was going to be employable, I lived in an apartment and had a car and did all those things non-disabled people do. And I was stressed out, really stressed out, living on the edge of independence and just teetering, trying to keep my balance.

Eventually I switched majors from BME to psychology–an easier program, and one that interested me.

The car didn't last long, totaled thanks to the poor reflexes and lack of short-notice judgment that make me a dangerous driver. My driver's license ran out; now I just have a state ID. I moved closer to WSU, but my executive function was still bad, and it was hard for me to get to class. They sent a van across the street to pick me up. I forgot to study; they provided one of their testing rooms, distraction-free, so I would have somewhere away from the temptations of my apartment to study. They interceded with professors and got me extra time.

I was taking classes part-time, with intensive help from the department of disability services; I couldn’t sustain full-time work. If Wright State hadn’t been willing to go out of its way for me, I’d never have gotten a degree at all. I was diagnosed with dysthymia as well as episodic major depression, which explained why I never seemed to get my energy back after an episode.

I graduated. GPA 3.5, respectable. Dreaming of graduate school. Blew the practice GRE out of the water.

I tried to get a job. I worked with my job coach for more than a year. I wanted a graduate assistantship, but nobody wanted me. We looked at jobs that would let me use my education, but nobody was hiring. Eventually we branched out into more low-level work–hospital receptionist, dog kennel attendant, pharmacy technician. They were all part-time; by that point I knew better than to assume I could stick it out for a 40-hour work week.

The pharmacy tech job almost succeeded, but the boss couldn’t work with the assisted transport service that could only deliver me between the hours of 9 and 3–plus, they’d assured me it was part time, only to schedule me for 35 hours. I can only assume they hired “part-time” workers to avoid paying them benefits.

I signed up with Varsity Tutors to teach math, science, writing, and statistics. I enjoyed the work, especially when I got to use my writing ability to help someone communicate clearly, or made statistics understandable to someone unused to thinking about math. But it wasn’t steady work; you were competing with all the other tutors. You had to accept a new assignment within seconds, even before you knew what it was or whether you could teach that topic, because if you didn’t someone else would click on it first. Students paid a huge fee–$50 an hour or thereabouts–of which we only got about $10. Sometimes, when I grabbed a job that involved teaching something I myself hadn’t learned yet, I had to spend hours preparing for a one-hour session–and no, preparation hours aren’t paid.

I grew tired of cheating the customers; I’m not worth a $50-an-hour tutoring fee, and practically all of the money went to the company for doing nothing more than maintaining a Web service to match tutors and clients. And since I’d paid, out of my own pocket, for a tablet, Web cam, and internet connection, I hadn’t actually made any money anyway. I suppose I would have, if I’d stuck with it, but I just don’t like feeling so dishonest. It’s been more than a year since I last had contact with them, so I can say that. No more non-disclosure agreement. I’m sure they haven’t changed, though.

I was running out of money. My disability payments couldn’t pay for my rent. Eventually, a friend who was remodeling a house in a Cincinnati suburb offered me a rented room, within my means, and I accepted.

For a year, I lived in a room of a house undergoing remodeling. Eventually, I moved downstairs, into a finished basement room. College loan companies bombarded me with mail, demanding money I didn’t have. With the US government becoming increasingly unstable, I worried that if I even tried to work, I might lose Medicaid, and without a Medicaid buy-in available, I would have to choose between working and taking my medication (note: I cannot work if I am not taking my meds; in fact, I am in deadly danger if I do not take my meds). It didn’t help that my area has no particularly good public transport service, and the assisted transport service is–as always–unreliable and cannot be used to get to work.

Eventually I gave in. I applied for permanent disability discharge of my student loans, and was granted it. I feel dishonest–again–for not being able to predict, when I got my degree, that it wouldn’t make me employable. But there it is. The world doesn’t like to hire people who are different, or who need accommodations, or who can’t fit into the machinery of society.

But a person can’t just sit around. I do a lot of volunteer work now. I’m the primary researcher for ASAN’s Disability Day of Mourning web site; I spend an hour or more every day monitoring the news, keeping records, and writing bios of disabled people murdered by their families and caregivers. I’ve kept up with my own Autism Memorial site, too, and the list is nearly 500 names long now. Seems like a lot, but my spreadsheet of disabled homicide victims in general is approaching five thousand.

Two days a week, I volunteer at the library. I put away books, straighten shelves, help patrons find things. The board of directors of the library fired all the pages years ago as a cost-cutting measure, so it’s volunteers like me that keep the books on the shelves while the employees are stuck manning the checkout desk or the book return. I find the work very meaningful, especially in the current political climate; libraries are wonderful, subversive places that teach a person to think on their own.

In the backyard of the house, I’m growing a garden. Gardening is new to me, but last year I had an overabundance of cherry tomatoes, and this year I’m growing tomatoes, peppers, cucumbers, carrots, sunflowers, and various herbs. I keep the lawn mowed and the bushes trimmed. The garden is a good thing, because lately my food stamps have been cut and I can’t really afford produce anymore.

My housemate’s girlfriend moved in with him last summer. She’s a sweet teacher with two guinea pigs and a love of stories. On Fridays, we drive for an hour to go play D&D with friends, and I bake cookies. I’ve learned to bake cookies over the last few years; at first it was just frozen cookie dough, then from scratch. I’ve gotten pretty good at it.

After my cat Tiny died of kidney failure, Christy got more vocal and demanding. She yells at me now when she wants attention, and climbs up on my bed to snuggle with me. She seems to think she needs to do the job of two cats. She’s getting older now, less able to climb to the top of the furniture or snatch a fly out of the air with her paws; but she still gets the kitty crazies, running around and skating on the rag rugs I made to keep the concrete floor from being quite so chilly.

I’m still myself–idealistic, protective, with a deep need to be useful. Living now is easier than it used to be when I had college loans; I just don’t buy anything I don’t absolutely need, help where I can, and let the rest go. I still have to deal with depression and with the executive dysfunction and weird brain of autism, but that’s a part of me, and I see no sense in looking down on myself just because I’m disabled.

I worry about the future. Just when it’s becoming crucial, our country’s dropping the ball on climate change. Our president is erratic, untrustworthy, and unethical. Authoritarianism looms large on the horizon. I do my best as a private citizen to help change things–with a focus on preserving democracy–but it’s still frightening, because disabled people are always the ones who get hurt first, right along with the poor and the minorities. I have quite a few deaths in ICE detainment in that database of mine, all of disabled immigrants. Why do people have to hate each other so much? Life is not a zero-sum game; if we help others, we ourselves benefit. We have so much to give; why are we refusing to share?

I find meaning in life from all the little things I do to make the world a little better, even if it’s just making cookies or showing a kid where to find the “Harry Potter” books. I used to think I might do something grand with my life, but now I don’t really think so. I think maybe a better world is made up of a lot of little people, all doing little things, all pushing in the right direction, until the sheer weight of numbers can move mountains.

CryptogramDetecting Credit Card Skimmers

Modern credit card skimmers hidden in self-service gas pumps communicate via Bluetooth. There's now an app that can detect them:

The team from the University of California San Diego, who worked with other computer scientists from the University of Illinois, developed an app called Bluetana which not only scans and detects Bluetooth signals, but can actually differentiate those coming from legitimate devices -- like sensors, smartphones, or vehicle tracking hardware -- from card skimmers that are using the wireless protocol as a way to harvest stolen data. The full details of what criteria Bluetana uses to differentiate the two aren't being made public, but its algorithm takes into account metrics like signal strength and other telltale markers that were pulled from data based on scans made at 1,185 gas stations across six different states.
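
Since Bluetana's actual criteria aren't public, any reconstruction is guesswork. Purely as an illustration, a heuristic of that general shape might look like this sketch, where the module-name watch-list and the RSSI cutoff are hypothetical stand-ins rather than anything from the researchers:

using System;
using System.Collections.Generic;
using System.Linq;

// One device seen during a scan: its advertised name and signal
// strength (RSSI in dBm; values closer to zero mean a nearer device).
record Sighting(string Name, int Rssi);

static class SkimmerHeuristic
{
    // Default names of cheap Bluetooth-to-serial modules of the kind
    // often repurposed in skimmers (a hypothetical watch-list).
    static readonly string[] SuspectPrefixes = { "HC-05", "HC-06", "RNBT" };

    public static IEnumerable<Sighting> Flag(IEnumerable<Sighting> scan) =>
        scan.Where(s =>
            SuspectPrefixes.Any(p => s.Name.StartsWith(p, StringComparison.OrdinalIgnoreCase))
            && s.Rssi > -60); // strong signal: likely inside the pump you are standing at

}

The signal-strength test mirrors the quote above: a skimmer hidden in the pump next to the scanning phone should read far stronger than hardware in passing vehicles.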

LongNowDavid Byrne Launches New Online Magazine, Reasons to Be Cheerful

In his Long Now talk earlier this summer, David Byrne announced that he would soon launch a new website called Reasons to Be Cheerful. The premise, Byrne said, was to document stories and projects that give cause for optimism in troubled times. He was after solutions-oriented efforts that provided tangible lessons that could be broadly applied in different parts of the world.

“I didn’t want something that would only be applied to one culture,” Byrne said.

Reasons to Be Cheerful has now officially launched. Here is Byrne on the project from the press release:

It often seems as if the world is going straight to Hell. I wake up in the morning, I look at the paper, and I say to myself, “Oh no!” Often I’m depressed for half the day. I imagine some of you feel the same.

Recently, I realized this isn’t helping. Nothing changes when you’re numb. So, as a kind of remedy, and possibly as a kind of therapy, I started collecting good news. Not schmaltzy, feel-good news, but stuff that reminded me, “Hey, there’s positive stuff going on! People are solving problems and it’s making a difference!”

I began telling others about what I’d found.

Their responses were encouraging, so I created a website called Reasons to be Cheerful and started writing. Later on, I realized I wanted to make the endeavor a bit more formal. So we got a team together and began commissioning stories from other writers and redesigned the website. Today, we’re relaunching Reasons to be Cheerful as an ongoing editorial project.

We’re telling stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful — that provide a more optimistic and, we believe, more accurate depiction of the world. We hope to balance out some of the amplified negativity and show that things might not be as bad as we think. Stop by whenever you need a reminder.

Learn More

  • Byrne also released a trailer for the website.
  • Watch David Byrne’s Long Now talk here.

Worse Than FailureCodeSOD: Checksum Yourself Before you Wrecksum Yourself

Mistakes happen. Errors crop up. Since we know this, we need to defend against it. When it comes to things like account numbers, we can make a rule about which numbers are valid by using a checksum. A simple checksum might be, "Add the digits together, and repeat until you get a single digit, which, after modulus with a constant, must be zero." This means that most simple data-entry errors will result in an invalid account number, but there's still a nice large pool of valid numbers to draw from.
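
To see that rule in action, here is a minimal sketch in C# (illustrative only; the choice of modulus is arbitrary, and no implementation is given in the article):

static bool IsValidAccountNumber(long accountNumber, int modulus = 3)
{
    // Add the digits together, and repeat until a single digit remains.
    long n = accountNumber;
    while (n >= 10)
    {
        long sum = 0;
        for (long rest = n; rest > 0; rest /= 10)
            sum += rest % 10;
        n = sum;
    }
    // The surviving digit, after modulus with a constant, must be zero.
    return n % modulus == 0;
}

With a modulus of 3, a third of all numbers pass, so the pool of valid account numbers stays large, while any typo that changes a digit by anything other than a multiple of three gets caught.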

James works for a company that deals with tax certificates, and thus needs to generate numbers which meet a similar checksum rule. Unfortunately for James, this is how his predecessor chose to implement it:

while (true)
{
    digits = "";
    for (int i = 0; i < certificateNumber.ToString().Length; i++)
    {
        int doubleDigit = Convert.ToInt32(certificateNumber.ToString().Substring(i, 1)) * 2;
        digits += (doubleDigit.ToString().Length > 1
            ? Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)) + Convert.ToInt32(doubleDigit.ToString().Substring(1, 1))
            : Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)));
    }
    int result = digits.ToString().Sum(c => c - '0');
    if ((result % 10) == 0)
        break;
    else
        certificateNumber++;
}

Whitespace added to make the ternary vaguely more readable.

We start by treating the number as a string, which allows us to access each digit individually, and as we loop, we'll grab a digit and double it. That, unfortunately, gives us a number, which is a big problem. There's absolutely no way to tell if a number is two digits long without turning it back into a string. Absolutely no way! So that's what we do. If the number is two digits, we'll split it back up and add those digits together.

Which, again, gives us one of those pesky numbers. So once we've checked every digit, we'll convert that number back to a useful string, then Sum the characters in the string to produce a result. A result which, we hope, is divisible by 10. If not, we check the next number. Repeat and repeat until we get a valid result.
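
For contrast, the whole dance can be done with plain arithmetic and no string round-trips at all. A minimal sketch (using the same names as above and assuming certificateNumber is a long; this is not the actual fix from James's codebase):

static long NextValidCertificateNumber(long certificateNumber)
{
    while (true)
    {
        int result = 0;
        for (long n = certificateNumber; n > 0; n /= 10)
        {
            int doubled = (int)(n % 10) * 2;               // double each digit
            result += doubled > 9 ? doubled - 9 : doubled; // digit sum of a two-digit double is doubled - 9
        }
        if (result % 10 == 0)
            return certificateNumber;
        certificateNumber++; // checksum failed; try the next number
    }
}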

The worst part, though, is that you can see from the while loop that this is just dropped into a larger method. This isn't a single function which generates valid certificate numbers. This is a block that gets dropped in line. Similar, but slightly different, blocks are dropped in when numbers need to be validated. There's no single isValidCertificate method.


,

Valerie AuroraHow to avoid supporting sexual predators

[TW: child sex abuse]

Recently, I received an email from a computer security company asking for more information on why I refuse to work with them. My reason? The company was founded by a registered child sex offender who still serves as its CTO, which I found out during my standard client research process.

My first reaction was, “Do I really need to explain why I won’t work with you???” but as I write this, we’re at the part of the Jeffrey Epstein news cycle where we are learning about the people in computer science who supported Epstein—after Epstein pleaded guilty to two counts of “procuring prostitution with a child under 18,” registered as a sex offender, and paid restitution to dozens of victims. As someone who outed her own father as a serial child molester, I can tell you that it is quite common for people to support and help known sexual predators in this way.

I would like to share how I actively avoid supporting sexual predators, as someone who provides diversity and inclusion training, mostly to software companies:

  1. When a new client approaches me, I find the names of the CEO, CTO, COO, board members, and founders—usually on the “About Us” or “Who We Are” or “Founders” page of the company’s web site. Crunchbase and LinkedIn are also useful for this step.
  2. For each of the CEO, CTO, COO, board members, and/or founders, I search their name plus “allegations,” “sexism,” “sexual assault,” “sexual harassment,” and “women.” I do this for the company name too.
  3. If I find out any executives, board members, or founders have been credibly accused of sexual harassment or assault, I refuse to work with that company.
  4. I look up the funders of the company on Crunchbase. If any of their funders are listed on Sexism and Racism in Venture Capital, I give the company extra scrutiny.
  5. If the company agreed to take funding from a firm (or person) after knowing the lead partner(s) were sexual harassers or predators, I refuse to work with that company.

If you don’t have time to do this personally, I recommend hiring or contracting with someone to do it for you.
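
For the software-inclined, the query fan-out in step 2 is also easy to script. A hypothetical sketch in C#, with placeholder names and an arbitrary search engine; it just prints the URLs to work through by hand:

using System;

var people = new[] { "Jane Doe", "John Roe" };  // step 1: executives, board members, founders
var terms  = new[] { "allegations", "sexism", "sexual assault", "sexual harassment", "women" };

foreach (var person in people)
    foreach (var term in terms)
        Console.WriteLine("https://duckduckgo.com/?q=" +
            Uri.EscapeDataString($"\"{person}\" {term}"));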

That’s just part of my research process (I search for other terms, such as “racism”). This has saved me from agreeing to help make money for a sexual predator or harasser many times. Specifically, I’ve turned down 13 out of 303 potential clients for this reason, or about 4% of clients who approached me. To be sure, it has also cost me money—I’d estimate at least $50,000—but I’d like to believe that my reputation and conscience are worth more than that. If you’re not in a position where you can say no to supporting a sexual predator, you have my sympathy and respect, and I hope you can find a way out sooner or later.

Your research process will look different depending on your situation, but the key elements will be:

  1. Assume that sexual predators exist in your field and you don’t know who all of them are.
  2. When you are asked to work with or support someone new, do research to find out if they are a sexual predator.
  3. When you find out someone is probably a sexual predator, refuse to support them.

What do I do if, say, the CEO has been credibly accused of sexual harassment or assault but the company has taken appropriate steps to make amends and heal the harm done to the victims? I don’t know, because I can’t remember a potential client who did that. I’ve had plenty that published a non-apology, forced victims to sign NDAs for trivial sums of money, or (very rarely) fired the CEO but allowed them to keep all or most of their equity, board seat, voting rights, etc. That’s not enough, because the CEO hasn’t shown remorse, made amends, or removed themselves from positions of power.

I don’t think all sexual predators should be ostracized completely, but I do think everyone has a moral responsibility not to help known sexual predators back into positions of power and influence without strong evidence of reform. Power and influence are privileges which should only be granted to people who are unlikely to abuse them, not rights which certain people “deserve” as long as they claim to have reformed. Someone with a history of sexually predatory behavior should be assumed to be dangerous unless exhaustively proven otherwise. One sign of complete reform is that the former sexual predator will themselves avoid and reject situations in which power and access would make sexual abuse easy to resume.

In this specific case, the CTO of this company maintains a public web site which briefly and vaguely mentions the harm done to victims of sex abuse—and then devotes the majority of the text to passionately advocating for the repeal of sex offender registry laws because of the incredible harm they do to the health and happiness of convicted sex offenders. So, no, I don’t think he has changed meaningfully, he is not a safe person to be around, he should not be the CTO of a computer security company, and I should not help him gain more wealth.

Don’t be the person helping the sexual predator insinuate themself back into a position with easy access to victims. If your first instinct is to feel sorry for the powerful and predatory, you need to do some serious work on your sense of empathy. Plenty of people have shared what it’s like to be the victim of sexual harassment and assault; go read their stories and try to imagine the suffering they’ve been through. Then compare that to the suffering of people who occasionally experience moderate consequences for sexually abusing people with less power than themselves. I hope you will adjust your empathy accordingly.

,

Rondam RamblingsFedex: three months and counting

It has now been three months since we shipped a package via Fedex that turned out to be undeliverable (we sent it signature-required, and the recipient, unbeknownst to us, had moved). We expected that in a situation like that, the package would simply be returned to us, but it wasn't, because we paid cash for the original shipment and (again, unbeknownst to us) the shipping cost doesn't include