Planet Russell


Planet Debian: François Marier: Printing hard-to-print PDFs on Linux

I recently found a few PDFs which I was unable to print because they caused insufficient printer memory errors.

I found a detailed explanation of what might be causing this, which pointed the finger at transparent images: a PDF 1.4 feature which apparently requires a more recent version of PostScript than what my printer supports.

Using Okular's Force rasterization option (accessible via the print dialog) does work: it essentially renders everything ahead of time and outputs a big image to be sent to the printer. The quality is not very good, however.

Converting a PDF to DjVu

The best solution I found makes use of a different file format: .djvu

Such files are not PDFs, but can still be opened in Evince and Okular, as well as in the dedicated DjVuLibre application.

As an example, I was unable to print page 11 of this paper. Using pdfinfo, I found that it is in PDF 1.5 format and so the transparency effects could be the cause of the out-of-memory printer error.
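
To reproduce that check, here is a minimal Python sketch that shells out to pdfinfo (from the poppler-utils package); the pdf_version helper is just an illustrative wrapper, not part of poppler:

import subprocess

# Shell out to pdfinfo and pull out the "PDF version" line it prints.
def pdf_version(path):
    out = subprocess.run(["pdfinfo", path], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("PDF version"):
            return line.split(":", 1)[1].strip()
    return None

print(pdf_version("2002.04049.pdf"))  # e.g. "1.5"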

Here's how I converted it to a high-quality DjVu file I could print without problems using Evince:

pdf2djvu -d 1200 2002.04049.pdf > 2002.04049-1200dpi.djvu

Converting a PDF to PDF 1.3

I also tried the DjVu trick on a different unprintable PDF, but it failed to print, even after lowering the resolution to 600dpi:

pdf2djvu -d 600 dow-faq_v1.1.pdf > dow-faq_v1.1-600dpi.djvu

In this case, I used a different technique and simply converted the PDF to version 1.3 (from version 1.6 according to pdfinfo):

ps2pdf13 -r1200x1200 dow-faq_v1.1.pdf dow-faq_v1.1-1200dpi.pdf

This eliminates the problematic transparency and rasterizes the elements that version 1.3 doesn't support.
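
To confirm the downgrade actually took, the same pdfinfo check from earlier can be applied to the output file. A small sketch reusing the illustrative pdf_version helper from the first section:

import subprocess

# Same invocation as the ps2pdf13 command above; dpi feeds the -rXxY option.
def downgrade_to_pdf13(src, dst, dpi=1200):
    subprocess.run(["ps2pdf13", f"-r{dpi}x{dpi}", src, dst], check=True)

downgrade_to_pdf13("dow-faq_v1.1.pdf", "dow-faq_v1.1-1200dpi.pdf")
print(pdf_version("dow-faq_v1.1-1200dpi.pdf"))  # should now report "1.3"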


Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 284.5 work hours were dispatched among 14 paid contributors (the per-contributor April assignments sum to exactly this figure; see the quick check after the list). Their reports are available:
  • Abhijith PA did 10.0h (out of 14h assigned), thus carrying over 4h to May.
  • Adrian Bunk did nothing (out of 28.75h assigned), thus is carrying over 28.75h for May.
  • Ben Hutchings did 26h (out of 20h assigned and 8.5h from March), thus carrying over 2.5h to May.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 6h (out of 6h assigned).
  • Emilio Pozuelo Monfort did not report back about their work so we assume they did nothing (out of 28.75h assigned plus 17.25h from March), thus is carrying over 46h for May.
  • Markus Koschany did 11.5h (out of 28.75h assigned and 38.75h from March), thus carrying over 56h to May.
  • Mike Gabriel did 1.5h (out of 8h assigned), thus carrying over 6.5h to May.
  • Ola Lundqvist did 13.5h (out of 12h assigned and 8.5h from March), thus carrying over 7h to May.
  • Roberto C. Sánchez did 28.75h (out of 28.75h assigned).
  • Sylvain Beucler did 28.75h (out of 28.75h assigned).
  • Thorsten Alteholz did 28.75h (out of 28.75h assigned).
  • Utkarsh Gupta did 24h (out of 24h assigned).
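
For the curious, “dispatched” counts the hours assigned for April itself, excluding carry-overs from March. A quick check, with the assignments transcribed from the list above:

# One entry per contributor in list order, April assignments only
# (March carry-overs excluded).
assigned = [14, 28.75, 20, 10, 18, 6, 28.75, 28.75, 8, 12,
            28.75, 28.75, 28.75, 24]
print(sum(assigned))  # 284.5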

Evolution of the situation

In April we dispatched more hours than ever before, and something else was new too: we had our first (virtual) contributors meeting on IRC! Logs and minutes are available, and we plan to continue doing IRC meetings every other month.
Sadly, one contributor, Hugo Lefeuvre, decided to go inactive in April.
Finally, we would like to remind you that the end of Jessie LTS is coming in less than two months!
In case you missed it (or failed to act), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 4 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Planet Debian: Dirk Eddelbuettel: RcppSimdJson 0.0.5: Updated Upstream

A new RcppSimdJson release with updated upstream simdjson code just arrived on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

This release brings updated upstream code (thanks to Brendan Knapp) plus a new example and minimal tweaks. The full NEWS entry follows.

Changes in version 0.0.5 (2020-05-23)

  • Add parseExample from earlier upstream announcement (Dirk).

  • Synced with upstream (Brendan in #12, closing #11).

  • Updated example parseExample to API changes (Brendan).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on Security: Riding the State Unemployment Fraud ‘Wave’

When a reliable method of scamming money out of people, companies or governments becomes widely known, underground forums and chat networks tend to light up with activity as more fraudsters pile on to claim their share. And that’s exactly what appears to be going on right now as multiple U.S. states struggle to combat a tsunami of phony Pandemic Unemployment Assistance (PUA) claims. Meanwhile, a number of U.S. states are possibly making it easier for crooks by leaking their citizens’ personal data from the very websites the unemployment scammers are using to file bogus claims.

Last week, the U.S. Secret Service warned of “massive fraud” against state unemployment insurance programs, noting that false filings from a well-organized Nigerian crime ring could end up costing the states and federal government hundreds of millions of dollars in losses.

Since then, various online crime forums and Telegram chat channels focused on financial fraud have been littered with posts from people selling tutorials on how to siphon unemployment insurance funds from different states.

Denizens of a Telegram chat channel newly rededicated to stealing state unemployment funds discussing cashout methods.

Yes, for roughly $50 worth of bitcoin, you too can quickly jump on the unemployment fraud “wave” and learn how to swindle unemployment insurance money from different states. The channel pictured above and others just like it are selling different “methods” for defrauding the states, complete with instructions on how best to avoid getting your phony request flagged as suspicious.

Although, at the rate people in these channels are “flexing” — bragging about their fraudulent earnings with screenshots of recent multiple unemployment insurance payment deposits being made daily — it appears some states aren’t doing a whole lot of fraud-flagging.

A still shot from a video a fraudster posted to a Telegram channel overrun with people engaged in unemployment insurance fraud shows multiple $800+ payments in one day from Massachusetts’ Department of Unemployment Assistance (DUA).

A federal fraud investigator who’s helping to trace the source of these crimes and who spoke with KrebsOnSecurity on condition of anonymity said many states have few controls in place to spot patterns in fraudulent filings, such as multiple payments going to the same bank accounts, or filings made for different people from the same Internet address.

In too many cases, he said, the deposits are going into accounts where the beneficiary name does not match the name on the bank account. Worse still, the source said, many states have dramatically pared back the amount of information required to successfully request an unemployment filing.

“The ones we’re seeing worst hit are the states that aren’t asking where you worked,” the investigator said. “It used to be they’d have a whole list of questions about your previous employer, and you had to show you were trying to find work. But now because of the pandemic, there’s no such requirement. They’ve eliminated any controls they had at all, and now they’re just shoveling money out the door based on Social Security number, name, and a few other details that aren’t hard to find.”

CANARY IN THE GOLDMINE

Earlier this week, email security firm Agari detailed a fraud operation tied to a seasoned Nigerian cybercrime group it dubbed “Scattered Canary,” which has been busy of late bilking states and the federal government out of economic stimulus and unemployment payments. Agari said this group has been filing hundreds of successful claims, all effectively using the same email address.

“Scattered Canary uses Gmail ‘dot accounts’ to mass-create accounts on each target website,” Agari’s Patrick Peterson wrote. “Because Google ignores periods when interpreting Gmail addresses, Scattered Canary has been able to create dozens of accounts on state unemployment websites and the IRS website dedicated to processing CARES Act payments for non-tax filers (freefilefillableforms.com).”

Image: Agari.
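
The dot-account trick is easy to demonstrate. Below is a minimal Python sketch of the kind of canonicalization a registration system could apply to spot such duplicates; the normalize_gmail helper is my own illustration, not anything Agari or Google ships:

def normalize_gmail(address):
    """Collapse Gmail 'dot account' variants to one canonical mailbox."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")  # Google ignores dots in the local part
    return f"{local}@{domain}"

# All of these deliver to the same inbox, yet register as "different" users
# on a site that only compares raw strings:
variants = ["badguy@gmail.com", "bad.guy@gmail.com", "b.a.d.g.u.y@gmail.com"]
print({normalize_gmail(v) for v in variants})  # {'badguy@gmail.com'}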

Indeed, the very day the IRS unveiled its site for distributing CARES Act payments last month, KrebsOnSecurity warned that it was very likely to be abused by fraudsters to intercept stimulus payments from U.S. citizens, mainly because the only information required to submit a claim was name, date of birth, address and Social Security number.

Agari notes that since April 29, Scattered Canary has filed at least 174 fraudulent claims for unemployment with the state of Washington.

“Based on communications sent to Scattered Canary, these claims were eligible to receive up to $790 a week for a total of $20,540 over a maximum of 26 weeks,” Peterson wrote. “Additionally, the CARES Act includes $600 in Federal Pandemic Unemployment Compensation each week through July 31. This adds up to a maximum potential loss as a result of these fraudulent claims of $4.7 million.”

STATE WEB SITE WOES

A number of states have suffered security issues with the PUA websites that exposed personal details of citizens filing unemployment insurance claims. Perhaps the most galling example comes from Arkansas, whose site exposed the SSNs, bank account and routing numbers for some 30,000 applicants.

In that instance, The Arkansas Times alerted the state after hearing from a computer programmer who was filing for unemployment on the site and found he could see other applicants’ data simply by changing the site’s URL slightly. State officials reportedly ignored the programmer’s repeated attempts to get them to fix the issue, and when it was covered by the newspaper, the state governor accused the person who found it of breaking the law.

Over the past week, several other states have discovered similar issues with their PUA application sites, including Colorado, Illinois, and Ohio.

Planet Linux Australia: Michael Still: A totally cheating sour dough starter


This is the third in a series of posts documenting my adventures in making bread during the COVID-19 shutdown. I’d like to imagine I was running bread-making science experiments on my kids, but really all I was trying to do was eat some toast.

I’m not sure what it was like in other parts of the world, but during the COVID-19 pandemic Australia suffered a bunch of shortages — toilet paper, flour, and yeast were among those things stores simply didn’t have any stock of. Luckily we’d only just done a Costco shop, so we were OK for toilet paper and flour, but we were definitely getting low on yeast. The obvious answer is a sourdough starter, but I’d never done that thing before.

In the end my answer was to cheat and use this recipe. However, I found the instructions unclear, so here’s what I ended up doing:

Starting off

  • 2 cups of warm water
  • 2 teaspoons of dry yeast
  • 2 cups of bakers flour

Mix these three items together in a plastic container with enough space for the mix to double in size. Place in a warm place (on the bench on top of the dishwasher was our answer), and cover with cloth secured with a rubber band.

Feeding

Once a day you should feed your starter with 1 cup of flour and 1 cup of warm water. Stir thoroughly.

Reducing size

The recipe online says to feed for five days, but the size of my starter was getting out of hand by a couple of days, so I started baking at that point. I’ll describe the baking process in a later post. The early loaves definitely weren’t as good as the more recent ones, but they were still edible.

Hibernation

Once the starter is going, you feed daily and probably need to bake daily to keep the starter’s size under control. That obviously doesn’t work so great if you can’t eat an entire loaf of bread a day. You can hibernate the starter by putting it in the fridge, which means you only need to feed it once a week.

To wake a hibernated starter up, take it out of the fridge and feed it. I do this at 8am. That means I can then start the loaf for baking at about noon, and the starter can either go back in the fridge until next time or stay on the bench being fed daily.

I have noticed that sometimes the starter comes out of the fridge with a layer of dark water on top. It’s worked out OK for us to just ignore that and stir it into the mix as part of the feeding process. Hopefully we won’t die.


Planet Linux Australia: Stewart Smith: Refurbishing my Macintosh Plus

Somewhere in the mid to late 1990s I picked myself up a Macintosh Plus for the sum of $60AUD. At that time there were still computer Swap Meets where old and interesting equipment was around, so I headed over to one at some point (at the St Kilda Town Hall if memory serves) and picked myself up four 1MB SIMMs to boost the RAM of it from the standard 1MB to the insane amount of 4MB. Why? Umm… because I could? The RAM was pretty cheap, and somewhere in the house to this day, I sometimes stumble over the 256KB SIMMs as I just can’t bring myself to get rid of them.

This upgrade probably would have cost close to $2,000 at the system’s release. If the Macintosh system software were better at disk caching you could have easily held the whole 800k of the floppy disk in memory and still run useful software!

One of the annoying things that started with the Macintosh was odd screws and Apple gear being hard to get into. Compare that to, say, the Apple ][, which had handy clips to jump inside whenever. In fitting my massive FOUR MEGABYTES of RAM back in the day, I recall using a couple of Allen keys sticky-taped together to be able to reach in and get the recessed Torx screws. These days, I can just order a Torx bit off Amazon and have it arrive pretty quickly. Well, two Torx bits, one of which is just too short for the job.

My (dusty) Macintosh Plus

One thing had always struck me about it: it never really looked like the photos of the Macintosh Plus I saw in books. An embarrassing number of years later, I learned that a lot can be gleaned from the serial number printed on the underside of the front of the case.

So heading over to the My Old Mac Serial Number Decoder I can find out:

Manufactured in: F => Fremont, California, USA
Year of production: 1985
Week of production: 14
Production number: 3V3 => 4457
Model ID: M0001WP => Macintosh 512K (European Macintosh ED)

Your Macintosh 512K (European Macintosh ED) was the 4457th Mac manufactured during the 14th week of 1985 in Fremont, California, USA.

Pretty cool! So it is certainly a Plus, as the logic board says that, but it’s actually an upgraded 512k! If you think it was madness to have a GUI with only 128k of RAM in the original Macintosh, you’d be right. I do not envy anybody who had one of those.

Some time a decent number of years ago (but not too many, less than 10), I turned on the Mac Plus to see if it still worked. It did! But then… some magic smoke started to come out (which isn’t so good), but the computer kept working! There’s something utterly bizarre about looking at a computer with smoke coming out of it that continues to function perfectly fine.

Anyway, as the smoke was coming out, I decided that it would be an opportune time to turn it off, open doors and windows, and put it away until I was ready to deal with it.

One Global Pandemic Later, and now was the time.

I suspected it was going to be a capacitor somewhere that blew, and figured that I should replace it, and probably preemptively replace all the other electrolytic capacitors that could likely leak and cause problems.

First things first, though: dismantle it and clean everything. First, take the case off. Apple is not new to the game of annoying screws to get into things. I ended up spending $12 on this set on Amazon, as the T10 bit can actually reach the screws holding the case on.

Cathode Ray Tubes are not to be messed with. We’re talking lethal voltages here. It had been many years since electricity went into this thing, so all was good. If this all doesn’t work first time when reassembling it, I’m not exactly looking forward to discharging a CRT and working on it.

The inside of my Macintosh Plus, with lots of grime.

You can see there’s grime everywhere. It’s not the worst in the world, but it’s not great (and kinda sticky). Obviously, this needs to be cleaned! The best way to do that is take a lot of photos, dismantle everything, and clean it a bit at a time.

There are four main electronic components inside a Macintosh Plus:

  1. The CRT itself
  2. The floppy disk drive
  3. The Logic Board (what Mac people call what PC people call the motherboard)
  4. The Analog Board

There’s also some metal structure that keeps some things in place. There are only a few connectors between things, which are pretty easy to remove. If you don’t know how to discharge a CRT and what the dangers of them are, you should immediately go and find out through reading rather than finding out by dying. I would much prefer it if you dyed (because creative fun) rather than died.

Once the floppy connector and the power connector are unplugged, the logic board slides out pretty easily. You can see from the photo below that I have the 4MB of RAM installed and the resistor you need to snip is, well, snipped (but look really closely for that). Also, grime.

Macintosh Plus Logic Board

Cleaning things? Well, there are two ways that I have used (and considering I haven’t yet written the post with “hurray, it all works”, currently take it with a grain of salt until I write that post). One: contact cleaner. Two: detergent.

Macintosh Plus Logic Board (being washed in my sink)

I took the route of cleaning things first, and then doing recapping adventures. So it was some contact cleaner on the boards, and then some soaking with detergent. This actually all worked pretty well.

Logic Board Capacitors:

  • C5, C6, C7, C12, C13 = 33uF 16V 85C (measured at 39uF, 38uF, 38uF, 39uF)
  • C14 = 1uF 50V (measured at 1.2uF and then it fluctuated down to around 1.15uF)

Analog Board Capacitors

  • C1 = 35V 3.9uF (M) measured at 4.37uF
  • C2 = 16V 4700uF SM measured at 4446uF
  • C3 = 16V 220uF +105C measured at 234uF
  • C5 = 10V 47uF 85C measured at 45.6uF
  • C6 = 50V 22uF 85C measured at 23.3uF
  • C10 = 16V 33uF 85C measured at 37uF
  • C11 = 160V 10uF 85C measured at 11.4uF
  • C12 = 50V 22uF 85C measured at 23.2uF
  • C18 = 16V 33uF 85C measured at 36.7uF
  • C24 = 16V 2200uF 105C measured at 2469uF
  • C27 = 16V 2200uF 105C measured at 2171uF (although started at 2190 and then went down slowly)
  • C28 = 16V 1000uF 105C measured at 638uF, then 1037uF, then 1000uF, then 987uF
  • C30 = 16V 2200uF 105C measured at 2203uF
  • C31 = 16V 220uF 105C measured at 236uF
  • C32 = 16V 2200uF 105C measured at 2227uF
  • C34 = 200V 100uF 85C measured at 101.8uF
  • C35 = 200V 100uF 85C measured at 103.3uF
  • C37 = 250V 0.47uF measured at <exploded>. wheee!
  • C38 = 200V 100uF 85C measured at 103.3uF
  • C39 = 200V 100uF 85C measured at 99.6uF (with scorch marks from next door)
  • C42 = 10V 470uF 85C measured at 556uF
  • C45 = 10V 470uF 85C measured at 227uF, then 637uF then 600uF
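
As a rough way to read those numbers, aluminum electrolytic capacitors are commonly rated at about plus or minus 20% tolerance, so a few lines of Python can flag the readings that stand out (a sketch under that tolerance assumption, using the first readings transcribed from the list above):

# (rated uF, first measured uF) for a few caps from the analog board list
caps = {"C2": (4700, 4446), "C24": (2200, 2469),
        "C28": (1000, 638), "C45": (470, 227)}
TOLERANCE = 0.20  # assumed typical +/-20% for aluminum electrolytics

for name, (rated, measured) in caps.items():
    drift = (measured - rated) / rated
    verdict = "within tolerance" if abs(drift) <= TOLERANCE else "suspect"
    print(f"{name}: {measured}uF vs {rated}uF rated ({drift:+.0%}) {verdict}")

This flags C28 and C45 as suspect, matching the odd first readings noted above.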

I’ve ordered an analog board kit from https://console5.com/store/macintosh-128k-512k-plus-analog-pcb-cap-kit-630-0102-661-0462.html and when trying to put them in, I learned that the US Analog board is different to the International Analog board!!! Gah. Dammit.

Note that C30, C32, C38, C39, and C37 were missing from the kit I received (probably due to differences in the US and International boards). I did have an X2 cap (for C37) but it was 0.1uF not 0.47uF. I also had two extra 1000uF 16V caps.

Macintosh Repair and Upgrade Secrets (up to the Mac SE no less!) holds an Appendix with the parts listing for both the US and International Analog boards, and this led me to conclude that they are in fact different boards rather than just a few wires that are different. I am not sure what the “For 120V operation, W12 must be in place” and “for 240V operation, W12 must be removed” writing is about on the International Analog board, but I’m not quite up to messing with that at the moment.

So, I ordered the parts (linked above) and waited (again) to be able to finish re-capping the board.

I found this video (https://youtu.be/H9dxJ7uNXOA) to be a good one for learning a bunch about the insides of compact Macs; I recommend it and several others on his YouTube channel. One interesting thing I learned is that the X2 cap (C37 on the International one) is before the power switch, so it could blow just by having the system plugged in and not turned on! Okay, so I’m kind of assuming that it also applies to the International board, and mine exploded while it was plugged in and switched on, so YMMV.

Additionally, there’s an interesting list of commonly failing parts. Unfortunately, this is also for the US logic board, so the tables in Macintosh Repair and Upgrade Secrets are useful. I’m hoping that I don’t have to replace anything more there, but we’ll see.

But, after the Nth round of parts being delivered….

Note the lack of an exploded capacitor

Yep, that’s where the exploded cap was before. Cleaned up all pretty nicely, actually. Annoyingly, I had to run it all through a step-up transformer as the board is all set for Australian 240V rather than US 120V. This isn’t going to be an everyday computer though, so it’s fine.

Woohoo! It works. While I haven’t found my supply of floppy disks that (at least used to) work, the floppy mechanism also seems to work okay.

Next up: waiting for my Floppy Emu to arrive as it’ll certainly let it boot. Also, it’s now time to rip the house apart to find a floppy disk that certainly should have made its way across the ocean with the move…. Oh, and also to clean up the mouse and keyboard.


Cryptogram: Friday Squid Blogging: Squid Can Edit Their Own Genomes

This is new news:

Revealing yet another super-power in the skillful squid, scientists have discovered that squid massively edit their own genetic instructions not only within the nucleus of their neurons, but also within the axon -- the long, slender neural projections that transmit electrical impulses to other neurons. This is the first time that edits to genetic information have been observed outside of the nucleus of an animal cell.

[...]

The discovery provides another jolt to the central dogma of molecular biology, which states that genetic information is passed faithfully from DNA to messenger RNA to the synthesis of proteins. In 2015, Rosenthal and colleagues discovered that squid "edit" their messenger RNA instructions to an extraordinary degree -- orders of magnitude more than humans do -- allowing them to fine-tune the type of proteins that will be produced in the nervous system.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TED: Listening to nature: The talks of TED2020 Session 1

TED looks a little different this year, but much has also stayed the same. The TED2020 mainstage program kicked off Thursday night with a session of talks, performances and visual delights from brilliant, creative individuals who shared ideas that could change the world — and stories of people who already have. But instead of convening in Vancouver, the TED community tuned in to the live, virtual broadcast hosted by TED’s Chris Anderson and Helen Walters from around the world — and joined speakers and fellow community members on an interactive, TED-developed second-screen platform to discuss ideas, ask questions and give real-time feedback. Below, a recap of the night’s inspiring talks, performances and conversations.

Sharing incredible footage of microscopic creatures, Ariel Waldman takes us below meters-thick sea ice in Antarctica to explore a hidden ecosystem. She speaks at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Ariel Waldman, Antarctic explorer, NASA advisor

Big idea: Seeing microbes in action helps us more fully understand (and appreciate) the abundance of life that surrounds us. 

How: Even in the coldest, most remote place on earth, our planet teems with life. Explorer Ariel Waldman introduces the thousands of organisms that call Antarctica home — and they’re not all penguins. Leading a five-week expedition, Waldman descended beneath the sea ice and scaled glaciers to investigate and film myriad microscopic, alien-looking creatures. Her footage is nothing short of amazing — like a wildlife documentary at the microbial level! From tiny nematodes to “cuddly” water bears, mini sea shrimp to geometric bugs made of glass, her camera lens captures these critters in color and motion, so we can learn more about their world and ours. Isn’t nature brilliant?

Did you know? Tardigrades, also known as water bears, live almost everywhere on earth and can even survive in the vacuum of space. 


Tracy Edwards, Trailblazing sailor

Big Idea: Despite societal limits, girls and women are capable of creating the future of their dreams. 

How: Though competitive sailing is traditionally dominated by men, women sailors have proven they are uniquely able to navigate the seas. In 1989, Tracy Edwards led the first all-female sailing crew in the Whitbread Round the World Yacht Race. Though hundreds of companies refused to sponsor the team and bystanders warned that an all-female team was destined to fail, Edwards knew she could trust in the ability of the women on her team. Despite the tremendous odds, they completed the trip and finished second in their class. The innovation, kindness and resourcefulness of the women on Edwards’s crew enabled them to succeed together, upending all expectations of women in sailing. Now, Edwards advocates for girls and women to dive into their dream fields and become the role models they seek to find. She believes women should understand themselves as innately capable, that the road to education has infinite routes and that we all have the ability to take control of our present and shape our futures.

Quote of the talk: “This is about teaching girls: you don’t have to look a certain way; you don’t have to feel a certain way; you don’t have to behave a certain way. You can be successful. You can follow your dreams. You can fight for them.”


Classical musicians Sheku Kanneh-Mason and Isata Kanneh-Mason perform intimate renditions of Sergei Rachmaninov’s “Muse” and Frank Bridge’s “Spring Song” at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Virtuosic cellist Sheku Kanneh-Mason, whose standout performance at the wedding of Prince Harry and Meghan Markle made waves with music fans across the world, joins his sister, pianist Isata Kanneh-Mason, for an intimate living room performance of “Muse” by Sergei Rachmaninov and “Spring Song” by Frank Bridge.

And for a visual break, podcaster and design evangelist Debbie Millman shares an animated love letter to her garden — inviting us to remain grateful that we are still able to make things with our hands.


Dallas Taylor, Host/creator of Twenty Thousand Hertz podcast

Big idea: There is no such thing as true silence.

Why? In a fascinating challenge to our perceptions of sound, Dallas Taylor tells the story of a well-known, highly-debated and perhaps largely misunderstood piece of music penned by composer John Cage. Written in 1952, 4′33″ is more experience than expression, asking the listener to focus on and accept things the way they are, through three movements of rest — or, less technically speaking, silence. In its “silence,” Cage invites us to contemplate the sounds that already exist when we’re ready to listen, effectively making each performance a uniquely meditative encounter with the world around us. “We have a once in a lifetime opportunity to reset our ears,” says Taylor, as he welcomes the audience to settle into the first movement of 4’33” together. “Listen to the texture and rhythm of the sounds around you right now. Listen for the loud and soft, the harmonic and dissonant … enjoy the magnificence of hearing and listening.”

Quote of the talk: “Quietness is not when we turn our minds off to sound, but when we really start to listen and hear the world in all of its sonic beauty.”


Dubbed “the woman who redefined man” by her biographer, Jane Goodall has changed our perceptions of primates, people and the connection between the two. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Jane Goodall, Primatologist, conservationist

Big idea: Humanity’s long-term livelihood depends on conservation.

Why? After years in the field reinventing the way the world thinks about chimpanzees, their societies and their similarities to humans, Jane Goodall began to realize that as habitats shrink, humanity loses not only resources and life-sustaining biodiversity but also our core connection to nature. Worse still, as once-sequestered animals are pulled from their environments and sold and killed in markets, the risk of novel diseases like COVID-19 jumping into the human population rises dramatically. In conversation with head of TED Chris Anderson, Goodall tells the story of a revelatory scientific conference in 1986, where she awakened to the sorry state of global conservation and transformed from a revered naturalist into a dedicated activist. By empowering communities to take action and save their neighboring natural habitats all over the world, Goodall’s institute now gives communities tools they need to protect their environment. As a result of her work, conservation has become part of the DNA of cultures from China to countries throughout Africa, and is leading to visible transformations of once-endangered forests and habitats.

Quote of the talk: “Every day you live, you make an impact on the planet. You can’t help making an impact … If we all make ethical choices, then we start moving towards a world that will be not quite so desperate to leave for our great-grandchildren.”

TED: Fragility, resilience and restoration at TED2020: The Prequel

It’s a new, strange, experimental day for TED. In a special Earth Day event, TED2020: The Prequel brought the magic of the TED conference to the virtual stage, inviting TED2020 community members to gather for three sessions of talks and engaging, innovative opportunities to connect. Alongside world-changing ideas from leaders in science, political strategy and environmental activism, attendees also experienced the debut of an interactive, TED-developed second-screen technology that gave them the opportunity to discuss ideas, ask questions of speakers and give real-time (emoji-driven) feedback to the stage. Below, a recap of the day’s inspiring talks, performances and conversations.

Session 1: Fragility

The opening session featured thinking on the fragile state of the present — and some hopes for the future.

Larry Brilliant, epidemiologist

Big idea: Global cooperation is the key to ending the novel coronavirus pandemic.

How? In a live conversation with head of TED Chris Anderson, epidemiologist Larry Brilliant reviews the global response to SARS-CoV-2 and reflects on what we can do to end the outbreak. While scientists were able to detect and identify the virus quickly, Brilliant says, political incompetence and fear delayed action. Discussing the deadly combination of a short incubation period with a high transmissibility rate, he explains how social distancing doesn’t stamp out the disease but rather slows its spread, giving us the time needed to execute crucial contact tracing and develop a vaccine. Brilliant shares how scientists are collaborating to speed up the vaccine timeline by running multiple processes (like safety testing and manufacturing) in parallel, rather than in a time-consuming sequential process. And he reminds us that to truly conquer the pandemic, we must work together across national boundaries and political divides. Watch the conversation on TED.com » 

Quote of the talk: “This is what a pandemic forces us to realize: we are all in it together, we need a global solution to a global problem. Anything less than that is unthinkable.”


Now is a time “to be together rather than to try to pull the world apart and crawl back into our own nationalistic shells,” says Huang Hung.

Huang Hung, writer, publisher

Big idea: Individual freedom as an abstract concept in a pandemic is meaningless. It’s time for the West to take a step toward the East.

How? By embracing and prizing collective responsibility. In conversation with TED’s head of curation, Helen Walters, writer and publisher Huang Hung discusses how the Chinese people’s inherent trust in their government to fix problems (even when the solutions are disliked) played out with COVID-19, the handling of coronavirus whistleblower Dr. Li Wenliang and what, exactly, “wok throwing” is. What seems normal and appropriate to the Chinese, Hung says — things like contact tracing and temperature checks at malls — may seem surprising and unfamiliar to Westerners at first, but these tools can be our best bet to fight a pandemic. What’s most important now is to think about the collective, not the individual. “It is a time to be together rather than to try to pull the world apart and crawl back into our own nationalistic shells,” she says.

Fun fact: There’s a word — 乖, or “guai” — that exists only in Chinese: it means a child who listens to their parents.


Watch Oliver Jeffers’s TED Talk, “An ode to living on Earth,” at go.ted.com/oliverjeffers.

Oliver Jeffers, artist, storyteller

Big idea: In the face of infinite odds, 7.5 billion of us (and counting) find ourselves here, on Earth, and that shared existence is the most important thing we have.

Why? In a poetic effort to introduce life on Earth to someone who’s never been here before, artist Oliver Jeffers wrote his newborn son a letter (which grew into a book, and then a sculpture) full of pearls of wisdom on our shared humanity. Alongside charming, original illustrations, he gives some of his best advice for living on this planet. Jeffers acknowledges that, in the grand scheme of things, we know very little about existence — except that we are experiencing it together. And we should relish that connection. Watch the talk on TED.com »

Quote of the talk: “‘For all we know,’ when said as a statement, means the sum total of all knowledge. But ‘for all we know’ when said another way, means that we do not know at all. This is the beautiful, fragile drama of civilization. We are the actors and spectators of a cosmic play that means the world to us here but means nothing anywhere else.”


Musical interludes from 14-year-old prodigy Lydian Nadhaswaram, who shared an energetic, improvised version of Gershwin’s “Summertime,” and musician, singer and songwriter Sierra Hull, who played her song “Beautifully Out of Place.”

 

Session 2: Resilience

Session 2 focused on The Audacious Project, a collaborative funding initiative housed at TED that’s unlocking social impact on a grand scale. The session saw the debut of three 2020 Audacious grantees — Crisis Text Line, The Collins Lab and ACEGID — that are spearheading bold and innovative solutions to the COVID-19 pandemic. Their inspirational work on the front lines is delivering urgent support to help the most vulnerable through this crisis.

Pardis Sabeti and Christian Happi, disease researchers

Big idea: Combining genomics with new information technologies, Sentinel — an early warning system that can detect and respond to emerging viral threats in real-time — aims to radically change how we catch and control outbreaks. With the novel coronavirus pandemic, Sentinel is pivoting to become a frontline responder to COVID-19.

How? From advances in the field of genomics, the team at Sentinel has developed two tools to detect viruses, track outbreaks and watch for mutations. First is Sherlock, a new method to test viruses with simple paper strips — and identify them within hours. The second is Carmen, which enables labs to test hundreds of viruses simultaneously, massively increasing diagnostic ability. By pairing these tools with mobile and cloud-based technologies, Sentinel aims to connect health workers across the world and share critical information to preempt pandemics. As COVID-19 sweeps the globe, the Sentinel team is helping scientists detect the virus quicker and empower health workers to connect and better contain the outbreak. See what you can do for this idea »

Quote of the talk: “The whole idea of Sentinel is that we all stand guard over each other, we all watch. Each one of us is a sentinel.”


Jim Collins, bioengineer

Big idea: AI is our secret weapon against the novel coronavirus.

How? Bioengineer Jim Collins rightly touts the promise and potential of technology as a tool to discover solutions to humanity’s biggest molecular problems. Prior to the coronavirus pandemic, his team combined AI with synthetic biology data, seeking to avoid a similar battle that’s on the horizon: superbugs and antibiotic resistance. But in the shadow of the present global crisis, they pivoted these technologies to help defeat the virus. They have made strides in using machine learning to discover new antiviral compounds and develop a hybrid protective mask-diagnostic test. Thanks to funding from The Audacious Project, Collins’s team will develop seven new antibiotics over seven years, with their immediate focus being treatments to help combat bacterial infections that occur alongside SARS-CoV-2. See what you can do for this idea »

Quote of the talk: “Instead of looking for a needle in a haystack, we can use the giant magnet of computing power to find many needles in multiple haystacks simultaneously.”


“This will be strangers helping strangers around the world — like a giant global love machine,” says Crisis Text Line CEO Nancy Lublin, outlining the expansion of the crisis intervention platform.

Nancy Lublin, health activist

Big idea: Crisis Text Line, a free 24-hour service that connects with people via text message, delivers crucial mental health support to those who need it. Now they’re going global.

How? Using mobile technology, machine learning and a large distributed network of volunteers, Crisis Text Line helps people in times of crisis, no matter the situation. Here’s how it works: If you’re in the United States or Canada, you can text HOME to 741741 and connect with a live, trained Crisis Counselor, who will provide confidential help via text message. (Numbers vary for the UK and Ireland; find them here.) The not-for-profit launched in August 2013 and within four months had expanded to all 274 area codes in the US. Over the next two-and-a-half years, they’re committing to providing aid to anyone who needs it not only in English but also in Spanish, Portuguese, French and Arabic — covering 32 percent of the globe. Learn how you can join the movement to spread empathy across the world by becoming a Crisis Counselor. See what you can do for this idea »

Quote of the talk: “This will be strangers helping strangers around the world — like a giant global love machine.”


Music and interludes from Damian Kulash and OK Go, who showed love for frontline pandemic workers with the debut of a special quarantine performance, and David Whyte, who recited his poem “What to Remember When Waking,” inviting us to celebrate that first, hardly-noticed moment when we wake up each day. “What you can plan is too small for you to live,” Whyte says.

 

Session 3: Restoration

The closing session considered ways to restore our planet’s health and work towards a beautiful, clean, carbon-free future.

Watch Tom Rivett-Carnac’s TED Talk, “How to shift your mindset and choose your future,” at go.ted.com/tomrivettcarnac.

Tom Rivett-Carnac, political strategist

Big idea: We need stubborn optimism coupled with action to meet our most formidable challenges.

How: Speaking from the woods outside his home in England, political strategist Tom Rivett-Carnac addresses the loss of control and helplessness we may feel as a result of overwhelming threats like climate change. Looking to leaders from history who have blazed the way forward in dark times, he finds that people like Rosa Parks, Winston Churchill and Mahatma Gandhi had something in common: stubborn optimism. This mindset, he says, is not naivety or denial but rather a refusal to be defeated. Stubborn optimism, when paired with action, can infuse our efforts with meaning and help us choose the world we want to create. Watch the talk on TED.com »

Quote of the talk: “This stubborn optimism is a form of applied love … and it is a choice for all of us.”


Kristine Tompkins, Earth activist, conservationist

Big idea: Earth, humanity and nature are all interconnected. To restore us all back to health, let’s “rewild” the world. 

Why? The disappearance of wildlife from its natural habitat is a problem to be met with action, not nostalgia. Activist and former Patagonia CEO Kristine Tompkins decided she would dedicate the rest of her life to that work. By purchasing privately owned wild habitats, restoring their ecosystems and transforming them into protected national parks, Tompkins shows the transformational power of wildlands philanthropy. She urgently spreads the importance of this kind of “rewilding” work — and shows that we all have a role to play. “The power of the absent can’t help us if it just leads to nostalgia or despair,” she says. “It’s only useful if it motivates us toward working to bring back what’s gone missing.”

Quote of the talk: “Every human life is affected by the actions of every other human life around the globe. And the fate of humanity is tied to the health of the planet. We have a common destiny. We can flourish or we can suffer, but we’re going to be doing it together.”


Music and interludes from Amanda Palmer, who channels her inner Gonzo with a performance of “I’m Going To Go Back There Someday” from The Muppet Movie; Baratunde Thurston, who took a moment to show gratitude for Earth and reflect on the challenge humanity faces in restoring balance to our lives; singer-songwriter Alice Smith, who gives a hauntingly beautiful vocal performance of her original song “The Meaning,” dedicated to Mother Earth; and author Neil Gaiman, reading an excerpt about the fragile beauty that lies at the heart of life.

TED: Conversations on rebuilding a healthy economy: Week 1 at TED2020

To kick off TED2020, leaders in business, finance and public health joined the TED community for lean-forward conversations to answer the question: “What now?” Below, a recap of the fascinating insights they shared.

“If you don’t like the pandemic, you are not going to like the climate crisis,” says Kristalina Georgieva, Managing Director of the International Monetary Fund. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 18, 2020. (Photo courtesy of TED)

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF)

Big idea: The coronavirus pandemic shattered the global economy. To put the pieces back together, we need to make sure money is going to countries that need it the most — and that we rebuild financial systems that are resilient to shocks.

How? Kristalina Georgieva is encouraging an attitude of determined optimism to lead the world toward recovery and renewal amid the economic fallout of COVID-19. The IMF has one trillion dollars to lend — it’s now deploying these funds to areas hardest hit by the pandemic, particularly in developing countries, and it’s also put a debt moratorium into effect for the poorest countries. Georgieva admits recovery is not going to be quick, but she thinks that countries can emerge from this “great transformation” stronger than before if they build resilient, disciplined financial systems. Within the next ten years, she hopes to see positive shifts towards digital transformation, more equitable social safety nets and green recovery. And as the environment recovers while the world grinds to a halt, she urges leaders to maintain low carbon footprints — particularly since the pandemic foreshadows the devastation of global warming. “If you don’t like the pandemic, you are not going to like the climate crisis,” Georgieva says. Watch the interview on TED.com »


“I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” says Dan Schulman, president and CEO of PayPal. He speaks with TED business curator Corey Hajim at TED2020: Uncharted on May 19, 2020. (Photo courtesy of TED)

Dan Schulman, President and CEO of PayPal

Big idea: Employee satisfaction and consumer trust are key to building the economy back better.

How? A company’s biggest competitive advantage is its workforce, says Dan Schulman, explaining how Paypal instituted a massive reorientation of compensation to meet the needs of its employees during the pandemic. The ripple of benefits of this shift have included increased productivity, financial health and more trust. Building further on the concept of trust, Schulman traces how the pandemic has transformed the managing and moving of money — and how it will require consumers to renew their focus on privacy and security. And he shares thoughts on the new roles of corporations and CEOs, the cashless economy and the future of capitalism. “I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” Schulman says. “For vulnerable populations, just because you pay at the market [rate] doesn’t mean that they have financial health or financial wellness. And I think everyone should know whether or not their employees have the wherewithal to be able to save, to withstand financial shocks and then really understand what you can do about it.”


Biologist Uri Alon shares a thought-provoking idea on how we could get back to work: a two-week cycle of four days at work followed by 10 days of lockdown, which would cut the virus’s reproductive rate. He speaks with head of TED Chris Anderson at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Uri Alon, Biologist

Big idea: We might be able to get back to work by exploiting one of the coronavirus’s key weaknesses. 

How? By adopting a two-week cycle of four days at work followed by 10 days of lockdown, bringing the virus’s reproductive rate (R₀ or R naught) below one. The approach is built around the virus’s latent period: the three-day delay (on average) between when a person gets infected and when they start spreading the virus to others. So even if a person got sick at work, they’d reach their peak infectious period while in lockdown, limiting the virus’s spread — and helping us avoid another surge. What would this approach mean for productivity? Alon says that by staggering shifts, with groups alternating their four-day work weeks, some industries could maintain (or even exceed) their current output. And having a predictable schedule would give people the ability to maximize the effectiveness of their in-office work days, using the days in lockdown for more focused, individual work. The approach can be adopted at the company, city or regional level, and it’s already catching on, notably in schools in Austria.
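
To see why the three-day latent period carries the scheme, here is a toy back-of-the-envelope sketch (my own illustration rather than Alon’s actual model; it assumes a fixed 3-day latency and roughly a 4-day window of peak infectiousness):

# 14-day cycle: days 0-3 are work days, days 4-13 are lockdown days.
LATENT_DAYS = 3      # average delay before an infected person is infectious
INFECTIOUS_DAYS = 4  # assumed rough window of peak infectiousness

for caught_on in range(4):  # infected on one of the four work days
    window = range(caught_on + LATENT_DAYS,
                   caught_on + LATENT_DAYS + INFECTIOUS_DAYS)
    at_work = sum(1 for day in window if day % 14 < 4)
    print(f"infected on work day {caught_on}: "
          f"{at_work}/{INFECTIOUS_DAYS} peak-infectious days fall on work days")

Even in the worst case (infected on the first work day), only one peak-infectious day lands back at the office; the rest are spent in lockdown, which is what pushes the effective reproductive rate down.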


“The secret sauce here is good, solid public health practice … this one was a bad one, but it’s not the last one,” says Georges C. Benjamin, Executive Director of the American Public Health Association. He speaks with TED science curator David Biello at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Georges C. Benjamin, Executive Director of the American Public Health Association

Big Idea: We need to invest in a robust public health care system to lead us out of the coronavirus pandemic and prevent the next outbreak.

How: The coronavirus pandemic has tested the public health systems of every country around the world — and, for many, exposed shortcomings. Georges C. Benjamin details how citizens, businesses and leaders can put public health first and build a better health structure to prevent the next crisis. He envisions a well-staffed and equipped governmental public health entity that runs on up-to-date technology to track and relay information in real-time, helping to identify, contain, mitigate and eliminate new diseases. Looking to countries that have successfully lowered infection rates, such as South Korea, he emphasizes the importance of early and rapid testing, contact tracing, self-isolation and quarantining. Our priority, he says, should be testing essential workers and preparing now for a spike of cases during the summer hurricane and fall flu seasons. “The secret sauce here is good, solid public health practice,” Benjamin says. “We should not be looking for any mysticism or anyone to come save us with a special pill … because this one was a bad one, but it’s not the last one.”

Worse Than Failure: Error'd: Rest in &;$(%{>]$73!47;£*#’v\

"Should you find yourself at a loss for words at the loss of a loved one, there are other 'words' you can try," Steve M. writes.

 

"Cool! I can still use the premium features for -3 days! Thanks, Mailjet!" writes Thomas.

 

David C. wrote, "In this time of virus outbreak, we all know you've been to the doctor so don't try and lie about it."

 

Gavin S. wrote, "I guess Tableau sets a low bar for its Technical Program Managers?"

 

"Ubutuntu: For when your Linux desktop isn't frilly enough!" Stuart L. wrote.

 

"Per Dropbox's rules, this prompt valid only for strings with a length of 5 that are greater than or equal to 6," Robert H. writes.

 


Planet Debian: Steve Kemp: Updated my linux-security-modules for the Linux kernel

Almost three years ago I wrote my first linux-security-module, inspired by a comment I read on LWN.

I did a little more learning/experimentation and actually produced a somewhat useful LSM, which allows you to restrict command-execution via the use of a user-space helper:

  • Whenever a user tries to run a command the LSM-hook receives the request.
  • Then it executes a userspace binary to decide whether to allow that or not (!)

Because the detection is done in userspace, writing your own custom rules is both safe and easy. No need to touch the kernel any further!
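
To give a flavour of that, the helper can be any executable, even a script. Here is a hypothetical helper in Python; the argv convention (UID, then program path) is invented for this sketch, so check the module's documentation for the real interface:

#!/usr/bin/env python3
# Hypothetical can-exec policy helper: exit 0 to allow the exec, non-zero
# to deny it. The argument convention here is assumed, not the real one.
import sys

uid = int(sys.argv[1])
program = sys.argv[2]

ALLOWED_FOR_OTHERS = {"/bin/ls", "/usr/bin/id", "/bin/cat"}

if uid == 0:
    sys.exit(0)  # root may run anything
sys.exit(0 if program in ALLOWED_FOR_OTHERS else 1)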

Yesterday I rebased all the modules so that they work against the latest stable kernel 5.4.22 in #7.

The last time I'd touched them they were built against 5.1, which was itself a big jump forwards from the 4.16.7 version I'd initially used.

Finally I updated the can-exec module to make it gated, which means you can turn it on, but not turn it off without a reboot. That was an obvious omission from the initial implementation #11.

Anyway updated code is available here:

I'd kinda like to write more similar things, but I lack inspiration.

Planet Debian: Bits from Debian: Debian welcomes the 2020 GSOC interns


We are very excited to announce that Debian has selected nine interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of projects, students, and details of the tasks to be performed.


Project: Android SDK Tools in Debian

  • Student(s): Manas Kashyap, Raman Sarda, and Samyak-jn

Deliverables of the project: Make the entire Android toolchain, Android Target Platform Framework, and SDK tools available in the Debian archives.


Project: Packaging and Quality assurance of COVID-19 relevant applications

  • Student: Nilesh

Deliverables of the project: Quality assurance including bug fixing, continuous integration tests and documentation for all Debian Med applications that are known to be helpful to fight COVID-19.


Project: BLAS/LAPACK Ecosystem Enhancement

  • Student: Mo Zhou

Deliverables of the project: Better environment, documentation, policy, and lintian checks for BLAS/LAPACK.


Project: Quality Assurance and Continuous integration for applications in life sciences and medicine

  • Student: Pranav Ballaney

Deliverables of the project: Continuous integration tests for all Debian Med applications, QA review, and bug fixes.


Project: Systemd unit translator

  • Student: K Gopal Krishna

Deliverables of the project: A systemd unit to OpenRC init script translator. Updated OpenRC package into Debian Unstable.


Project: Architecture Cross-Grading Support in Debian

  • Student: Kevin Wu

Deliverables of the project: Evaluate, test, and develop tools to evaluate cross-grade checks for system and user configuration.


Project: Upstream/Downstream cooperation in Ruby

  • Student: utkarsh2102

Deliverables of the project: Create a guide for rubygems.org on good practices for upstream maintainers, develop a tool that can detect problems and, if possible, fix those errors automatically. Establish good documentation, and design the tool to be extensible for other languages.


Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.


Cryptogram: Ann Mitchell, Bletchley Park Cryptanalyst, Dies

Worse Than Failure: CodeSOD: Checking Your Options

If nulls are a “billion dollar mistake”, then optional/nullable values are the $50 of material from the hardware store that you use to cover up that mistake. It hasn’t really fixed anything, but if you’re handy, you can avoid worrying too much about null references.

D. Dam Wichers found some “interesting” Java code that leverages optionals, and combines them with the other newish Java feature that everyone loves to misuse: streams.

First, let’s take a look at the “right” way to do this though. The code needs to take a list of active sessions, filter out any older than a certain threshold, and then summarize them together into a single composite session object. This is a pretty standard filter/reduce scenario, and in Java, you might write it something like this:

return sessions.stream()
  .filter(this::filterOldSessions)
  .reduce(this::reduceByStatus);

The this::… syntax is Java’s way of passing references to methods around, which isn’t a replacement for lambdas but is often easier to use in Java. The stream call starts a stream builder, and then we attach the filter and reduce operations. One of the key advantages here is that this can be lazily evaluated, so we haven’t actually filtered yet. This also might not actually return anything, so the result is implicitly wrapped in an Optional type.

With the “right” way firmly in mind, let’s look at the body of a method D. Dam found.

   Optional<CachedSession> theSession;

   theSession = sessions.stream()
                     .filter(session -> filterOldSessions(session))
                     .reduce((first, second) -> reduceByStatus(first, second));

   if (theSession.isPresent()) {
        return Optional.of(theSession.get());
   } else {
        return Optional.empty();
   }

This code isn’t wrong; it just highlights a developer unfamiliar with their tools. First, note the use of explicit lambdas instead of the this::… syntax. It’s functionally the same, but it’s wordier and harder to read.

The real confusion, though, comes after they’ve gotten the result. They understand that the stream operation has returned an Optional, so they check if that Optional isPresent, that is, if it has a value. If it does, they get the value and wrap it in a new Optional (Optional.of is a static factory method which generates new Optionals). Otherwise, if it’s empty, they return an empty Optional. Had they simply returned the result of the stream operation, they would have gotten exactly the same behavior.
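In other words, assuming the method does nothing else, the entire isPresent dance collapses to a single line:

   return theSession;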

It’s always frustrating to see this kind of code. It’s a developer who is so close to getting it, but who just isn’t quite there yet. That said, it’s not all bad, as D. Dam points out:

In defense of the original code: it is a little more clear that an Optional is setup properly and returned.

I’m not sure that it’s necessary to make that clear, but this code isn’t bad, it’s just annoying. It’s the kind of thing that you need to bring up in a code review, but somebody’s going to think you’re nit-picking, and when you start using words like readability, there’ll always be a manager who just wants this commit in production yesterday and says, “Readability is different for everyone, it’s fine.”

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

TED“TEDx SHORTS”, a TED original podcast hosted by actress Atossa Leoni, premieres May 18

Launching on Monday, May 18, TED’s new podcast TEDx SHORTS gives listeners a quick and meaningful taste of curiosity, skepticism, inspiration and action drawn from TEDx Talks. In less than 10 minutes, host Atossa Leoni guides listeners through fresh perspectives, inspiring stories and surprising information from some of the most compelling TEDx Talks. 

TEDx events are organized and run by a passionate community of independent volunteers who are at the forefront of giving a platform to global voices and sharing new ideas that spark conversations in their local areas. Since 2009, there have been more than 28,000 independently organized TEDx events in over 170 countries across the world. TEDx organizers have given voice to some of the world’s most recognized speakers, including Brené Brown and Greta Thunberg. 

TEDx SHORTS host and actress Atossa Leoni is known for her roles in the award-winning television series Homeland and the film adaptation of The Kite Runner, based on Khaled Hosseini’s best-selling novel. Atossa is fluent in five languages and is recognized for her work in promoting international human rights and women’s rights.

“Every day, TEDx Talks surface new ideas, research and perspectives from around the world,” says Jay Herratti, Executive Director of TEDx. “With TEDx SHORTS, we’ve curated short excerpts from some of the most thought-provoking and inspiring TEDx Talks so that listeners can discover them in bite-sized episodes.”

Produced by TED in partnership with PRX, TEDx SHORTS is one of TED’s seven original podcasts, which also include The TED Interview, TED Talks Daily, TED en Español, Sincerely, X, WorkLife with Adam Grant and TED Radio Hour. TED’s podcasts are downloaded more than 420 million times annually.

TEDx SHORTS debuts Monday, May 18 on Apple Podcasts or wherever you like to listen to podcasts.

CryptogramBart Gellman on Snowden

Bart Gellman's long-awaited (at least by me) book on Edward Snowden, Dark Mirror: Edward Snowden and the American Surveillance State, will finally be published in a couple of weeks. There is an adapted excerpt in the Atlantic.

It's an interesting read, mostly about the government surveillance of him and other journalists. He speaks about an NSA program called FIRSTFRUITS that specifically spies on US journalists. (This isn't news; we learned about this in 2006. But there are lots of new details.)

One paragraph in the excerpt struck me:

Years later Richard Ledgett, who oversaw the NSA's media-leaks task force and went on to become the agency's deputy director, told me matter-of-factly to assume that my defenses had been breached. "My take is, whatever you guys had was pretty immediately in the hands of any foreign intelligence service that wanted it," he said, "whether it was Russians, Chinese, French, the Israelis, the Brits. Between you, Poitras, and Greenwald, pretty sure you guys can't stand up to a full-fledged nation-state attempt to exploit your IT. To include not just remote stuff, but hands-on, sneak-into-your-house-at-night kind of stuff. That's my guess."

I remember thinking the same thing. It was the summer of 2013, and I was visiting Glenn Greenwald in Rio de Janeiro. This was just after Greenwald's partner was detained in the UK trying to ferry some documents from Laura Poitras in Berlin back to Greenwald. It was an opsec disaster; they would have been much more secure if they'd emailed the encrypted files. In fact, I told them to do that, every single day. I wanted them to send encrypted random junk back and forth constantly, to hide when they were actually sharing real data.

As soon as I saw their house I realized exactly what Ledgett said. I remember standing outside the house, looking into the dense forest for TEMPEST receivers. I didn't see any, which only told me they were well hidden. I guessed that black-bag teams from various countries had already been all over the house when they were out for dinner, and wondered what would have happened if teams from different countries bumped into each other. I assumed that all the countries Ledgett listed above -- plus the US and a few more -- had a full take of what Snowden gave the journalists. These journalists against those governments just wasn't a fair fight.

I'm looking forward to reading Gellman's book. I'm kind of surprised no one sent me an advance copy.

TED“Pindrop,” a TED original podcast hosted by filmmaker Saleem Reshamwala, premieres May 27

TED launches Pindrop — its newest original podcast — on May 27. Hosted by filmmaker Saleem Reshamwala, Pindrop will take listeners on a journey across the globe in search of the world’s most surprising and imaginative ideas. It’s not a travel show, exactly. It’s a deep dive into the ideas that shape a particular spot on the map, brought to you by local journalists and creators. From tiny islands to megacities, each episode is an opportunity to visit a new location — Bangkok, Mantua Township, Nairobi, Mexico City, Oberammergau — to find out: If this place were to give a TED Talk, what would it be about?

With Saleem as your guide, you’ll hear stories of police officers on motorbikes doubling as midwives in Bangkok, discover a groundbreaking paleontology site behind a Lowe’s in New Jersey’s Mantua Township, learn about Nairobi’s Afrobubblegum art movement and more. With the guidance of local journalists and TED Fellows, Pindrop gives listeners a unique lens into a spectrum of fascinating places  — an important global connection during this time of travel restrictions.

“My family is from all over, and I’ve spent a lot of my life moving around,” said Saleem. “I’ve always wanted to work on something that captured the feeling of diving deep into conversation in a place you’ve never been before, where you’re getting hit by new ideas and you just feel more open to the world. Pindrop is a go at recreating that.”

Produced by TED and Magnificent Noise, Pindrop is one of TED’s nine original podcasts, which also include TEDx SHORTS, Checking In with Susan David, WorkLife with Adam Grant, The TED Interview, TED Talks Daily, TED en Español, Sincerely, X and TED Radio Hour. TED’s podcasts are downloaded more than 420 million times annually.

TED strives to tell partner stories in the form of authentic, story-driven content developed in real time and aligned with the editorial process — finding and exploring brilliant ideas from all over the world. Pindrop is made possible with support from Women Will, a Grow with Google program. Working together, we’re spotlighting women who are finding unique ways of impacting their communities. Active in 48 countries, this Grow with Google program helps inspire, connect and educate millions of women.

Pindrop launches May 27 for a five-episode run, with five additional episodes this fall. New 30-minute episodes air weekly and are available on Apple Podcasts, Spotify and wherever you like to listen to podcasts.

CryptogramCriminals and the Normalization of Masks

I was wondering about this:

Masks that have made criminals stand apart long before bandanna-wearing robbers knocked over stagecoaches in the Old West and ski-masked bandits held up banks now allow them to blend in like concerned accountants, nurses and store clerks trying to avoid a deadly virus.

"Criminals, they're smart and this is a perfect opportunity for them to conceal themselves and blend right in," said Richard Bell, police chief in the tiny Pennsylvania community of Frackville. He said he knows of seven recent armed robberies in the region where every suspect wore a mask.

[...]

Just how many criminals are taking advantage of the pandemic to commit crimes is impossible to estimate, but law enforcement officials have no doubt the numbers are climbing. Reports are starting to pop up across the United States and in other parts of the world of crimes pulled off in no small part because so many of us are now wearing masks.

In March, two men walked into Aqueduct Racetrack in New York wearing the same kind of surgical masks as many racing fans there and, at gunpoint, robbed three workers of a quarter-million dollars they were moving from gaming machines to a safe. Other robberies involving suspects wearing surgical masks have occurred in North Carolina, and Washington, D.C, and elsewhere in recent weeks.

The article is all anecdote and no real data. But this is probably a trend.

Worse Than FailureCodeSOD: A Maskerade

Josh was writing some code to interact with an image sensor. “Fortunately” for Josh, a co-worker had already written a nice large pile of utility methods in C to make this “easy”.

So, when Josh wanted to know if the sensor was oriented in landscape or portrait (or horizontal/vertical), there was a handy method to retrieve that information:

// gets the sensor orientation
// 0 = horizontal, 1 = vertical
uint8_t get_sensor_orient(void);

Josh tried that out, and it correctly reported horizontal. Then, he switched the sensor into vertical, and it incorrectly reported horizontal. In fact, no matter what he did, get_sensor_orient returned 0. After trying to diagnose problems with the sensor, with the connection to the sensor, and so on, Josh finally decided to take a look at the code.


#define BYTES_TO_WORD(lo, hi)   (((uint16_t)hi << 8) + (uint16_t)lo)
#define SENSOR_ADDR             0x48  
#define SENSOR_SETTINGS_REG     0x24

#define SENSOR_ORIENT_MASK      0x0002

// gets the sensor orientation  
// 0 = horizontal, 1 = vertical  
uint8_t get_sensor_orient(void)  
{
    uint8_t buf;  
    read_sensor_reg(SENSOR_ADDR, SENSOR_SETTINGS_REG, &buf, 1);

    uint16_t tmp = BYTES_TO_WORD(0, buf) & SENSOR_ORIENT_MASK;

    return tmp & 0x0004;  
}

This starts out reasonably. We create a byte called buf and pass a reference to that byte to read_sensor_reg. Under the hood, that does some magic, talks to the image sensor, and fills buf with a byte that is a bitmask of settings on the sensor.

Now, at that point, assuming the SENSOR_ORIENT_MASK value is correct, we should just return (buf & SENSOR_ORIENT_MASK) != 0. They could have done that, and been done. Or one of many variations on that basic concept which would let them return either a 0 or a 1.
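For instance, a minimal sketch of that fix (assuming the constants really match the sensor’s register layout):

// gets the sensor orientation
// 0 = horizontal, 1 = vertical
uint8_t get_sensor_orient(void)
{
    uint8_t buf;
    read_sensor_reg(SENSOR_ADDR, SENSOR_SETTINGS_REG, &buf, 1);

    // the orientation flag is bit 1 of the settings byte; test it in place
    return (buf & SENSOR_ORIENT_MASK) != 0;
}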

But they can’t just do that. What comes next isn’t a simple matter of misusing bitwise operations, but a complete breakdown of thinking: they convert the byte into a word. They have a handy macro defined for that, which does some bitwise operations to combine two bytes.

Let’s assume the settings byte we read back is simply b00000010. We bitshift that to make b0000001000000000, and then add b00000000 to it. Then we and it with SENSOR_ORIENT_MASK, which is b0000000000000010, which of course isn’t aligned with where the bit now sits in the word, so the result is zero.

There’s no reason to expand the single byte into two. That BYTES_TO_WORD macro might have other uses in the program, but certainly not here. Even if it is used elsewhere in the program, I wonder if they’re aware of the parameter order; it’s unusual (to me, anyway) to accept the lower order bits as the first parameter, and I suspect that’s part of what tripped this programmer up. Once they decided to expand the word, they assumed the macro would expand it in the opposite order, in which case their bitwise operation would have worked.

Of course, even if they had correctly extracted the correct bit, the last line of this method completely undoes all of that anyway: tmp & 0x0004 can’t possibly return a non-zero value after you’ve done a buf & 0x0002, as b00000100 and b00000010 have no bits in common.

As written, you could just replace this method with return 0 and it’d do the same thing, but more efficiently. “Zero” also happens to be how much faith I have in the developer who originally wrote this.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianNorbert Preining: Plasma 5.19 coming to Debian

The KDE Plasma desktop is soon getting an update to 5.19, and beta versions are out for testing.

In this release, we have prioritized making Plasma more consistent, correcting and unifying designs of widgets and desktop elements; worked on giving you more control over your desktop by adding configuration options to the System Settings; and improved usability, making Plasma and its components easier to use and an overall more pleasurable experience.

There are lots of new features mentioned in the release announcement, I like in particular the much more usable settings application as well as the new info center.

I have been providing builds of KDE-related packages for quite some time now, see everything posted under the KDE tag. In the last days I have prepared Debian packages for Plasma 5.18.90 on OBS, for now only targeting Debian/sid and the amd64 architecture.

These packages require Qt 5.14, which is only available in the experimental suite, and there is no way to simply update to Qt 5.14 since all Qt-related packages need to be recompiled. So as long as Qt 5.14 doesn’t hit unstable, I cannot really run these packages on my main machine, but I tried a clean Debian virtual machine installing only Plasma 5.18.90 and its dependencies, plus some more packages for a pleasant desktop experience. This worked out quite well; the VM runs Plasma 5.18.90.

I don’t have 3D running on the VM, so I cannot really check all the nice new effects, but I am sure on my main system they would work.

Well, bottom line, as soon as we have Qt 5.14 in Debian/unstable, we are also ready for Plasma 5.19!

,

Krebs on SecurityUkraine Nabs Suspect in 773M Password ‘Megabreach’

In January 2019, dozens of media outlets raised the alarm about a new “megabreach” involving the release of some 773 million stolen usernames and passwords that was breathlessly labeled “the largest collection of stolen data in history.” A subsequent review by KrebsOnSecurity quickly determined the data was years old and merely a compilation of credentials pilfered from mostly public data breaches. Earlier today, authorities in Ukraine said they’d apprehended a suspect in the case.

The Security Service of Ukraine (SBU) on Tuesday announced the detention of a hacker known as Sanix (a.k.a. “Sanixer“) from the Ivano-Frankivsk region of the country. The SBU said they found on Sanix’s computer records showing he sold databases with “logins and passwords to e-mail boxes, PIN codes for bank cards, e-wallets of cryptocurrencies, PayPal accounts, and information about computers hacked for further use in botnets and for organizing distributed denial-of-service (DDoS) attacks.”

Items SBU authorities seized after raiding Sanix’s residence. Image: SBU.

Sanix became famous last year for posting to hacker forums that he was selling the 87GB password dump, labeled “Collection #1.” Shortly after his sale was first detailed by Troy Hunt, who operates the HaveIBeenPwned breach notification service, KrebsOnSecurity contacted Sanix to find out what all the fuss was about. From that story:

“Sanixer said Collection#1 consists of data pulled from a huge number of hacked sites, and was not exactly his ‘freshest’ offering. Rather, he sort of steered me away from that archive, suggesting that — unlike most of his other wares — Collection #1 was at least 2-3 years old. His other password packages, which he said are not all pictured in the above screen shot and total more than 4 terabytes in size, are less than a year old, Sanixer explained.”

Alex Holden, chief technology officer and founder of Milwaukee-based Hold Security, said Sanixer’s claim to infamy was simply for disclosing the Collection #1 data, which was just one of many credential dumps amalgamated by other cyber criminals.

“Today, it is even a more common occurrence to see mixing new and old breached credentials,” Holden said. “In fact, large aggregations of stolen credentials have been around since 2013-2014. Even the original attempt to sell the Yahoo breach data was a large mix of several previous unrelated breaches. Collection #1 was one of many credentials collections output by various cyber criminals gangs.”

Sanix was far from a criminal mastermind, and left a long trail of clues that made it almost child’s play to trace his hacker aliases to the real-life identity of a young man in Burshtyn, a city located in Ivano-Frankivsk Oblast in western Ukraine.

Still, perhaps Ukraine’s SBU detained Sanix for other reasons in addition to his peddling of Collection 1. According to cyber intelligence firm Intel 471, Sanix has stayed fairly busy selling credentials that would allow customers to remotely access hacked resources at several large organizations. For example, as recently as earlier this month, Intel 471 spotted Sanix selling access to nearly four dozen universities worldwide, and to a compromised VPN account for the government of San Bernardino, Calif.

KrebsOnSecurity is covering Sanix’s detention mainly to close the loop on an incident that received an incredible amount of international attention. But it’s also another excuse to remind readers about the importance of good password hygiene. A core reason so many accounts get compromised is that far too many people have the nasty habit(s) of choosing poor passwords, re-using passwords and email addresses across multiple sites, and not taking advantage of multi-factor authentication options when available.

By far the most important passwords are those protecting our email inbox(es). That’s because in nearly all cases, the person who is in control of that email address can reset the password of any services or accounts tied to that email address – merely by requesting a password reset link via email. For more on this dynamic, please see The Value of a Hacked Email Account.

Your email account may be worth far more than you imagine.

And instead of thinking about passwords, consider using unique, lengthy passphrases — collections of words in an order you can remember — when a site allows it. In general, a long, unique passphrase takes far more effort to crack than a short, complex one. Unfortunately, many sites do not let users choose passwords or passphrases that exceed a small number of characters, or they will otherwise allow long passphrases but ignore anything entered after the character limit is reached.

If you are the type of person who likes to re-use passwords, then you definitely need to be using a password manager, which helps you pick and remember strong and unique passwords/passphrases and essentially lets you use the same strong master password/passphrase across all Web sites.

Finally, if you haven’t done so lately, mosey on over to twofactorauth.org and see if you are taking full advantage of the strongest available multi-factor authentication option at sites you trust with your data. The beauty of multi-factor is that even if thieves manage to guess or steal your password just because they hacked some Web site, that password will be useless to them unless they can also compromise that second factor — be it your mobile device, phone number, or security key. Not saying these additional security methods aren’t also vulnerable to compromise (they absolutely are), but they’re definitely better than just using a password.

CryptogramAI and Cybersecurity

Planet DebianJunichi Uekawa: After much investigation I decided to write a very simple page for getUserMedia.

After much investigation I decided to write a very simple page for getUserMedia. When I am performing music, I provide audio via line input, with echo, noise, and other concerns already resolved. The idea is that I can cast the tab to video conferencing software, and the conferencing software will hopefully not reduce noise or echo. The page is here. If the video conferencing software is reducing noise or echo from a tab, I will ask: why is it doing so?
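A minimal sketch of such a page might look like this (my reconstruction, not his actual code):

<audio id="out" autoplay></audio>
<script>
  // ask for raw audio and disable the browser's own processing
  navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false
    }
  }).then((stream) => {
    // route the line input to the page so the tab has audio to cast
    document.getElementById("out").srcObject = stream;
  }).catch(console.error);
</script>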

Worse Than FailureThe Dangerous Comment

It is my opinion that every developer should dabble in making their own scripting language at least once. Not to actually use, mind you, but simply to learn how languages work. If you do find yourself building a system that needs to be extendable via scripts, don’t use your own language, but use a well-understood and well-proven embeddable scripting language.

Which is why Neil spends a lot of time looking at Tcl. Tcl is far from a dead language, and it’s bundled with pretty much every Linux or Unix, including ones for embedded platforms, meaning it runs anywhere. It’s also a simple language, with its syntax described by a relatively simple collection of rules.

Neil’s company deployed embedded network devices from a vendor. Those embedded network devices were one of the places that Tcl runs, and the company which shipped the devices decided that configuration and provisioning of the devices would be done via Tcl.

It was nobody’s favorite state of affairs, but it was more-or-less fine. The challenges were less about writing Tcl and more about learning the domain-specific conventions for configuring these devices. The real frustration was that most of the time, when something went wrong, especially in this vendor-specific dialect, the error was simply: “Unknown command.”

As provisioning needs got more and more complicated, scripts calling out to other scripts became a more and more common convention, which made the “Unknown command” errors even more frustrating to track down.

It was while digging into one of those that Neil discovered a special intersection of unusual behaviors, in a section of code which may have looked something like:

# procedure for looking up config options
proc lookup {fname} {
  # does stuff …
}

Neil spent a good long time trying to figure out why there was an “Unknown command” error. While doing that hunting, and referring back to the “Dodekalogue” of rules which governs Tcl, Neil had a realization, specifically while looking at the definition of a comment:

If a hash character (“#”) appears at a point where Tcl is expecting the first character of the first word of a command, then the hash character and the characters that follow it, up through the next newline, are treated as a comment and ignored. The comment character only has significance when it appears at the beginning of a command.

In Tcl, a command is a series of words, where the first word is the name of the command. If the command name starts with a “#”, then the command is a comment.
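A quick illustration of that rule in stock Tcl (a sketch; the vendor’s dialect adds its own permission checks on top):

# this whole line is a comment, because "#" appears where a command would start
set x 5  ;# the semicolon ends the set command, so this "#" begins a comment
set y 6  # error: mid-command, "#" and everything after are extra arguments to set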

That is to say, comments are commands. Which doesn’t really sound interesting, except for one very important rule about this vendor-specific deployment of Tcl: it restricted which commands could be executed based on the user’s role.

Most of the time, this never came up. Neil and his peers logged in as admins, and admins could do anything. But this time, Neil was logged in as a regular user. It didn’t take much digging for Neil to discover that in the default configuration the “#” command was restricted to administrators.

The vendor specifically shipped their devices configured so that comments couldn’t be added to provisioning scripts unless those scripts were executed by administrators. It wasn’t hard for Neil to fix that, but with the helpful “Unknown Command” errors, it was hard to find out what needed to be fixed.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Planet DebianFrançois Marier: Displaying client IP address using Apache Server-Side Includes

If you use a Dynamic DNS setup to reach machines which are not behind a stable IP address, you will likely have a need to probe these machines' public IP addresses. One option is to use an insecure service like Oracle's http://checkip.dyndns.com/ which echoes back your client IP, but you can also do this on your own server if you have one.

There are multiple options to do this, like writing a CGI or PHP script, but those are fairly heavyweight if that's all you need mod_cgi or PHP for. Instead, I decided to use Apache's built-in Server-Side Includes.

Apache configuration

Start by turning on the include filter by adding the following in /etc/apache2/conf-available/ssi.conf:

AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

and making that configuration file active:

a2enconf ssi

Then, find the vhost file where you want to enable SSI and add the following options to a Location or Directory section:

<Location /ssi_files>
    Options +IncludesNOEXEC
    SSLRequireSSL
    Header set Content-Security-Policy: "default-src 'none'"
    Header set X-Content-Type-Options: "nosniff"
</Location>

before adding the necessary modules:

a2enmod headers
a2enmod include

and restarting Apache:

apache2ctl configtest && systemctl restart apache2.service

Create an shtml page

With the web server ready to process SSI instructions, the following HTML blurb can be used to display the client IP address:

<!--#echo var="REMOTE_ADDR" -->

or any other built-in variable.

Note that you don't need to write valid HTML for the variable to be substituted, and so the above one-liner is all I use on my server.
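For example, fetching the page with curl should echo back your address (example.com and the address shown here are placeholders):

$ curl https://example.com/ssi_files/whatsmyip.shtml
203.0.113.42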

Security concerns

The first thing to note is that the configuration section uses the IncludesNOEXEC option in order to disable arbitrary command execution via SSI. In addition, you can also make sure that the cgi module is disabled since that's a dependency of the more dangerous side of SSI:

a2dismod cgi

Of course, if you rely on this IP address to be accurate, for example because you'll be putting it in your DNS, then you should make sure that you only serve this page over HTTPS, which can be enforced via the SSLRequireSSL directive.

I included two other headers in the above vhost config (Content-Security-Policy and X-Content-Type-Options) in order to limit the damage that could be done in case a malicious file was accidentally dropped in that directory.

Finally, I suggest making sure that only the root user has writable access to the directory which has server-side includes enabled:

$ ls -la /var/www/ssi_includes/
total 12
drwxr-xr-x  2 root     root     4096 May 18 15:58 .
drwxr-xr-x 16 root     root     4096 May 18 15:40 ..
-rw-r--r--  1 root     root        0 May 18 15:46 index.html
-rw-r--r--  1 root     root       32 May 18 15:58 whatsmyip.shtml

Planet DebianArturo Borrero González: A better Toolforge: upgrading the Kubernetes cluster

Logos

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

One of the most successful and important products provided by the Wikimedia Cloud Services team at the Wikimedia Foundation is Toolforge. Toolforge is a platform that allows users and developers to run and use a variety of applications that support the Wikimedia movement and mission from a technical point of view. Toolforge is a hosting service of the kind commonly known in the industry as a Platform as a Service (PaaS). It is powered by two different backend engines, Kubernetes and GridEngine.

This article focuses on how we made a better Toolforge by integrating a newer version of Kubernetes and, along with it, some more modern workflows.

The starting point in this story is 2018. Yes, two years ago! We identified that we could do better with our Kubernetes deployment in Toolforge. We were using a very old version, v1.4. Using an old version of any software has more or less the same consequences everywhere: you lack security improvements and some modern key features.

Once it was clear that we wanted to upgrade our Kubernetes cluster, both the engineering work and the endless chain of challenges started.

It turns out that Kubernetes is a complex and modern technology, which adds some extra abstraction layers to add flexibility and some intelligence to a very old systems engineering need: hosting and running a variety of applications.

Our first challenge was to understand what our use case for a modern Kubernetes was. We were particularly interested in some key features:

  • The increased security and controls required for a public user-facing service, using RBAC, PodSecurityPolicies, quotas, etc.
  • Native multi-tenancy support, using namespaces
  • Advanced web routing, using the Ingress API

Soon enough we faced another Kubernetes native challenge: the documentation. For a newcomer, learning and understanding how to adapt Kubernetes to a given use case can be really challenging. We identified some baffling patterns in the docs. For example, different documentation pages would assume you were using different Kubernetes deployments (Minikube vs kubeadm vs a hosted service). We are running Kubernetes like you would on bare-metal (well, in CloudVPS virtual machines), and some documents directly referred to ours as a corner case.

During late 2018 and early 2019, we started brainstorming and prototyping. We wanted our cluster to be reproducible and easily rebuildable, and in the Technology Department at the Wikimedia Foundation, we rely on Puppet for that. One of the first things to decide was how to deploy and build the cluster while integrating with Puppet. This is not as simple as it seems because Kubernetes itself is a collection of reconciliation loops, just like Puppet is. So we had to decide what to put directly in Kubernetes and what to control and make visible through Puppet. We decided to stick with kubeadm as the deployment method, as it seems to be the more upstream-standardized tool for the task. We had to make some interesting decisions by trial and error, like where to run the required etcd servers, what the kubeadm init file would look like, how to proxy and load-balance the API on our bare-metal deployment, what network overlay to choose, etc. If you take a look at our public notes, you can get a glimpse of the number of decisions we had to make.
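For illustration only, the skeleton of such a kubeadm init file might look roughly like this (a hypothetical sketch, not Toolforge’s actual configuration):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.6                        # hypothetical version
controlPlaneEndpoint: "k8s-api.example.org:6443"  # the proxied, load-balanced API
etcd:
  external:                                       # etcd runs on separate servers
    endpoints:
      - https://etcd-1.example.org:2379
      - https://etcd-2.example.org:2379
networking:
  podSubnet: "192.168.0.0/16"                     # depends on the chosen overlay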

Our Kubernetes wasn’t going to be a generic cluster, we needed a Toolforge Kubernetes service. This means we don’t use some of the components, and also, we add some additional pieces and configurations to it. By the second half of 2019, we were working full-speed on the new Kubernetes cluster. We already had an idea of what we wanted and how to do it.

There were a couple of important topics for discussions, for example:

  • Ingress
  • Validating admission controllers
  • Security policies and quotas
  • PKI and user management

We will describe in detail the final state of those pieces in another blog post, but each of the topics required several hours of engineering time, research, tests, and meetings before reaching a point in which we were comfortable with moving forward.

By the end of 2019 and early 2020, we felt like all the pieces were in place, and we started thinking about how to migrate the users, the workloads, from the old cluster to the new one. This migration plan mostly materialized in a Wikitech page which contains concrete information for our users and the community.

The interaction with the community was a key success element. Thanks to our vibrant and involved users, we had several early adopters and beta testers that helped us identify early flaws in our designs. The feedback they provided was very valuable for us. Some folks helped solve technical problems, helped with the migration plan or even helped make some design decisions. Worth noting that some of the changes that were presented to our users were not easy to handle for them, like new quotas and usage limits. Introducing new workflows and deprecating old ones is always a risky operation.

Even though the migration procedure from the old cluster to the new one was fairly simple, there were some rough edges. We helped our users navigate them. A common issue was a webservice not being able to run in the new cluster due to stricter quotas limiting the resources for the tool. Another example was the new Ingress layer failing to work properly with some webservices’ particular options.

By March 2020, we no longer had anything running in the old Kubernetes cluster, and the migration was completed. We then started thinking about another step towards making a better Toolforge, which is introducing the toolforge.org domain. There is plenty of information about the change to this new domain in Wikitech News.

The community wanted a better Toolforge, and so do we, and after almost 2 years of work, we have it! All the work that was done represents the commitment of the Wikimedia Foundation to support the technical community and how we really want to pursue technical engagement in general in the Wikimedia movement. In a follow-up post we will present and discuss more in-depth about some technical details of the new Kubernetes cluster, stay tuned!

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

TEDTED2020 seeks the uncharted

The world has shifted, and so has TED.

We need brilliant ideas and thinkers more than ever. While we can’t convene in person, we will convene. Rather than a one-week conference, TED2020 will be an eight-week virtual experience — all held in the company of the TED community. Each week will offer signature TED programming and activities, as well as new and unique opportunities for connection and interaction. 

We have an opportunity to rebuild our world in a better, fairer and more beautiful way. In line with TED2020’s original theme, Uncharted, the conference will focus on the roles we all have to play in building back better. The eight-week program will offer ways to deepen community relationships and, together, re-imagine what the future can be.

Here’s what the TED2020 weekly program will look like: On Monday, Tuesday and Wednesday, a series of 45-minute live interviews, talks and debates centered on the theme Build Back Better. TED attendees can help shape the real-time conversation on an interactive, TED-developed virtual platform they can use to discuss ideas, share questions and give feedback to the stage. On Thursday, the community will gather to experience a longer mainstage TED session packed with unexpected moments, performances, visual experiences and provocative talks and interviews. Friday wraps up the week with an all-day, à la carte Community Day featuring an array of interactive choices including Discovery Sessions, speaker meetups and more.

 TED2020 speakers and performers include: 

  • JAD ABUMRAD, RadioLab founder 
  • CHRISTINA AGAPAKIS, Synthetic biology adventurer
  • REFIK ANADOL, Digital arts maestro
  • XIYE BASTIDA, Climate justice activist
  • SWIZZ BEATZ, Hip-hop artist, producer
  • GEORGES C. BENJAMIN, Executive Director, American Public Health Association
  • BRENÉ BROWN, Vulnerability researcher, storyteller 
  • WILL CATHCART, Head of WhatsApp
  • JAMIE DIMON, Global banker
  • ABIGAIL DISNEY, Filmmaker, activist
  • BILL GATES, Technologist, philanthropist
  • KRISTALINA GEORGIEVA, Managing Director, International Monetary Fund
  • JANE GOODALL, Primatologist, conservationist
  • AL GORE, Climate advocate
  • TRACY EDWARDS, Trailblazer
  • ISATA KANNEH-MASON, Pianist
  • SHEKU KANNEH-MASON, Cellist
  • NEAL KATYAL, Supreme Court litigator
  • EMILY KING, Singer, songwriter
  • YANN LECUN, AI pioneer
  • MICHAEL LEVIN, Cellular explorer
  • PHILIP LUBIN, Physicist
  • SHANTELL MARTIN, Artist
  • MARIANA MAZZUCATO, Policy influencer
  • MARCELO MENA, Environment minister of Chile
  • JACQUELINE NOVOGRATZ, Moral leader
  • DAN SCHULMAN, CEO and President, PayPal
  • AUDREY TANG, Taiwan’s digital minister for social innovation
  • DALLAS TAYLOR, Sound designer, podcaster
  • NIGEL TOPPING, Climate action champion
  • RUSSELL WILSON, Quarterback, Seattle Seahawks

The speaker lineup is being unveiled on ted2020.ted.com in waves throughout the eight weeks, as many speakers will be addressing timely and breaking news. Information about accessing the high-definition livestream of the entire conference and TED2020 membership options are also available on ted2020.ted.com.

The TED Fellows class of 2020 will once again be a highlight of the conference, with talks, Discovery Sessions and other special events sprinkled throughout the eight-week program. 

TED2020 members will also receive special access to the TED-Ed Student Talks program, which helps students around the world discover, develop and share their ideas in the form of TED-style talks. TEDsters’ kids and grandkids (ages 8-18) can participate in a series of interactive sessions led by the TED-Ed team and culminating in the delivery of each participant’s very own big idea.

As in the past, TED Talks given during the conference will be made available to the public in the coming weeks. Opening TED up to audiences around the world is foundational to TED’s mission of spreading ideas. Founded in 1984, the first TED conferences were held in Monterey, California. In 2006, TED experimented with putting TED Talk videos online for free — a decision that opened the doors to giving away all of its content. Today there are thousands of TED Talks available on TED.com. What was once a closed-door conference devoted to Technology, Entertainment and Design has become a global platform for sharing talks across a wide variety of disciplines. Thanks to the support of thousands of volunteer translators, TED Talks are available in 116 languages. TEDx, the licensing program that allows communities to produce independently organized TED events, has seen more than 28,000 events held in more than 170 countries. TED-Ed offers close to 1,200 free animated lessons and other learning resources for a youth audience and educators. Collectively, TED content attracts billions of views and listens each year.

TED has partnered with a number of innovative organizations to support its mission and contribute to the idea exchange at TED2020. They are collaborating with the TED team on innovative ways to engage a virtual audience and align their ideas and perspectives with this year’s programming. This year’s partners include: Accenture, BetterUp, Boston Consulting Group, Brightline™ Initiative, Cognizant, Hilton, Lexus, Project Management Institute, Qatar Foundation, Robert Wood Johnson Foundation, SAP, Steelcase and Target.

Get the latest information and updates on TED2020 on ted2020.ted.com.

TEDTED2020 postponed

Update 5/18/20: TED2020 will not be held in Vancouver, BC. Starting May 18, 2020, the conference is being convened as an eight-week virtual experience.

Based on a community-wide decision, TED2020 will move from April 20-24 to July 26-30 — and will still be held in Vancouver, BC.

With the COVID-19 virus spreading across the planet, we’re facing many challenges and uncertainties, which is why we feel passionately that TED2020 matters more than ever. Knowing our original April dates would no longer work, we sought counsel and guidance from our vast community. Amidst our network of artists, entrepreneurs, innovators, creators, scientists and more, we also count experts in health and medicine among our ranks. After vetting all of the options, we offered registered attendees the choice to either postpone the event or hold a virtual version. The majority expressed a preference for a summer TED, so that’s the official plan.

We’ve spent the past year putting together a spectacular program designed to chart the future. Our speakers are extraordinary. You, our beloved community, are also incredible. Somehow, despite the global health crisis, we will use this moment to share insights, spark action and host meaningful discussions of the ideas that matter most in the world.

As head of TED Chris Anderson noted in his letter to attendees: “Our north star in making decisions has been your health and safety. This is a moment when community matters like never before. I believe passionately in the power, wisdom and collective spirit of this community. We’re stronger together.”

Learn more about TED2020: Uncharted

Krebs on SecurityThis Service Helps Malware Authors Fix Flaws in their Code

Almost daily now there is news about flaws in commercial software that lead to computers getting hacked and seeded with malware. But the reality is most malicious software also has its share of security holes that open the door for security researchers or ne’er-do-wells to liberate or else seize control over already-hacked systems. Here’s a look at one long-lived malware vulnerability testing service that is used and run by some of the Dark Web’s top cybercriminals.

It is not uncommon for crooks who sell malware-as-a-service offerings such as trojan horse programs and botnet control panels to include backdoors in their products that let them surreptitiously monitor the operations of their customers and siphon data stolen from victims. More commonly, however, the people writing malware simply make coding mistakes that render their creations vulnerable to compromise.

At the same time, security companies are constantly scouring malware code for vulnerabilities that might allow them to peer inside the operations of crime networks, or to wrest control over those operations from the bad guys. There aren’t a lot of public examples of this anti-malware activity, in part because it wades into legally murky waters. More importantly, talking publicly about these flaws tends to be the fastest way to get malware authors to fix any vulnerabilities in their code.

Enter malware testing services like the one operated by “RedBear,” the administrator of a Russian-language security site called Krober[.]biz, which frequently blogs about security weaknesses in popular malware tools.

For the most part, the vulnerabilities detailed by Krober aren’t written about until they are patched by the malware’s author, who’s paid a small fee in advance for a code review that promises to unmask any backdoors and/or harden the security of the customer’s product.

RedBear’s profile on the Russian-language xss[.]is cybercrime forum.

RedBear’s service is marketed not only to malware creators, but to people who rent or buy malicious software and services from other cybercriminals. A chief selling point of this service is that, crooks being crooks, you simply can’t trust them to be completely honest.

“We can examine your (or not exactly your) PHP code for vulnerabilities and backdoors,” reads his offering on several prominent Russian cybercrime forums. “Possible options include, for example, bot admin panels, code injection panels, shell control panels, payment card sniffers, traffic direction services, exchange services, spamming software, doorway generators, and scam pages, etc.”

As proof of his service’s effectiveness, RedBear points to almost a dozen articles on Krober[.]biz which explain in intricate detail flaws found in high-profile malware tools whose authors have used his service in the past, including: the Black Energy DDoS bot administration panel; malware loading panels tied to the Smoke and Andromeda bot loaders; the RMS and Spyadmin trojans; and a popular loan scan script.

ESTRANGED BEDFELLOWS

RedBear doesn’t operate this service on his own. Over the years he’s had several partners in the project, including two very high-profile cybercriminals (or possibly just one, as we’ll see in a moment) who until recently operated under the hacker aliases “upO” and “Lebron.”

From 2013 to 2016, upO was a major player on Exploit[.]in — one of the most active and venerated Russian-language cybercrime forums in the underground — authoring almost 1,500 posts on the forum and starting roughly 80 threads, mostly focusing on malware. For roughly one year beginning in 2016, Lebron was a top moderator on Exploit.

One of many articles Lebron published on Krober[.]biz that detailed flaws found in malware submitted to RedBear’s vulnerability testing service.

In 2016, several members began accusing upO of stealing source code from malware projects under review, and then allegedly using or incorporating bits of the code into malware projects he marketed to others.

upO would eventually be banned from Exploit for getting into an argument with another top forum contributor, wherein both accused the other of working for or with Russian and/or Ukrainian federal authorities, and proceeded to publish personal information about the other that allegedly outed their real-life identities.

The cybercrime actor “upO” on Exploit[.]in in late 2016, complaining that RedBear was refusing to pay a debt owed to him.

Lebron first appeared on Exploit in September 2016, roughly two months before upO was banished from the community. After serving almost a year on the forum while authoring hundreds of posts and threads (including many articles first published on Krober), Lebron abruptly disappeared from Exploit.

His departure was prefaced by a series of increasingly brazen accusations by forum members that Lebron was simply upO using a different nickname. His final post on Exploit in May 2017 somewhat jokingly indicated he was joining an upstart ransomware affiliate program.

RANSOMWARE DREAMS

According to research from cyber intelligence firm Intel 471, upO had a strong interest in ransomware and had partnered with the developer of the Cerber ransomware strain, an affiliate program operating between Feb. 2016 and July 2017 that sought to corner the increasingly lucrative and competitive market for ransomware-as-a-service offerings.

Intel 471 says a rumor has been circulating on Exploit and other forums upO frequented that he was the mastermind behind GandCrab, another ransomware-as-a-service affiliate program that first surfaced in January 2018 and later bragged about extorting billions of dollars from hacked businesses when it closed up shop in June 2019.

Multiple security companies and researchers (including this author) have concluded that GandCrab didn’t exactly go away, but instead re-branded to form a more exclusive ransomware-as-a-service offering dubbed “REvil” (a.k.a. “Sodin” and “Sodinokibi”). REvil was first spotted in April 2019 after being installed by a GandCrab update, but its affiliate program didn’t kick into high gear until July 2019.

Last month, the public face of the REvil ransomware affiliate program — a cybercriminal who registered on Exploit in July 2019 using the nickname “UNKN” (a.k.a. “Unknown”) — found himself the target of a blackmail scheme publicly announced by a fellow forum member who claimed to have helped bankroll UNKN’s ransomware business back in 2016 but who’d taken a break from the forum on account of problems with the law.

That individual, using the nickname “Vivalamuerte,” said UNKN still owed him his up-front investment money, which he reckoned amounted to roughly $190,000. Vivalamuerte said he would release personal details revealing UNKN’s real-life identity unless he was paid what he claims he is owed.

In this Google-translated blackmail post by Vivalamuerte to UNKN, the latter’s former nickname was abbreviated to “L”.

Vivalamuerte also claimed UNKN has used four different nicknames, and that the moniker he interacted with back in 2016 began with the letter “L.” The accused’s full nickname was likely redacted by forum administrators because a search on the forum for “Lebron” brings up the same post even though it is not visible in any of Vivalamuerte’s threatening messages.

Reached by KrebsOnSecurity, Vivalamuerte declined to share what he knew about UNKN, saying the matter was still in arbitration. But he said he has proof that Lebron was the principal coder behind the GandCrab ransomware, and that the person behind the Lebron identity plays a central role in the REvil ransomware extortion enterprise as it exists today.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 03)

Here’s part three of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

CryptogramRamsay Malware

A new malware, called Ramsay, can jump air gaps:

ESET said they've been able to track down three different versions of the Ramsay malware, one compiled in September 2019 (Ramsay v1), and two others in early and late March 2020 (Ramsay v2.a and v2.b).

Each version was different and infected victims through different methods, but at its core, the malware's primary role was to scan an infected computer, and gather Word, PDF, and ZIP documents in a hidden storage folder, ready to be exfiltrated at a later date.

Other versions also included a spreader module that appended copies of the Ramsay malware to all PE (portable executable) files found on removable drives and network shares. This is believed to be the mechanism the malware was employing to jump the air gap and reach isolated networks, as users would most likely moved the infected executables between the company's different network layers, and eventually end up on an isolated system.

ESET says that during its research, it was not able to positively identify Ramsay's exfiltration module, or determine how the Ramsay operators retrieved data from air-gapped systems.

Honestly, I can't think of any threat actor that wants this kind of feature other than governments:

The researcher has not made a formal attribution as who might be behind Ramsay. However, Sanmillan said that the malware contained a large number of shared artifacts with Retro, a malware strain previously developed by DarkHotel, a hacker group that many believe to operate in the interests of the South Korean government.

Seems likely.

Details.

Planet DebianRussell Coker: A Good Time to Upgrade PCs

PC hardware just keeps getting cheaper and faster. Now that so many people have been working from home, the deficiencies of home PCs are becoming apparent. I’ll give Australian prices and URLs in this post, but I think that similar prices will be available everywhere that people read my blog.

From MSY (parts list PDF) [1], 120G SATA SSDs are under $50 each. 120G is more than enough for a basic workstation, so you are looking at $42 or so for fast quiet storage or $84 or so for the same with RAID-1. Being quiet is a significant luxury feature and it’s also useful if you are going to be in video conferences.

For more serious storage NVMe starts at around $100 per unit, I think that $124 for a 500G Crucial NVMe is the best low end option (paying $95 for a 250G Kingston device doesn’t seem like enough savings to be worth it). So that’s $248 for 500G of very fast RAID-1 storage. There’s a Samsung 2TB NVMe device for $349 which is good if you need more storage, it’s interesting to note that this is significantly cheaper than the Samsung 2TB SSD which costs $455. I wonder if SATA SSD devices will go away in the future, it might end up being SATA for slow/cheap spinning media and M.2 NVMe for solid state storage. The SATA SSD devices are only good for use in older systems that don’t have M.2 sockets on the motherboard.

It seems that most new motherboards have one M.2 socket on the motherboard with NVMe support, and presumably support for booting from NVMe. But dual M.2 sockets are rare, and the price difference is significantly greater than the cost of a PCIe M.2 card to support NVMe, which is $14. So for NVMe RAID-1 it seems that the best option is a motherboard with a single NVMe socket (starting at $89 for an AM4 socket motherboard, the current standard for AMD CPUs) and a PCIe M.2 card.

One thing to note about NVMe is that different drivers are required. On Linux this means building a new initrd before the migration (or afterwards when booted from a recovery image), and on Windows it probably means a fresh install from special installation media with NVMe drivers.
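On Debian with initramfs-tools, a minimal sketch of that step looks like this (often redundant when MODULES=most is set, but being explicit doesn’t hurt):

# make sure the nvme driver is included in the initrd
echo nvme >> /etc/initramfs-tools/modules
# rebuild the initrd for all installed kernels
update-initramfs -u -k all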

All the AM4 motherboards seem to have RADEON Vega graphics built in which is capable of 4K resolution at a stated refresh of around 24Hz. The ones that give detail about the interfaces say that they have HDMI 1.4 which means a maximum of 30Hz at 4K resolution if you have the color encoding that suits text (IE for use other than just video). I covered this issue in detail in my blog post about DisplayPort and 4K resolution [2]. So a basic AM4 motherboard won’t give great 4K display support, but it will probably be good for a cheap start.

$89 for motherboard, $124 for 500G NVMe, $344 for a Ryzen 5 3600 CPU (not the cheapest AM4 but in the middle range and good value for money), and $99 for 16G of RAM (DDR4 RAM is cheaper than DDR3 RAM) gives the core of a very decent system for $656 (assuming you have a working system to upgrade and peripherals to go with it).

Currently Kogan has 4K resolution monitors starting at $329 [3]. They probably won’t be the greatest monitors but my experience of a past cheap 4K monitor from Kogan was that it is quite OK. Samsung 4K monitors started at about $400 last time I could check (Kogan currently has no stock of them and doesn’t display the price), I’d pay an extra $70 for Samsung, but the Kogan branded product is probably good enough for most people. So you are looking at under $1000 for a new system with fast CPU, DDR4 RAM, NVMe storage, and a 4K monitor if you already have the case, PSU, keyboard, mouse, etc.

It seems quite likely that the 4K video hardware on a cheap AM4 motherboard won’t be that great for games and it will definitely be lacking for watching TV documentaries. Whether such deficiencies are worth spending money on a PCIe video card (starting at $50 for a low end card but costing significantly more for 3D gaming at 4K resolution) is a matter of opinion. I probably wouldn’t have spent extra for a PCIe video card if I had 4K video on the motherboard. Not only does using built in video save money it means one less fan running (less background noise) and probably less electricity use too.

My Plans

I currently have a workstation with 2*500G SATA SSDs in a RAID-1 array, 16G of RAM, and a i5-2500 CPU (just under 1/4 the speed of the Ryzen 5 3600). If I had hard drives then I would definitely buy a new system right now. But as I have SSDs that work nicely (quiet and fast enough for most things) and almost all machines I personally use have SSDs (so I can’t get a benefit from moving my current SSDs to another system) I would just get CPU, motherboard, and RAM. So the question is whether to spend $532 for more than 4* the CPU performance. At the moment I’ll wait because I’ll probably get a free system with DDR4 RAM in the near future, while it probably won’t be as fast as a Ryzen 5 3600, it should be at least twice as fast as what I currently have.

Worse Than FailureCodeSOD: Extra Strict

One of the advantages of a strongly typed language is that many kinds of errors can be caught at compile time. Without even running the code, you know you've made a mistake. This adds a layer of formality to your programs, which has the disadvantage of making it harder for a novice programmer to get started.

At least, that's my understanding of why every language that's designed to be "easy to use" defaults to being loosely typed. The result is that it's easy to get started, but then you inevitably end up asking yourself wat?

Visual Basic was one of those languages. It wanted to avoid spitting out errors at compile time, because that made it "easy" to get started. This meant, for example, that in old versions of Visual Basic, you didn't need to declare your variables- they were declared on use, a feature that persists into languages like Python today. Also, in older versions, you didn't need to declare variables as having a type, they could just hold anything. And even if you declared a type, the compiler would "do its best" to stuff one type into another, much like JavaScript does today.

Microsoft recognized that this would be a problem if a large team was working on a Visual Basic project. And large teams and large Visual Basic projects are a thing that sadly happened. So they added features to the language which let you control how strict it would be. Adding Option Explicit to a file would mean that variables needed to be declared before use. Option Strict would enforce strict type checking, preventing surprising implicit casts.

One of the big changes in VB.Net was that the defaults changed: Option Explicit defaulted to being on, and you needed to specify Option Explicit Off to get the old behavior. Option Strict remained off by default, though, so many teams enabled it. In .NET it was even more important, since while VB.Net might let you play loose with types at compile time, the compiled MSIL output didn't.

Which brings us to Russell F's code. While the team's coding standards do recommend that Option Strict be enabled, one developer hasn't quite adapted to that reality. Which is why pretty much any code that interacts with form fields looks like this:

Public i64Part2 As Int64

'later…
i64Part2 = Format(Convert.ToInt64(txtIBM2.Text), "00000")

txtIBM2 is, as you might guess from the Hungarian tag, a text box. So we need to convert that to a number, hence the Convert.ToInt64. So far so good.

Then, perplexingly, we Format the number back into a string that is 5 characters long. Then we let an implicit cast turn the string back into a number, because i64Part2 is an Int64. So that's a string converted explicitly into a number, formatted into a string and then implicitly converted back to a number.

The conversion back to a number undoes whatever was accomplished by the formatting. Worse, the Format gives you a false sense of security: the format string only specifies 5 digits, but what happens if you pass a 6-digit number in? Nothing: the Format method won't truncate, so your six-digit number comes out as six digits.
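
VB's Format is not alone here; padding formatters in most languages pad short values but never truncate long ones. A quick shell illustration of the same trap:

printf '%05d\n' 42      # prints 00042  - padded out to five digits
printf '%05d\n' 123456  # prints 123456 - six digits stay six digits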

Maybe the "easy to use" languages are onto something. Types do seem hard.


,

Planet DebianEnrico Zini: Art links

Guglielmo Achille Cavellini (11 September 1914 – 20 November 1990), also known as GAC, was an Italian artist and art collector. After an initial activity as a painter, in the 1940s and 1950s he became one of the major collectors of contemporary Italian abstract art, developing a deep relationship of patronage and friendship with the artists. This experience has its pinnacle in the exhibition Modern painters of the Cavellini collection at the National Gallery of Modern Art in Rome in 1957. In the 1960s Cavellini resumed his activity as an artist, with an ample production spanning from Neo-Dada to performance art to mail art, of which he became one of the prime exponents with the Exhibitions at Home and the Round Trip works. In 1971 he invented autostoricizzazione (self-historicization), upon which he acted to create a deliberate popular history surrounding his existence. He also authored the books Abstract Art (1959), Man painter (1960), Diary of Guglielmo Achille Cavellini (1975), Encounters/Clashes in the Jungle of Art (1977) and Life of a Genius (1989).
Paul Gustave Louis Christophe Doré (/dɔːˈreɪ/; French: [ɡys.tav dɔ.ʁe]; 6 January 1832 – 23 January 1883) was a French artist, printmaker, illustrator, comics artist, caricaturist, and sculptor who worked primarily with wood-engraving.
«Enrico Baj was brilliant at taking the piss out of power using imagination. With that simplicity that only the greats have, he gathers up stuff like buttons, pieces of fabric, cords, assorted trimmings, and sticks them onto the canvas together with his painting: it almost looks like he is just playing, but playing and playing, quietly quietly, he manages to turn the world upside down. …»

Planet DebianDirk Eddelbuettel: #2 T^4: Customizing The Shell Prompt

The second video (following the announcement and last week’s shell colors) is up in the still new T^4 series of video lightning talks with tips, tricks, tools, and toys. Today we cover customizing shell prompts.

The slides are available here. Next week we likely continue on shell customization with aliases.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRuss Allbery: krb5-strength 3.2

krb5-strength provides password strength checking for Kerberos KDCs (either MIT or Heimdal), and also provides a password history implementation for Heimdal.

This release adds a check-only mode to the heimdal-history command to interrogate history without modifying it and increases the default hash iterations used when storing old passwords. explicit_bzero is now used, where available, to clear the memory used for passwords after processing. krb5-strength can now optionally be built without CrackLib support at all, if you only want to use the word list, edit distance, or length and character class rules.

It's been a few years since the previous release, so this release also updates all the portability code, overhauls valgrind testing, and now passes tests when built with system CrackLib (by skipping tests for passwords that are rejected by the stronger rules of the embedded CrackLib fork).

You can get the latest release from the krb5-strength distribution page. New packages will be uploaded to Debian unstable shortly (as soon as a Perl transition completes enough to make the package buildable in unstable).

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.880.1.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to a Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 719 other packages on CRAN.

Conrad released a new upstream version 9.880.1 of Armadillo on Friday which I packaged and tested as usual (result log here in the usual repo). The R package also sports a new OpenMP detection facility once again motivated by macOS which changed its setup yet again.

Changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.880.1.0 (2020-05-15)

  • Upgraded to Armadillo release 9.880.1 (Roasted Mocha Detox)

    • expanded qr() to optionally use pivoted decomposition

    • updated physical constants to NIST 2018 CODATA values

    • added ARMA_DONT_USE_CXX11_MUTEX configuration option to disable use of std::mutex

  • OpenMP capability is tested explicitly (Kevin Ushey and Dirk in #294, #295, and #296 all fixing #290).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteve Kemp: Some brief sysbox highlights

I started work on sysbox again recently, adding a couple of simple utilities. (The whole project is a small collection of utilities, distributed as a single binary to ease installation.)

Imagine you want to run a command for every line of STDIN, here's a good example:

 $ cat input | sysbox exec-stdin "youtube-dl {}"

Here, for every (non-empty) line of input read from STDIN, the command "youtube-dl" has been executed, with "{}" expanded to the complete line read. You can also access individual fields, kinda like awk.

(Yes youtube-dl can read a list of URLs from a file, this is an example!)

Another example, run groups for every local user:

$ cat /etc/passwd | sysbox exec-stdin --split=: groups {1}

Here you see we have split the input-lines read from STDIN by the : character, instead of by whitespace, and we've accessed the first field via "{1}". This is certainly easier for scripting than using a bash loop.
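
For comparison, one way to write the equivalent bash loop (an illustration, not from the sysbox docs):

# Read each passwd line, split on ':', and run groups on the first field:
while IFS=: read -r user _; do
    groups "$user"
done < /etc/passwd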

On the topic of bash: command-completion for each subcommand, and their arguments, is now present:

$ source <(sysbox bash-completion)

And I've added a text-based UI for selecting files. You can also execute a command, against the selected file:

$ sysbox choose-file -exec "xine {}" /srv/tv

This is what that looks like:

[screenshot: /2020/05/17-choose-file.png]

You'll see:

  • A text-box for filtering the list.
  • A list which can be scrolled up/down/etc.
  • A brief bit of help information in the footer.

As well as choosing files, you can also select from lines read via STDIN, and you can run a command against the selection in the same way as before (i.e. "{}" is the selected item).

Other commands received updates, so the calculator now allows storing results in variables:

$ sysbox calc
calc> let a = 3
3
calc> a / 9 * 3
1
calc> 1 + 2 * a
7
calc> 1.2 + 3.4
4.600000

Planet Linux AustraliaLev Lafayette: Notes on Installing Ubuntu 20 VM on an MS-Windows 10 Host

Some thirteen years ago I worked with Xen virtual machines as part of my day job, and gave a presentation at Linux Users of Victoria on the subject (with additional lecture notes). A few years after that I gave another presentation, on the Unified Extensible Firmware Interface (UEFI), which (indirectly) led to a post on Linux and MS-Windows 8 dual-booting. All of this now leads to some notes on using MS-Windows as a host for Ubuntu Linux guest machines.

Why Would You Want to do This?

Most people these days have at least heard of Linux. They might even know that every single supercomputer in the world uses Linux. They may know that the overwhelming majority of embedded devices, such as home routers, use Linux. Or maybe even that the Android mobile 'phone uses a Linux kernel. Or that MacOS is built on the same broad family of UNIX-like operating systems. Whilst they might be familiar with their MS-Windows environment, because that's what they've been brought up on and what their favourite applications are designed for, they might also be "Linux curious", especially if they are hoping either to scale up the complexity and volume of the datasets they're working with (i.e., towards high performance computing) or to scale down their applications (i.e., towards embedded devices). If this is the case, then introducing Linux via a virtual machine (VM) is a relatively safe and easy path to experiment with.

About VMs

Virtual machines work by emulating a computer system, including hardware, in a software environment, a technology that has been around for a very long time (e.g., CP/CMS, 1967). The VMs on a host system are managed by a hypervisor, or Virtual Machine Monitor (VMM), that manages one or more guest systems. In the example that follows, the hypervisor is VirtualBox, a free-and-open-source hypervisor. Because the guest system relies on the host, it cannot have the same performance as the host system, unlike a dual-boot setup. It will share memory, it will share processing power, it must take up some disk space, and it will also have the overhead of the hypervisor itself (although this has improved a great deal in recent years). In a production environment, VMs are usually used to optimise resource allocation for very powerful systems, such as web-server farms and bodies like the Nectar Research Cloud, or even some partitions on systems like the University of Melbourne's supercomputer, Spartan. In a development environment, VMs are an excellent tool for testing and debugging.

Install VirtualBox and Enable Virtualization

For most environments VirtualBox is an easy path for creating a virtual machine, ARM systems excluded (QEMU is suggested for Raspberry Pi or Android, or QEMU's fork, KVM). For the example given here, simply download VirtualBox for MS-Windows and click one's way through the installation process, noting that VirtualBox will make changes to your system and that products from Oracle can be trusted (*blink*). Downloads for other operating environments are worth looking at as well.

It is essential to enable virtualisation on your MS-Windows host through the BIOS/UEFI, which is not as easy as it used to be. A handy page from some smart people in the Czech Republic provides quick instructions for a variety of hardware environments. The good people at laptopmag provide the path from within the MS-Windows environment. In summary: select Settings (gear icon), select Update & Security, select Recovery (this sounds wrong), Advanced Startup, Restart Now (which is also wrong, you don't restart now), Troubleshoot, Advanced Options, UEFI Firmware Settings, then Restart.

Install Linux and Create a Shared Folder

Download a Ubuntu 20.04 LTS (long-term support) ISO and save to the MS-Windows host. There are some clever alternatives, such as the Ubuntu Linux terminal environment for MS-Windows (which is possibly even a better choice these days, but that will be for another post), or Multipass which allows one to create their own mini-cloud environment. But this is a discussion for a VM, so I'll resist the temptation to go off on a tangent.

Creating a VM in VirtualBox is pretty straightforward; open the application, select "New", give the VM a name, and allocate resources (virtual hard disk, virtual memory). It's worthwhile tending towards the generous in resource allocation. After that it is a case of selecting the ISO under settings and storage; remember a VM does not have a real disk drive, so it has a virtual (software) one. After this one can start the VM, and it will boot from the ISO and begin the installation process for Ubuntu Linux desktop edition, which is pretty straightforward. One amusing caveat: when the installation says it's going to wipe the disk it doesn't mean the host machine, just the virtual disk that has been built for it. When the installation is complete go to "Devices" on the VM menu, remove the boot disk, and restart the guest system; you now have a Ubuntu VM installed on your MS-Windows system.

By default, VMs do not have access to the host computer. To provide that access one will want to set up a shared folder in the VM and on the host. The first step in this environment is to give the Linux user (created during installation) membership of the vboxsf group, e.g., on the terminal sudo usermod -a -G vboxsf username. In VirtualBox, select Settings, and add a share under Machine Folders (these are permanent folders). Under Folder Path set the name and location on the host operating system (e.g., UbuntuShared on the Desktop); leave automount blank (we can fix that soon enough). Put a test file in the shared folder.

Ubuntu now needs additional software installed to work with VirtualBox's Guest Additions, including kernel modules. Also, mount VirtualBox's Guest Additions to the guest VM, under Devices as a virtual CD; you can download this from the VirtualBox website.

Run the following commands, entering the default user's password as needed:


sudo apt-get install -y build-essential linux-headers-`uname -r`
sudo /media/cdrom/VBoxLinuxAdditions.run
sudo shutdown -r now # Reboot the system
mkdir ~/UbuntuShared
sudo mount -t vboxsf shared ~/UbuntuShared # "shared" is the share name set in VirtualBox
cd ~/UbuntuShared

The file that was put in the UbuntuShared folder in MS-Windows should now be visible in ~/UbuntuShared. Add a file (e.g., touch testfile.txt) from Linux and check that it can be seen in MS-Windows. If this all succeeds, make the folder persistent.


sudo nano /etc/fstab # nano is just fine for short configuration files
# Add the following line (tab separated), replacing <username> with your login, and save
shared /home/<username>/UbuntuShared vboxsf defaults 0 0
# Edit modules
sudo nano /etc/modules
# Add the following
vboxsf
# Exit and reboot
sudo shutdown -r now

You're done! You now have a Ubuntu desktop system running as a VM guest using VirtualBox on an MS-Windows 10 host system. Ideal for learning, testing, and debugging.

Planet DebianErich Schubert: Contact Tracing Apps are Useless

Some people believe that automatic contact tracing apps will help contain the Coronavirus epidemic. They won’t.

Sorry to bring the bad news, but IT and mobile phones and artificial intelligence will not solve every problem.

In my opinion, those that promise to solve these things with artificial intelligence / mobile phones / apps / your-favorite-buzzword are at least overly optimistic and “blinder Aktionismus” (*), if not naive, detached from reality, or fraudsters that just want to get some funding.

(*) there does not seem to be an English word for this – “doing something just for the sake of doing something, without thinking about whether it makes sense to do so”

Here are the reasons why it will not work:

  1. Signal quality. Forget detecting proximity with Bluetooth Low Energy. Yes, there are attempts to use BLE beacons for indoor positioning. But these rely on learning “fingerprints” of which beacons are visible at which points, combined with additional information such as movement sensors and history (you do not teleport around in a building). BLE signals and antennas apparently tend to be very prone to orientation differences, signal reflections, and of course you will not have the idealized controlled environment used in such prototypes. The contacts have a single device, and they move – this is not comparable to indoor positioning. I strongly doubt you can tell whether you are “close” to someone, or not.
  2. Close vs. protection. The app cannot detect protection in place. Being close to someone behind a plexiglass window or even a solid wall is very different from being close otherwise. You will get a lot of false contacts this way. That neighbor that you have never seen, living in the apartment above, will likely be considered a close contact of yours, as you sleep “next” to each other every day…
  3. Low adoption rates. Apparently even in technology-affine Singapore, fewer than 20% of people installed the app. That does not even mean they use it regularly. In Austria, the number is apparently below 5%, and people complain that it does not detect contacts… But in order for this approach to work, you would need Chinese-style mass surveillance that literally puts you in prison if you do not install the app.
  4. False alerts. Because of these issues, you will get false alerts, until you just do not care anymore.
  5. False sense of security. Honestly: the app does not protect you at all. All it tries to do is to make the tracing of contacts easier. It will not tell you reliably if you have been infected (as mentioned above, too many false positives, too few users) nor that you are relatively safe (too few contacts included, too slow testing and reporting). It will all be of the quality of “about 10 days ago you may or may not have had contact with someone that tested positive, please contact someone to expose more data to be told that it is actually another false alert”.
  6. Trust. In Germany, the app will be operated by T-Systems and SAP. Not exactly two companies that have a lot of fans… SAP seems to be one of the most hated software vendors around. Neither company is known for caring about privacy much; they are prototypical for “business first”. It’s trusting the cat to keep the cream. Yes, I know they want to make it open-source. But likely only the client, and you will still have to trust that the binary in the app stores is actually built from this source code, and not from a modified copy. As long as the names T-Systems and SAP are associated with the app, people will not trust it. Plus, we all know that the app will be bad, given the reputation of these companies at making horrible software systems…
  7. Too late. SAP and T-Systems want to have the app ready in mid June. Seriously, this must be a joke? It will be very buggy in the beginning (because it is SAP!) and it will not be working reliably before the end of July. There will not be a substantial user base before fall. But given the low infection rates in Germany, nobody will bother to install it anymore, because the perceived benefit is 0 once the infection rates are low.
  8. Infighting. You may remember that there was the discussion before that there should be a pan-European effort. Except that in the end, everybody fought everybody else, countries went in different directions, and they all broke up. France wanted a centralized system, while in Germany people pointed out that users would not accept this and only a distributed system would have a chance. That failed effort was known as “Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT)” vs. “Decentralized Privacy-Preserving Proximity Tracing (DP-3T)”, and it turned out to have become a big “clusterfuck”. And that is just the tip of the iceberg.

Iceland, probably the country that handled the Corona crisis best (they issued a travel advisory against Austria when the virus was still happily being spread at apres-ski; they massively tested, and got the infections down to almost zero within 6 weeks), has been experimenting with such an app. Iceland, as a fairly close-knit community, managed to have almost 40% of people install their app. So did it help? No: “The technology is more or less … I wouldn’t say useless […] it wasn’t a game changer for us.”

The contact tracing app is just a huge waste of effort and public money.

And pretty much the same applies to any other attempts to solve this with IT. There is a lot of buzz about solving the Corona crisis with artificial intelligence: bullshit!

That is just naive. Do not speculate about the magic power of AI. Get the data, understand the data, and you will see it does not help.

Because it’s real data. It’s dirty. It’s late. It’s contradictory. It’s incomplete. It is everything that AI currently cannot handle well. This is not image recognition. You have no labels. Many of the attempts in this direction already fail at the trivial 7-day seasonality you observe in the data… For example, the widely known Johns Hopkins “Has the curve flattened” trend has a stupid, useless indicator based on 5-day averages. And hence you get the weekly ups and downs due to weekends. They show pretty “up” and “down” indicators. But these are affected mostly by the day of the week. And nobody cares. Notice that they currently even have big negative infections in their plots?

There is no data on when someone was infected. Because such data simply does not exist. What you have is data on when someone tested positive (mostly), when someone reported symptoms (sometimes, but some never have symptoms!), and when someone died (but then you do not know if it was because of Corona, because of other issues that became “just” worse because of Corona, or whether they were hit by a car without any relation to Corona). The data that we work with is incredibly delayed, yet we pretend it is “live”.

Stop reading tea leaves. Stop pretending AI can save the world from Corona.

CryptogramAnother California Data Privacy Law

The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:

In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the "California Consumer Privacy Act." Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg's April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.

Since then, it was substantially amended -- that is, watered down -- at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven't seen much yet.

And we could have had that ballot initiative.

It looks like Alastair Mactaggart and others are back.

Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state's ballot in November, and submitted its proposal to Sacramento.

This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people's vote after all.

[...]

The new proposal would add more rights, including over the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s) and would require an opt-in to even collect such data.

The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.

And just to push the tech giants from fury into full-blown meltdown the new ballot measure would require any amendments to the law to require a majority vote in the legislature, effectively stripping their vast lobbying powers and cutting off the multitude of different ways the measures and its enforcement can be watered down within the political process.

I don't know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.

EDITED TO ADD(5/17): It looks like this new ballot initiative isn't going to be an improvement.

Planet Linux AustraliaMichael Still: A super simple non-breadmaker loaf


This is the second in a series of posts documenting my adventures in making bread during the COVID-19 shutdown. Yes I know all the cool kids made bread for themselves during the shutdown, but I did it too!

[photo: a loaf of bread]

So here we were, in the middle of a pandemic which closed bakeries and cancelled almost all of my non-work activities. I found this animated GIF on Reddit for a super simple no-knead bread and decided to give it a go. It turns out that a few things are true:

  • animated GIFs are a super terrible way to store recipes
  • that animated GIF was an export of this YouTube video, which originally accompanied this blog post
  • and that I only learned these things while trying to work out who to credit for this recipe

The basic recipe is really easy — chuck the following into a big bowl, stir, and then cover with a plate. Leave it resting in a warm place for a long time (three or four hours), then turn out onto a floured bench. Fold into a ball with flour, and then bake. You can see a more detailed version in the YouTube video above.

  • 3 cups of bakers flour (not plain white flour)
  • 2 tea spoons of yeast
  • 2 tea spoons of salt
  • 1.5 cups of warm water (again, I use 42 degrees from my gas hot water system)

The dough will seem really dry when you first mix it, but gets wetter as it rises. Don’t panic if it seems tacky and dry.

I think the key here is the baking process, which is how the oven loaf in my previous post about bread maker white loaves was baked. I use a cast iron camp oven (sometimes called a dutch oven), because thermal mass is key. If I had a fancy enameled cast iron camp oven I’d use that, but I don’t, and I wasn’t going shopping during the shutdown to get one. Oh, and they can be crazy expensive at up to $500 AUD.

[photo: another loaf of bread]

Warm the oven with the camp oven inside for at least 30 minutes at 230 degrees celsius. Then place the dough inside the camp oven on some baking paper — I tend to use a trivet as well, but I think you could skip that if you didn’t have one. Bake for 30 minutes with the lid on — this helps steam the bread a little and forms a nice crust. Then bake for another 12 minutes with the camp oven lid off — this darkens the crust up nicely.

[photo: the finished loaf]

Oh, and I’ve noticed a bit of variation in how wet the dough seems to be when I turn it out and form it in flour, but it doesn’t really seem to change the outcome once baked, so that’s nice.

The original blogger for this recipe also recommends chilling the dough overnight in the fridge before baking, but I haven’t tried that yet.


Planet Linux AustraliaDavid Rowe: MicroHams Digital Conference (MHDC) 2020

On May 9 2020 (PST) I had the pleasure of speaking at the MicroHams Digital Conference (MHDC) 2020. Due to COVID-19 presenters attended via Zoom, and the conference was live streamed over YouTube.

Thanks to hard work of the organisers, this worked really well!

Looking at the conference program, I noticed the standard of the presenters was very high. The organisers I worked with (Scott N7SS, and Grant KB7WSD) explained that a side effect of making the conference virtual was casting a much wider net on presenters – making the conference even better than IRL (In Real Life)! The YouTube streaming stats showed 300-500 people “attending” – also very high.

My door to door travel time to West Coast USA is about 20 hours. So a remote presentation makes life much easier for me. It takes me a week to prepare, means 1-2 weeks away from home, and a week to recover from the jetlag. As a single parent I need to find a carer for my 14 year old.

Vickie, KD7LAW, ran a break out room for after talk chat which worked well. It was nice to “meet” several people that I usually just have email contact with. All from the comfort of my home on a Sunday morning in Adelaide (Saturday afternoon PST).

The MHDC 2020 talks have now been published on YouTube. Here is my talk, which is a good update (May 2020) of Codec 2 and FreeDV, including:

  • The new FreeDV 2020 mode using the LPCNet neural net vocoder
  • Embedded FreeDV 700D running on the SM1000
  • FreeDV over the QO-100 geosynchronous satellite and KiwiSDRs
  • Introducing some of the good people contributing to FreeDV

The conference has me interested in applying the open source modems we have developed for digital voice to Amateur Radio packet and HF data. So I’m reading up on Winlink, Pat, Direwolf and friends.

Thanks Scott, Grant, and Vickie and the MicroHams club!

Planet DebianMatthew Palmer: Private Key Redaction: UR DOIN IT RONG

Because posting private keys on the Internet is a bad idea, some people like to “redact” their private keys, so that it looks kinda-sorta like a private key, but it isn’t actually giving away anything secret. Unfortunately, due to the way that private keys are represented, it is easy to “redact” a key in such a way that it doesn’t actually redact anything at all. RSA private keys are particularly bad at this, but the problem can (potentially) apply to other keys as well.

I’ll show you a bit of “Inside Baseball” with key formats, and then demonstrate the practical implications. Finally, we’ll go through a practical worked example from an actual not-really-redacted key I recently stumbled across in my travels.

The Private Lives of Private Keys

Here is what a typical private key looks like, when you come across it:

-----BEGIN RSA PRIVATE KEY-----
MGICAQACEQCxjdTmecltJEz2PLMpS4BXAgMBAAECEDKtuwD17gpagnASq1zQTYEC
CQDVTYVsjjF7IQIJANUYZsIjRsR3AgkAkahDUXL0RSECCB78r2SnsJC9AghaOK3F
sKoELg==
-----END RSA PRIVATE KEY-----

Obviously, there’s some hidden meaning in there – computers don’t encrypt things by shouting “BEGIN RSA PRIVATE KEY!”, after all. What is between the BEGIN/END lines above is, in fact, a base64-encoded DER format ASN.1 structure representing a PKCS#1 private key.

In simple terms, it’s a list of numbers – very important numbers. The list of numbers is, in order:

  • A version number (0);
  • The “public modulus”, commonly referred to as “n”;
  • The “public exponent”, or “e” (which is almost always 65,537, for various unimportant reasons);
  • The “private exponent”, or “d”;
  • The two “private primes”, or “p” and “q”;
  • Two exponents, which are known as “dmp1” and “dmq1”; and
  • A coefficient, known as “iqmp”.

Why Is This a Problem?

The thing is, only three of those numbers are actually required in a private key. The rest, whilst useful to allow the RSA encryption and decryption to be more efficient, aren’t necessary. The three absolutely required values are e, p, and q.

Of the other numbers, most of them are at least about the same size as each of p and q. So of the total data in an RSA key, less than a quarter of the data is required. Let me show you with the above “toy” key, by breaking it down piece by piece [1]:

  • MGI – DER for “this is a sequence”
  • CAQ – version (0)
  • CxjdTmecltJEz2PLMpS4BX – n
  • AgMBAA – e
  • ECEDKtuwD17gpagnASq1zQTY – d
  • ECCQDVTYVsjjF7IQ – p
  • IJANUYZsIjRsR3 – q
  • AgkAkahDUXL0RS – dmp1
  • ECCB78r2SnsJC9 – dmq1
  • AghaOK3FsKoELg== – iqmp

Remember that in order to reconstruct all of these values, all I need are e, p, and q – and e is pretty much always 65,537. So I could “redact” almost all of this key, and still give all the important, private bits of this key. Let me show you:

-----BEGIN RSA PRIVATE KEY-----
..............................................................EC
CQDVTYVsjjF7IQIJANUYZsIjRsR3....................................
........
-----END RSA PRIVATE KEY-----

Now, I doubt that anyone is going to redact a key precisely like this… but then again, this isn’t a “typical” RSA key. They usually look a lot more like this:

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAu6Inch7+mWtKn+leB9uCG3MaJIxRyvC/5KTz2fR+h+GOhqj4
SZJobiVB4FrE5FgC7AnlH6qeRi9MI0s6dt5UWZ5oNIeWSaOOeNO+EJDUkSVf67wj
SNGXlSjGAkPZ0nRJiDjhuPvQmdW53hOaBLk5udxPEQbenpXAzbLJ7wH5ouLQ3nQw
HwpwDNQhF6zRO8WoscpDVThOAM+s4PS7EiK8ZR4hu2toon8Ynadlm95V45wR0VlW
zywgbkZCKa1IMrDCscB6CglQ10M3Xzya3iTzDtQxYMVqhDrA7uBYRxA0y1sER+Rb
yhEh03xz3AWemJVLCQuU06r+FABXJuY/QuAVvQIDAQABAoIBAFqwWVhzWqNUlFEO
PoCVvCEAVRZtK+tmyZj9kU87ORz8DCNR8A+/T/JM17ZUqO2lDGSBs9jGYpGRsr8s
USm69BIM2ljpX95fyzDjRu5C0jsFUYNi/7rmctmJR4s4uENcKV5J/++k5oI0Jw4L
c1ntHNWUgjK8m0UTJIlHbQq0bbAoFEcfdZxd3W+SzRG3jND3gifqKxBG04YDwloy
tu+bPV2jEih6p8tykew5OJwtJ3XsSZnqJMwcvDciVbwYNiJ6pUvGq6Z9kumOavm9
XU26m4cWipuK0URWbHWQA7SjbktqEpxsFrn5bYhJ9qXgLUh/I1+WhB2GEf3hQF5A
pDTN4oECgYEA7Kp6lE7ugFBDC09sKAhoQWrVSiFpZG4Z1gsL9z5YmZU/vZf0Su0n
9J2/k5B1GghvSwkTqpDZLXgNz8eIX0WCsS1xpzOuORSNvS1DWuzyATIG2cExuRiB
jYWIJUeCpa5p2PdlZmBrnD/hJ4oNk4oAVpf+HisfDSN7HBpN+TJfcAUCgYEAyvY7
Y4hQfHIdcfF3A9eeCGazIYbwVyfoGu70S/BZb2NoNEPymqsz7NOfwZQkL4O7R3Wl
Rm0vrWT8T5ykEUgT+2ruZVXYSQCKUOl18acbAy0eZ81wGBljZc9VWBrP1rHviVWd
OVDRZNjz6nd6ZMrJvxRa24TvxZbJMmO1cgSW1FkCgYAoWBd1WM9HiGclcnCZknVT
UYbykCeLO0mkN1Xe2/32kH7BLzox26PIC2wxF5seyPlP7Ugw92hOW/zewsD4nLze
v0R0oFa+3EYdTa4BvgqzMXgBfvGfABJ1saG32SzoWYcpuWLLxPwTMsCLIPmXgRr1
qAtl0SwF7Vp7O/C23mNukQKBgB89DOEB7xloWv3Zo27U9f7nB7UmVsGjY8cZdkJl
6O4LB9PbjXCe3ywZWmJqEbO6e83A3sJbNdZjT65VNq9uP50X1T+FmfeKfL99X2jl
RnQTsrVZWmJrLfBSnBkmb0zlMDAcHEnhFYmHFuvEnfL7f1fIoz9cU6c+0RLPY/L7
n9dpAoGAXih17mcmtnV+Ce+lBWzGWw9P4kVDSIxzGxd8gprrGKLa3Q9VuOrLdt58
++UzNUaBN6VYAe4jgxGfZfh+IaSlMouwOjDgE/qzgY8QsjBubzmABR/KWCYiRqkj
qpWCgo1FC1Gn94gh/+dW2Q8+NjYtXWNqQcjRP4AKTBnPktEvdMA=
-----END RSA PRIVATE KEY-----

People typically redact keys by deleting whole lines, and usually replacing them with [...] and the like. But only about 345 of those 1588 characters (excluding the header and footer) are required to construct the entire key. You can redact about 4/5ths of that giant blob of stuff, and your private parts (or at least, those of your key) are still left uncomfortably exposed.

But Wait! There’s More!

Remember how I said that everything in the key other than e, p, and q could be derived from those three numbers? Let’s talk about one of those numbers: n.

This is known as the “public modulus” (because, along with e, it is also present in the public key). It is very easy to calculate: n = p * q. It is also very early in the key (the second number, in fact).

Since n = p * q, it follows that q = n / p. Thus, as long as the key is intact up to p, you can derive q by simple division.
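
As a sketch of how simple that division is in practice (assuming n and p have already been extracted into shell variables as decimal strings; GNU bc does arbitrary-precision arithmetic, so the division is exact):

export BC_LINE_LENGTH=0   # stop GNU bc wrapping long numbers across lines
# Recover q from a key that is only intact up to p:
q=$(echo "$n / $p" | bc)
# Sanity check: multiplying back should reproduce the public modulus.
[ "$(echo "$q * $p" | bc)" = "$n" ] && echo "q recovered"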

Real World Redaction

At this point, I’d like to introduce an acquaintance of mine: Mr. Johan Finn. He is the proud owner of the GitHub repo johanfinn/scripts. For a while, his repo contained a script that contained a poorly-redacted private key. He has since deleted it by making a new commit, but of course, because git never really deletes anything, it’s still available.

Of course, Mr. Finn may delete the repo, or force-push a new history without that commit, so here is the redacted private key, with a bit of the surrounding shell script, for our illustrative pleasure:

#Add private key to .ssh folder
cd /home/johan/.ssh/
echo  "-----BEGIN RSA PRIVATE KEY-----
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
ÄÄÄÄÄÄÄÄÄÄÄÄÄÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ
MIIJKgIBAAKCAgEAxEVih1JGb8gu/Fm4AZh+ZwJw/pjzzliWrg4mICFt1g7SmIE2
TCQMKABdwd11wOFKCPc/UzRH/fHuQcvWrpbOSdqev/zKff9iedKw/YygkMeIRaXB
fYELqvUAOJ8PPfDm70st9GJRhjGgo5+L3cJB2gfgeiDNHzaFvapRSU0oMGQX+kI9
ezsjDAn+0Pp+r3h/u1QpLSH4moRFGF4omNydI+3iTGB98/EzuNhRBHRNq4oBV5SG
Pq/A1bem2ninnoEaQ+OPESxYzDz3Jy9jV0W/6LvtJ844m+XX69H5fqq5dy55z6DW
sGKn78ULPVZPsYH5Y7C+CM6GAn4nYCpau0t52sqsY5epXdeYx4Dc+Wm0CjXrUDEe
Egl4loPKDxJkQqQ/MQiz6Le/UK9vEmnWn1TRXK3ekzNV4NgDfJANBQobOpwt8WVB
rbsC0ON7n680RQnl7PltK9P1AQW5vHsahkoixk/BhcwhkrkZGyDIl9g8Q/Euyoq3
eivKPLz7/rhDE7C1BzFy7v8AjC3w7i9QeHcWOZFAXo5hiDasIAkljDOsdfD4tP5/
wSO6E6pjL3kJ+RH2FCHd7ciQb+IcuXbku64ln8gab4p8jLa/mcMI+V3eWYnZ82Yu
axsa85hAe4wb60cp/rCJo7ihhDTTvGooqtTisOv2nSvCYpcW9qbL6cGjAXECAwEA
AQKCAgEAjz6wnWDP5Y9ts2FrqUZ5ooamnzpUXlpLhrbu3m5ncl4ZF5LfH+QDN0Kl
KvONmHsUhJynC/vROybSJBU4Fu4bms1DJY3C39h/L7g00qhLG7901pgWMpn3QQtU
4P49qpBii20MGhuTsmQQALtV4kB/vTgYfinoawpo67cdYmk8lqzGzzB/HKxZdNTq
s+zOfxRr7PWMo9LyVRuKLjGyYXZJ/coFaobWBi8Y96Rw5NZZRYQQXLIalC/Dhndm
AHckpstEtx2i8f6yxEUOgPvV/gD7Akn92RpqOGW0g/kYpXjGqZQy9PVHGy61sInY
HSkcOspIkJiS6WyJY9JcvJPM6ns4b84GE9qoUlWVF3RWJk1dqYCw5hz4U8LFyxsF
R6WhYiImvjxBLpab55rSqbGkzjI2z+ucDZyl1gqIv9U6qceVsgRyuqdfVN4deU22
LzO5IEDhnGdFqg9KQY7u8zm686Ejs64T1sh0y4GOmGsSg+P6nsqkdlXH8C+Cf03F
lqPFg8WQC7ojl/S8dPmkT5tcJh3BPwIWuvbtVjFOGQc8x0lb+NwK8h2Nsn6LNazS
0H90adh/IyYX4sBMokrpxAi+gMAWiyJHIHLeH2itNKtAQd3qQowbrWNswJSgJzsT
JuJ7uqRKAFkE6nCeAkuj/6KHHMPsfCAffVdyGaWqhoxmPOrnVgECggEBAOrCCwiC
XxwUgjOfOKx68siFJLfHf4vPo42LZOkAQq5aUmcWHbJVXmoxLYSczyAROopY0wd6
Dx8rqnpO7OtZsdJMeBSHbMVKoBZ77hiCQlrljcj12moFaEAButLCdZFsZW4zF/sx
kWIAaPH9vc4MvHHyvyNoB3yQRdevu57X7xGf9UxWuPil/jvdbt9toaraUT6rUBWU
GYPNKaLFsQzKsFWAzp5RGpASkhuiBJ0Qx3cfLyirjrKqTipe3o3gh/5RSHQ6VAhz
gdUG7WszNWk8FDCL6RTWzPOrbUyJo/wz1kblsL3vhV7ldEKFHeEjsDGroW2VUFlS
asAHNvM4/uYcOSECggEBANYH0427qZtLVuL97htXW9kCAT75xbMwgRskAH4nJDlZ
IggDErmzBhtrHgR+9X09iL47jr7dUcrVNPHzK/WXALFSKzXhkG/yAgmt3r14WgJ6
5y7010LlPFrzaNEyO/S4ISuBLt4cinjJsrFpoo0WI8jXeM5ddG6ncxdurKXMymY7
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::.::
:::::::::::::::::::::::::::.::::::::::::::::::::::::::::::::::::
LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLlL
ÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖÖ
ÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ
ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ
YYYYYYYYYYYYYYYYYYYYYyYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
gff0GJCOMZ65pMSy3A3cSAtjlKnb4fWzuHD5CFbusN4WhCT/tNxGNSpzvxd8GIDs
nY7exs9L230oCCpedVgcbayHCbkChEfoPzL1e1jXjgCwCTgt8GjeEFqc1gXNEaUn
O8AJ4VlR8fRszHm6yR0ZUBdY7UJddxQiYOzt0S1RLlECggEAbdcs4mZdqf3OjejJ
06oTPs9NRtAJVZlppSi7pmmAyaNpOuKWMoLPElDAQ3Q7VX26LlExLCZoPOVpdqDH
KbdmBEfTR4e11Pn9vYdu9/i6o10U4hpmf4TYKlqk10g1Sj21l8JATj/7Diey8scO
sAI1iftSg3aBSj8W7rxCxSezrENzuqw5D95a/he1cMUTB6XuravqZK5O4eR0vrxR
AvMzXk5OXrUEALUvt84u6m6XZZ0pq5XZxq74s8p/x1JvTwcpJ3jDKNEixlHfdHEZ
ZIu/xpcwD5gRfVGQamdcWvzGHZYLBFO1y5kAtL8kI9tW7WaouWVLmv99AyxdAaCB
Y5mBAQKCAQEAzU7AnorPzYndlOzkxRFtp6MGsvRBsvvqPLCyUFEXrHNV872O7tdO
GmsMZl+q+TJXw7O54FjJJvqSSS1sk68AGRirHop7VQce8U36BmI2ZX6j2SVAgIkI
9m3btCCt5rfiCatn2+Qg6HECmrCsHw6H0RbwaXS4RZUXD/k4X+sslBitOb7K+Y+N
Bacq6QxxjlIqQdKKPs4P2PNHEAey+kEJJGEQ7bTkNxCZ21kgi1Sc5L8U/IGy0BMC
PvJxssLdaWILyp3Ws8Q4RAoC5c0ZP0W2j+5NSbi3jsDFi0Y6/2GRdY1HAZX4twem
Q0NCedq1JNatP1gsb6bcnVHFDEGsj/35oQKCAQEAgmWMuSrojR/fjJzvke6Wvbox
FRnPk+6YRzuYhAP/YPxSRYyB5at++5Q1qr7QWn7NFozFIVFFT8CBU36ktWQ39MGm
cJ5SGyN9nAbbuWA6e+/u059R7QL+6f64xHRAGyLT3gOb1G0N6h7VqFT25q5Tq0rc
Lf/CvLKoudjv+sQ5GKBPT18+zxmwJ8YUWAsXUyrqoFWY/Tvo5yLxaC0W2gh3+Ppi
EDqe4RRJ3VKuKfZxHn5VLxgtBFN96Gy0+Htm5tiMKOZMYAkHiL+vrVZAX0hIEuRZ
JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
-----END RSA PRIVATE KEY-----" >> id_rsa

Now, if you try to reconstruct this key by removing the “obvious” garbage lines (the ones that are all repeated characters, some of which aren’t even valid base64 characters), it still isn’t a key – at least, openssl pkey doesn’t want anything to do with it. The key is very much still in there, though, as we shall soon see.
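
(The check itself is nothing exotic; with the cleaned-up blob saved to a file, the name here being illustrative, openssl simply refuses to parse it:)

# Exits non-zero with an "unable to load key" style error:
openssl pkey -in not-quite-a-key.pem -noout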

Using a gem I wrote and a quick bit of Ruby, we can extract a complete private key. The irb session looks something like this:

>> require "derparse"
>> b64 = <<EOF
MIIJKgIBAAKCAgEAxEVih1JGb8gu/Fm4AZh+ZwJw/pjzzliWrg4mICFt1g7SmIE2
TCQMKABdwd11wOFKCPc/UzRH/fHuQcvWrpbOSdqev/zKff9iedKw/YygkMeIRaXB
fYELqvUAOJ8PPfDm70st9GJRhjGgo5+L3cJB2gfgeiDNHzaFvapRSU0oMGQX+kI9
ezsjDAn+0Pp+r3h/u1QpLSH4moRFGF4omNydI+3iTGB98/EzuNhRBHRNq4oBV5SG
Pq/A1bem2ninnoEaQ+OPESxYzDz3Jy9jV0W/6LvtJ844m+XX69H5fqq5dy55z6DW
sGKn78ULPVZPsYH5Y7C+CM6GAn4nYCpau0t52sqsY5epXdeYx4Dc+Wm0CjXrUDEe
Egl4loPKDxJkQqQ/MQiz6Le/UK9vEmnWn1TRXK3ekzNV4NgDfJANBQobOpwt8WVB
rbsC0ON7n680RQnl7PltK9P1AQW5vHsahkoixk/BhcwhkrkZGyDIl9g8Q/Euyoq3
eivKPLz7/rhDE7C1BzFy7v8AjC3w7i9QeHcWOZFAXo5hiDasIAkljDOsdfD4tP5/
wSO6E6pjL3kJ+RH2FCHd7ciQb+IcuXbku64ln8gab4p8jLa/mcMI+V3eWYnZ82Yu
axsa85hAe4wb60cp/rCJo7ihhDTTvGooqtTisOv2nSvCYpcW9qbL6cGjAXECAwEA
AQKCAgEAjz6wnWDP5Y9ts2FrqUZ5ooamnzpUXlpLhrbu3m5ncl4ZF5LfH+QDN0Kl
KvONmHsUhJynC/vROybSJBU4Fu4bms1DJY3C39h/L7g00qhLG7901pgWMpn3QQtU
4P49qpBii20MGhuTsmQQALtV4kB/vTgYfinoawpo67cdYmk8lqzGzzB/HKxZdNTq
s+zOfxRr7PWMo9LyVRuKLjGyYXZJ/coFaobWBi8Y96Rw5NZZRYQQXLIalC/Dhndm
AHckpstEtx2i8f6yxEUOgPvV/gD7Akn92RpqOGW0g/kYpXjGqZQy9PVHGy61sInY
HSkcOspIkJiS6WyJY9JcvJPM6ns4b84GE9qoUlWVF3RWJk1dqYCw5hz4U8LFyxsF
R6WhYiImvjxBLpab55rSqbGkzjI2z+ucDZyl1gqIv9U6qceVsgRyuqdfVN4deU22
LzO5IEDhnGdFqg9KQY7u8zm686Ejs64T1sh0y4GOmGsSg+P6nsqkdlXH8C+Cf03F
lqPFg8WQC7ojl/S8dPmkT5tcJh3BPwIWuvbtVjFOGQc8x0lb+NwK8h2Nsn6LNazS
0H90adh/IyYX4sBMokrpxAi+gMAWiyJHIHLeH2itNKtAQd3qQowbrWNswJSgJzsT
JuJ7uqRKAFkE6nCeAkuj/6KHHMPsfCAffVdyGaWqhoxmPOrnVgECggEBAOrCCwiC
XxwUgjOfOKx68siFJLfHf4vPo42LZOkAQq5aUmcWHbJVXmoxLYSczyAROopY0wd6
Dx8rqnpO7OtZsdJMeBSHbMVKoBZ77hiCQlrljcj12moFaEAButLCdZFsZW4zF/sx
kWIAaPH9vc4MvHHyvyNoB3yQRdevu57X7xGf9UxWuPil/jvdbt9toaraUT6rUBWU
GYPNKaLFsQzKsFWAzp5RGpASkhuiBJ0Qx3cfLyirjrKqTipe3o3gh/5RSHQ6VAhz
gdUG7WszNWk8FDCL6RTWzPOrbUyJo/wz1kblsL3vhV7ldEKFHeEjsDGroW2VUFlS
asAHNvM4/uYcOSECggEBANYH0427qZtLVuL97htXW9kCAT75xbMwgRskAH4nJDlZ
IggDErmzBhtrHgR+9X09iL47jr7dUcrVNPHzK/WXALFSKzXhkG/yAgmt3r14WgJ6
5y7010LlPFrzaNEyO/S4ISuBLt4cinjJsrFpoo0WI8jXeM5ddG6ncxdurKXMymY7
EOF
>> b64 += <<EOF
gff0GJCOMZ65pMSy3A3cSAtjlKnb4fWzuHD5CFbusN4WhCT/tNxGNSpzvxd8GIDs
nY7exs9L230oCCpedVgcbayHCbkChEfoPzL1e1jXjgCwCTgt8GjeEFqc1gXNEaUn
O8AJ4VlR8fRszHm6yR0ZUBdY7UJddxQiYOzt0S1RLlECggEAbdcs4mZdqf3OjejJ
06oTPs9NRtAJVZlppSi7pmmAyaNpOuKWMoLPElDAQ3Q7VX26LlExLCZoPOVpdqDH
KbdmBEfTR4e11Pn9vYdu9/i6o10U4hpmf4TYKlqk10g1Sj21l8JATj/7Diey8scO
sAI1iftSg3aBSj8W7rxCxSezrENzuqw5D95a/he1cMUTB6XuravqZK5O4eR0vrxR
AvMzXk5OXrUEALUvt84u6m6XZZ0pq5XZxq74s8p/x1JvTwcpJ3jDKNEixlHfdHEZ
ZIu/xpcwD5gRfVGQamdcWvzGHZYLBFO1y5kAtL8kI9tW7WaouWVLmv99AyxdAaCB
Y5mBAQKCAQEAzU7AnorPzYndlOzkxRFtp6MGsvRBsvvqPLCyUFEXrHNV872O7tdO
GmsMZl+q+TJXw7O54FjJJvqSSS1sk68AGRirHop7VQce8U36BmI2ZX6j2SVAgIkI
9m3btCCt5rfiCatn2+Qg6HECmrCsHw6H0RbwaXS4RZUXD/k4X+sslBitOb7K+Y+N
Bacq6QxxjlIqQdKKPs4P2PNHEAey+kEJJGEQ7bTkNxCZ21kgi1Sc5L8U/IGy0BMC
PvJxssLdaWILyp3Ws8Q4RAoC5c0ZP0W2j+5NSbi3jsDFi0Y6/2GRdY1HAZX4twem
Q0NCedq1JNatP1gsb6bcnVHFDEGsj/35oQKCAQEAgmWMuSrojR/fjJzvke6Wvbox
FRnPk+6YRzuYhAP/YPxSRYyB5at++5Q1qr7QWn7NFozFIVFFT8CBU36ktWQ39MGm
cJ5SGyN9nAbbuWA6e+/u059R7QL+6f64xHRAGyLT3gOb1G0N6h7VqFT25q5Tq0rc
Lf/CvLKoudjv+sQ5GKBPT18+zxmwJ8YUWAsXUyrqoFWY/Tvo5yLxaC0W2gh3+Ppi
EDqe4RRJ3VKuKfZxHn5VLxgtBFN96Gy0+Htm5tiMKOZMYAkHiL+vrVZAX0hIEuRZ
EOF
>> der = b64.unpack("m").first
>> c = DerParse.new(der).first_node.first_child
>> version = c.value
=> 0
>> c = c.next_node
>> n = c.value
=> 80071596234464993385068908004931... # (etc)
>> c = c.next_node
>> e = c.value
=> 65537
>> c = c.next_node
>> d = c.value
=> 58438813486895877116761996105770... # (etc)
>> c = c.next_node
>> p = c.value
=> 29635449580247160226960937109864... # (etc)
>> c = c.next_node
>> q = c.value
=> 27018856595256414771163410576410... # (etc)

What I’ve done, in case you don’t speak Ruby, is take the two “chunks” of plausible-looking base64 data, chuck them together into a variable named b64, unbase64 it into a variable named der, pass that into a new DerParse instance, and then walk the DER value tree until I got all the values I need.

Interestingly, the q value actually traverses the “split” in the two chunks, which means that there’s always the possibility that there are lines missing from the key. However, since p and q are supposed to be prime, we can “sanity check” them to see if corruption is likely to have occurred:

>> require "openssl"
>> OpenSSL::BN.new(p).prime?
=> true
>> OpenSSL::BN.new(q).prime?
=> true

Excellent! The chances of a corrupted file producing valid-but-incorrect prime numbers isn’t huge, so we can be fairly confident that we’ve got the “real” p and q. Now, with the help of another one of my creations we can use e, p, and q to create a fully-operational battle key:

>> require "openssl/pkey/rsa"
>> k = OpenSSL::PKey::RSA.from_factors(p, q, e)
=> #<OpenSSL::PKey::RSA:0x0000559d5903cd38>
>> k.valid?
=> true
>> k.verify(OpenSSL::Digest::SHA256.new, k.sign(OpenSSL::Digest::SHA256.new, "bob"), "bob")
=> true

… and there you have it. One fairly redacted-looking private key brought back to life by maths and far too much free time.

Sorry Mr. Finn, I hope you’re not still using that key on anything Internet-facing.

What About Other Key Types?

EC keys are very different beasts, but they have much the same problems as RSA keys. A typical EC key contains both private and public data, and the public portion is twice the size – so only about 1/3 of the data in the key is private material. It is quite plausible that you can “redact” an EC key and leave all the actually private bits exposed.

What Do We Do About It?

In short: don’t ever try and redact real private keys. For documentation purposes, just put “KEY GOES HERE” in the appropriate spot, or something like that. Store your secrets somewhere that isn’t a public (or even private!) git repo.

Generating a “dummy” private key and sticking it in there isn’t a great idea, for different reasons: people have this odd habit of reusing “demo” keys in real life. There’s no need to encourage that sort of thing.


  1. Technically the pieces aren’t 100% aligned with the underlying DER, because of how base64 works. I felt it was easier to understand if I stuck to chopping up the base64, rather than decoding into DER and then chopping up the DER. 

,

Planet Linux AustraliaStewart Smith: Raptor Blackbird support: all upstream in op-build

Thanks to my most recent PR being merged, op-build v2.5 will have full support for the Raptor Blackbird! This includes support for the “IPL Monitor” that’s required to get fan control going.

Note that if you’re running Fedora 32 then you need some patches to buildroot to have it build, but if you’re building on something a little older, then upstream should build and work straight out of the box (err… git tree).
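
For anyone wanting to try it, the standard op-build flow should be all that is needed (a sketch following op-build’s usual build instructions; the defconfig name comes from the newly merged Blackbird support):

# Fetch op-build with its submodules and build a Blackbird PNOR image:
git clone --recursive https://github.com/open-power/op-build.git
cd op-build
. op-build-env
op-build blackbird_defconfig && op-build
# The flashable image lands under output/images/ when the build finishes.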

I also note that the work to get Secure Boot for an OS Kernel going is starting to make its way out for code reviews, so that’s something to look forward to (although without a TPM we’re going to need extra code).

Planet DebianRuss Allbery: DocKnot 3.04

This is a relatively small feature release of my tool for managing software documentation and releases.

I'm slowly moving all of my packages from Travis-CI to GitHub Workflows for automated CI. GitHub Workflows is much easier to configure and control, and I've been a bit worried about the future of Travis-CI since their acquisition. It seems unlikely that GitHub Workflows is going anywhere.

It would be nice to use a fully free software solution for CI, but there doesn't seem to be anything out there that's nearly as easy and straightforward to use, and I have neither the time nor the energy to cobble something together myself. The configuration is fairly straightforward and should be portable to any fully free solution that might materialize in the future.

Anyway, as part of that migration I needed to make some changes to DocKnot to generate status badges from GitHub Workflows instead of Travis-CI. This release includes those changes. There is a backward-incompatible change to make the semantics of the package metadata a bit more straightforward: vcs.travis needs to be changed to vcs.status.travis.

You can get the latest release from the DocKnot distribution page. Debian packages have been uploaded to my personal repository. I plan on uploading DocKnot to Debian proper once I change the metadata format to use YAML instead of relaxed JSON.

Planet DebianRuss Allbery: rra-c-util 8.2

This release of my general utility libraries and support code includes a large grab bag of fixes and improvements.

portable/system.h now defines explicit_bzero in terms of memset if it is not available. The memset version is unlikely to have the same security properties since the compiler may optimize it away, but that allows me to use explicit_bzero to erase security data where it is available.

For packages with Kerberos tests, generating a test krb5.conf file now works properly even if the system krb5.conf file does not set a default realm, and a krb5.conf file dropped into the test configuration directory now works properly. Thanks to Jeffrey Hutzelman for the latter fix.

For packages with PAM modules, the ENTRY and EXIT logging macros can now be used like function calls, and portable/pam.h now defines PAM_MAX_RESP_SIZE if it isn't defined.

Header ordering in some of the portability socket code has been restored to compatibility with a few ancient UNIX systems. This was accidentally broken by the clang-format reformatting. Thanks to Julien ÉLIE for the fix.

A few bugs in the test for SPDX license identifiers have been fixed.

Finally, this release fixes warnings with Clang 10 and GCC 10.

You can get the latest release from the rra-c-util distribution page.

Planet DebianRuss Allbery: C TAP Harness 4.7

This is a small bug fix release to my testing framework for C packages. It picks up a change to the test suite so that it won't break when C_TAP_VERBOSE is already set in the environment, and fixes new compilation warnings with GCC 10.

You can get the latest release from the C TAP Harness distribution page.

Planet DebianNorbert Preining: Upgrading AMD Radeon 5700 to 5700 XT via BIOS

Having recently switched from NVIDIA to AMD graphic cards, in particular a RX 5700, I found out that I can get myself a free upgrade to the RX 5700 XT variant without paying one Yen, by simply flashing a compatible 5700 XT BIOS onto the 5700 card. Not that this is something new, a detailed explanation can be found here.

The same article also gives a detailed technical explanation on the difference between the two cards. The 5700 variant has less stream processors (2304 against 2560 in the XT variant), and lower power limits and clock speeds. Other than this they are based on the exact same chip layout (Navi 10), and with the same amount and type of memory—8 GB GDDR6.

Flashing the XT BIOS onto the plain 5700 will not changes the number of stream processors, but power limits and clock speeds are raised to the same level of the 5700 XT, providing approximately a 7% gain without any over-clocking and over-powering, and potentially more by raising voltage etc. Detailed numbers can be found in the linked article above.

The first step in this “free upgrade” is to identify one’s own card correctly, best with device id and subsystem id, and then find the correct BIOS. Lots of BIOS dumps are provided in the BIOS database (link already restricting to 5700 XT BIOS). I used CPU-Z (a Windows program) to determine these items. In my case I got 1002 731F - 1462 3811 for the complete device id. The card is a MSI RX 5700 8 GB Mech OC, so I found the following alternative BIOS for MSI RX 5700 XT 8 GB Mech OC. Unfortunately, it seems that MSI is distinguishing 5700 and 5700 XT by the subsystem id, because the XT variant gives 1002 731F - 1462 3810 for the complete device id, meaning that the last digit is 1 off compared to mine (3811 versus 3810). And indeed, trying to flash this video BIOS the normal way (using the Windows version) ended in a warning that the subsystem id is different. A bit of searching led to a thread in the TechPowerup Fora and this post explaining how to force the flashing in this case.

Disclaimer: The following might brick your graphic card, you are doing this on your own risk!

Necessary software: the AMD VBFlash utility (the command-line amdvbflash, or amdvbflashWin.exe for the GUI) and the correct BIOS rom file for the target card.

I did all the flashing and checking under Windows, but only because I realized too late that there is a fully up-to-date flashing program for Linux that exhibits the same functionality. Also, I didn’t know how to get the device id, since the current AMD ROCm tools seem not to provide this data. If you are lucky and the device ids for your card are the same for both 5700 and 5700 XT variants, then you can use the graphical client (amdvbflashWin.exe), but if there is a difference, the command line is necessary. After unpacking the AMD Flash program and getting the correct BIOS rom file, the steps taken on Windows are (the very same steps can be taken on Linux):

  • Start a command line shell (cmd or powershell) with Administrator rights (on Linux become root)
  • Save your current BIOS in case you need to restore it with amdvbflash -s 0 oldbios.rom (this can also be done from the GUI application)
  • Unlock the BIOS rom with amdvbflash -unlockrom 0
  • Force flash the new BIOS with amdvbflash -f -p 0 NEW.rom (where NEW.rom is the 5700 XT BIOS rom file)
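
Collected in one place for convenience, a sketch of the very same steps (adapter index 0 and the file names as above; run with Administrator/root rights):

amdvbflash -s 0 oldbios.rom      # save the current BIOS first
amdvbflash -unlockrom 0          # unlock the BIOS rom
amdvbflash -f -p 0 NEW.rom       # force-flash the 5700 XT BIOS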

This should succeed in both cases. After that, shut down and restart your computer, and you should be greeted with an RX 5700 XT card, without twisting a single screw. Starting Windows for the first time gave some flickering, because the driver for the “new” card was installed. On Linux the system auto-detects the card and everything works out of the box. Very smooth.

Finally, a word of warning: Don’t do these kinds of things if you are not ready to pay the price of a bricked GPU in case something goes wrong! Everything is at your own risk!

Let me close with a before/after image: most of the fields are identical, but the default/GPU clocks, at both normal and boost levels, see a considerable improvement 😉

Planet DebianMichael Stapelberg: Linux package managers are slow

I measured how long the package managers of the most popular Linux distributions take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix B for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               107 MB   29s                3.7 MB/s
NixOS          Nix                15 MB   14s                1.1 MB/s
Debian         apt                15 MB    4s                3.7 MB/s
Arch Linux     pacman            6.5 MB    3s                2.1 MB/s
Alpine         apk                10 MB    1s               10.0 MB/s

qemu (large C program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               266 MB   1m8s               3.9 MB/s
Arch Linux     pacman            124 MB   1m2s               2.0 MB/s
Debian         apt               159 MB   51s                3.1 MB/s
NixOS          Nix               262 MB   38s                6.8 MB/s
Alpine         apk                26 MB   2.4s              10.8 MB/s


The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases; otherwise it would hopefully have been removed altogether. Still, the amount of metadata seems excessive for installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence executing arbitrary package maintainer-provided code (hooks and triggers), all tested package managers need to install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy); see the sketch after this list
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
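
As a minimal sketch of the first point, making package contents usable by mounting an image instead of extracting an archive (the image file name and mount point here are hypothetical):

# Loop-mount a (hypothetical) SquashFS package image read-only.
mkdir -p /ro/ack-3.0.0
mount -o loop,ro -t squashfs ack-3.0.0.squashfs /ro/ack-3.0.0
# The package contents are immediately usable in place:
/ro/ack-3.0.0/bin/ack --version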

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name         scope    package file format                 hooks/triggers
AppImage     apps     image: ISO9660, SquashFS            no
snappy       apps     image: SquashFS                     yes: hooks
FlatPak      apps     archive: OSTree                     no
0install     apps     archive: tar.bz2                    no
nix, guix    distro   archive: nar.{bz2,xz}               activation script
dpkg         distro   archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm          distro   archive: cpio.{bz2,lz,xz}           scriptlets
pacman       distro   archive: tar.xz                     install
slackware    distro   archive: tar.{gz,xz}                yes: doinst.sh
apk          distro   archive: tar.gz                     yes: .post-install
Entropy      distro   archive: tar.bz2                    yes
ipkg, opkg   distro   archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

There are a couple of recent developments going in the same direction:

Appendix B: measurement details

ack


Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu


Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet DebianSven Hoexter: Quick and Dirty: masquerading / NAT with nftables

Since nftables is now the new default, a short note to myself on how to set up masquerading, like the usual NAT setup you use on a gateway.

nft add table nat
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif wlan0 masquerade

In this case the wlan0 is basically the "WAN" interface, because I use an old netbook as a wired to wireless network adapter.
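
To make this survive a reboot on Debian, a minimal sketch (assuming the nftables package with its systemd unit, which loads /etc/nftables.conf at boot via nft -f):

# Persist the currently loaded ruleset and enable loading it at boot.
nft list ruleset > /etc/nftables.conf
systemctl enable nftables.service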

Planet DebianMichael Stapelberg: a new distri linux (fast package management) release

I just released a new version of distri.

The focus of this release lies on:

  • a better developer experience, allowing users to debug any installed package without extra setup steps

  • performance improvements in all areas (starting programs, building distri packages, generating distri images)

  • better tooling for keeping track of upstream versions

See the release notes for more details.

The distri research linux distribution project was started in 2019 to research whether a few architectural changes could enable drastically faster package management.

While the package managers in common Linux distributions (e.g. apt, dnf, …) top out at data rates of only a few MB/s, distri effortlessly saturates 1 Gbit, 10 Gbit and even 40 Gbit connections, resulting in fast installation and update speeds.

Krebs on SecurityU.S. Secret Service: “Massive Fraud” Against State Unemployment Insurance Programs

A well-organized Nigerian crime ring is exploiting the COVID-19 crisis by committing large-scale fraud against multiple state unemployment insurance programs, with potential losses in the hundreds of millions of dollars, according to a new alert issued by the U.S. Secret Service.

A memo seen by KrebsOnSecurity that the Secret Service circulated to field offices around the United States on Thursday says the ring has been filing unemployment claims in different states using Social Security numbers and other personally identifiable information (PII) belonging to identity theft victims, and that “a substantial amount of the fraudulent benefits submitted have used PII from first responders, government personnel and school employees.”

“It is assumed the fraud ring behind this possesses a substantial PII database to submit the volume of applications observed thus far,” the Secret Service warned. “The primary state targeted so far is Washington, although there is also evidence of attacks in North Carolina, Massachusetts, Rhode Island, Oklahoma, Wyoming and Florida.”

The Secret Service said the fraud network is believed to consist of hundreds of “mules,” a term used to describe willing or unwitting individuals who are recruited to help launder the proceeds of fraudulent financial transactions.

“In the state of Washington, individuals residing out-of-state are receiving multiple ACH deposits from the State of Washington Unemployment Benefits Program, all in different individuals’ names with no connection to the account holder,” the notice continues.

The Service’s memo suggests the crime ring is operating in much the same way as crooks who specialize in filing fraudulent income tax refund requests with the states and the U.S. Internal Revenue Service (IRS), a perennial problem that costs the states and the U.S. Treasury hundreds of millions of dollars in revenue each year.

In those schemes, the scammers typically recruit people — often victims of online romance scams or those who also are out of work and looking for any source of income — to receive direct deposits from the fraudulent transactions, and then forward the bulk of the illicit funds to the perpetrators.

A federal fraud investigator who spoke with KrebsOnSecurity on condition of anonymity said many states simply don’t have enough controls in place to detect patterns that might help better screen out fraudulent unemployment applications, such as looking for multiple applications involving the same Internet addresses and/or bank accounts. The investigator said in some states fraudsters need only to submit someone’s name, Social Security number and other basic information for their claims to be processed.

Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said financial institutions in her state earlier this week started seeing a flood of high-dollar transfers tied to employment claims filed for people in Washington, with many transfers in the $9,000 to $20,000 range.

“It’s been unbelievable to see the huge number of bogus filings here, and in such large amounts,” Dodd said, noting that one fraudulent claim sent to a mule in Oklahoma was for more than $29,000. “I’m proud of our bankers because they’ve managed to stop a lot of these transfers, but some are already gone. Most mules seem to have [been involved in] romance scams.”

While it might seem strange that people in Washington would be asking to receive their benefits via ACH deposits at a bank in Oklahoma, Dodd said the people involved seem to have a ready answer if anyone asks: One common refrain is that the claimants live in Washington but were riding out the Coronavirus pandemic while staying with family in Oklahoma.

The Secret Service alert follows news reports by media outlets in Washington and Rhode Island about millions of dollars in fraudulent unemployment claims in those states. On Thursday, The Seattle Times reported that the activity had halted unemployment payments for two days after officials found more than $1.6 million in phony claims.

“Between March and April, the number of fraudulent claims for unemployment benefits jumped 27-fold to 700,” the state Employment Security Department (ESD) told The Seattle Times. The story noted that the ESD’s fraud hotline has been inundated with calls, and received so many emails last weekend that it temporarily shut down.

WPRI in Rhode Island reported on May 4 that the state’s Department of Labor and Training has received hundreds of complaints of unemployment insurance fraud, and that “the number of purportedly fraudulent accounts is keeping pace with the unprecedented number of legitimate claims for unemployment insurance.”

The surge in fraud comes as many states are struggling to process an avalanche of jobless claims filed as a result of the Coronavirus pandemic. The U.S. government reported Thursday that nearly three million people filed unemployment claims last week, bringing the total over the last two months to more than 36 million. The Treasury Department says unemployment programs delivered $48 billion in payments in April alone.

A few of the states listed as key targets of this fraud ring are experiencing some of the highest levels of unemployment claims in the country. Washington has seen nearly a million unemployment claims, with almost 30 percent of its workforce currently jobless, according to figures released by the U.S. Chamber of Commerce. Rhode Island is even worse off, with 31.4 percent of its workforce filing for unemployment, the Chamber found.

Dodd said she recently heard from an FBI agent who was aware of a company in Oklahoma that has seven employees and has received notices of claims on several hundred persons obviously not employed there.

“Oklahoma will likely be seeing the same thing,” she said. “There must be other states that are getting filings on behalf of Oklahomans.”

Indeed, the Secret Service says this scam is likely to affect all states that don’t take additional steps to weed out fraudulent filings.

“The banks targeted have been at all levels including local banks, credit unions, and large national banks,” the Secret Service alert concluded. “It is extremely likely every state is vulnerable to this scheme and will be targeted if they have not been already.”

Update, May 16, 1:20 p.m. ET: Added comments from the Oklahoma Bankers Association.

Planet DebianLucas Kanashiro: Quarantine times

After quite some time without publishing anything here, I decided to share the latest events. It is a hard time for most of us but with all this time at home, one can also do great things.

I would like to start with the wonderful idea the Debian Brasil community had! Why not create an online Debian related conference to keep people’s minds busy and also share knowledge? After brainstorming, we came up with our online conference called #FiqueEmCasaUseDebian (in English it would be #StayHomeUseDebian). It started on May 3rd and will last until May 30th (yes, one month)! Every weekday, we have one online talk at night, and on every Saturday, a Debian packaging workshop. The feedback so far has been awesome, and the Brazilian Debian community is reaching more people than usual at our regular conferences (as you might have imagined, Brazil is huge and it is hard to bring people to the same place). At the end of the month, we will have the first MiniDebConf online and I hope it will be as successful as our experience here in Brazil.

Another thing that deserves a highlight is the fact that I became an Ubuntu Core Developer this month; yay! After 9 months of working almost daily on the Ubuntu Server, I was able to get my upload rights to the Ubuntu archive. I was tired of asking for sponsorship, and probably my peers were tired of me too.

I could spend more time here complaining about the Brazilian government but I think it isn’t worth it. Let’s try to do something useful instead!

,

CryptogramFriday Squid Blogging: Vegan "Squid" Made from Chickpeas

It's beyond Beyond Meat. A Singapore company wants to make vegan "squid" -- and shrimp and crab -- from chickpeas.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: Let’s celebrate Anna!

Today is graduation at Washington University, and just like many other places, the ceremonies are a lot more virtual and surreal than in other years. For Anna, today marks her graduation from Biomedical Engineering with a BSc. The McKelvey School of Engineering put a Zoom meeting together yesterday which was nice, and there is something more virtual here. Hopefully a real-life commencement can take place in a year—the May 30, 2021, date has been set. The university also sent out a little commencement site/video which was cute. But at the end of the day, online-only still falls short of the real deal, as we all know too well by now.

During those years, just about the only thing I ever really tweeted about appears to be soccer-related. As it should be, because ball is life, as we all know. Here is one from 1 1/2 years ago when her Club Team three-peated in their NIRSA division:

And that opens what may be the best venue for mocking Anna: this year, with her as a senior and co-captain, the team actually managed to lose a league game (a shocking first in these years) and to drop the final. I presume they anticipated that all we would talk about around now is The Last Dance and three-peats, and left it at that. Probably wise.

Now just this week, and hence days before graduating with her B.Sc., Anna was also addressed as Dr Eddelbuettel for the first time. A little prematurely, I may say, but not too shabby to be in print already!

But on the topic of congratulations and what comes next, this tweet was very sweet:

As was this, which marked another impressive score:

So big thanks from all of us to WashU for being such a superb environment for Anna for those four years, and especially everybody at the Pappu Lab for giving Anna a home and base to start a research career.

And deepest and most sincere congratulations to Anna before the next adventure starts….

Planet DebianIngo Juergensmann: XMPP: ejabberd Project on the-federation.info

For those interested in alternative social networks there is a website called the-federation.info, which collects some statistics of "The Fediverse". The biggest part of the fediverse is Mastodon, but there are other projects (or parts) like Friendica or Matrix that do "federation". One of the oldest projects doing federation is XMPP. You have been able to find some Prosody servers there for some time now, because there is a Prosody module "mod_nodeinfo2" that can be used. But for ejabberd there is no such module (yet?) so far, which makes it a little bit difficult to get listed on the-federation.info.

Some days ago I wrote a small script to export the needed values to x-nodeinfo2, which is queried by the-federation.info. It's surely not the best script or solution for that job and is currently limited to ejabberd servers that use a PostgreSQL database as backend, although it should be fairly easy to adapt the script for use with MySQL. Well, at least it does its job, at least as long as there is no native ejabberd module for this task.

You can find the script on Github: https://github.com/ingoj/ejabberd-nodeinfo2
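
For a flavour of the kind of query such a script runs, here is a minimal sketch, assuming ejabberd's stock PostgreSQL schema (the database and role names are placeholders):

# Count registered users from ejabberd's PostgreSQL backend.
psql -U ejabberd -d ejabberd -t -A -c 'SELECT count(*) FROM users;'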

Enjoy it and register your ejabberd server on the-federation.info! :-)


Planet DebianJunichi Uekawa: Bought Canon TS3300 Printer.

Bought a Canon TS3300 Printer. I haven't bought a printer in 20 years. 5 years ago I think I would have used Google Cloud Print, which took forever uploading and downloading huge print data to the cloud. 20 years ago I would have used LPR and LPD, admiring the PostScript backtraces. Configuration was done using an Android app; the printer could be connected via WiFi, and then configuration through the app allowed the printer to be connected to the home WiFi network. Feels modern. It was recognized as an IPPS device by Chrome OS (using CUPS?). No print server necessary. Amazing.
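
For the curious, a hedged sketch of setting up such a printer by hand on a plain Linux box via CUPS's IPP Everywhere support (the queue name and mDNS hostname are placeholders):

# Add an IPP Everywhere queue pointing at the printer's mDNS name.
lpadmin -p TS3300 -E -v ipp://TS3300.local/ipp/print -m everywhere
lpstat -p TS3300   # confirm the queue exists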

CryptogramOn Marcus Hutchins

Long and nuanced story about Marcus Hutchins, the British hacker who wrote most of the Kronos malware and also stopped WannaCry in real time. Well worth reading.

Worse Than FailureError'd: Destination Undefined

"It's good that I'm getting off at LTH, otherwise God knows what'd have happened to me," Elliot B. writes.

 

"Ummmm...Thanks for the 'great' deal, FedEx?" writes Ginnie.

 

David wrote, "Sure am glad that they have a men's version of this...I have so many things to do with my kitchen hands."

 

"I mean, the fact that you can't ship to undefined isn't wrong, but it's not right either," Kevin K. wrote.

 

Peter G. writes, "This must have been written by physicists, it's within +/- 10% of being correctly sorted."

 

"As if the thought of regular enemas don't make me clench my cheeks enough, there's this," wrote Quentin G.

 


,

Planet DebianIustin Pop: New internet provider ☺

Note: all this is my personal experience, on my personal machines, and I don’t claim it to be the absolute truth. Also, I don’t directly call out names, although if you live in Switzerland it’s pretty easy to guess who the old provider is from some hints.

For a long time, I wanted to move away from my current (well, now past) provider, for a multitude of reasons. The main one being that the company is a very classic company, with a classic support system that doesn't work very well - I had troubles with their billing system that left me out cold without internet for 15 days - but for the recent few years they were mostly OK, and changing to a different provider would have meant routing a really long Ethernet cable around the apartment, so I kept postponing it. Yes, self-inflicted pain, I know.

Until the entire work-from-home thing, when the usually stable connection started degrading in a very linear fashion day by day (this is a graph that basically reflects download bandwidth):

1+ month download bandwidth test

At first, I didn’t realise this, as even 100Mbps is still fast enough. But once the connection went below (I believe) 50Mbps, it became visible in day to day work. And since I work daily from home… yeah. Not fun.

So I started doing speedtest.net - and oh my, ~20Mbps was a good result, usually 12-14Mbps. On a wired connection. On a connection that officially is supposed to be 600Mbps down. The upload speed was spot on, so I didn’t think it was my router, but:

  • rebooted everything; yes, including the cable modem.
  • turned off firewall. and on. and off. of course, no change.
  • changed the Ethernet port used on my firewall.
  • changed firewall rules.
  • rebooted again.

Nothing helped. Once in a blue moon, speedtest would give me 100Mbps, but like once every two days; then it would be back to showing 8Mbps. Eight! It got to the point where even apt update was tediously slow, and kernel downloads took ages…

The official instructions for dealing with bad internet are a joke:

  • turn off everything.
  • change cable modem from bridged mode to router mode - which is not how I want the modem to work, so the test is meaningless; also, how can router mode be faster⁈
  • disconnect everything else than the test computer.
  • download a program from somewhere (Windows/Mac/iOS, with a browser version too), and run it. yay for open platform!

And the best part: “If you are not satisfied with the results, read our internet optimisation guide. If you are still not happy, use our community forums or our social platforms.”

Given that it was a decline over 3 weeks, that I don't know of any computer component that would degrade this steadily without throwing any other errors, and that my upload speed was all good, I assumed it was the provider. Maybe I was wrong, but I had wanted to do this anyway for a long while, so I went through the “find how to route the cable, check if the other provider's socket is good, order service, etc.” dance, and less than a week later, I had the other connection.

Now, of course, bandwidth works as expected:

1+ month updated bandwidth test

Both download and upload are fine (the graph above is just download). Latency is also much better, towards many parts of the internet that matter. But what is shocking is the difference in jitter to some external hosts I care about. On the previous provider, a funny thing was that both outgoing and incoming pings had more jitter and packet loss when done directly (IPv4 to IPv4) than when done over a VPN. This doesn't make sense, since the VPN is just overhead on top of IPv4, but the graphs show it, and what I think happens is that a VPN flow is “cached” in the provider's routers, whereas a simple ping packet is not. But the fact that there's enough jitter for a ping to a not-very-far host doesn't make me happy.

Examples, outgoing:

Outgoing smokeping to public IPv4
Outgoing smokeping over VPN

And incoming:

Incoming smokeping to public IPv4
Incoming smokeping over VPN

Both incoming and outgoing show this weirdness - more packet loss and more jitter over VPN. Again, this is not a problem in practice, or not much, but makes me wonder what other shenanigans happen behind the scenes. You can also see clearly when the “work from home” traffic entered the picture and started significantly degrading my connection, even over the magically “better” VPN connection.

Switching to this week’s view shows the (in my opinion) dramatic improvement in consistency of the connection:

Outgoing current week smokeping to public IPv4
Outgoing current week smokeping over VPN

No more packet loss, no more jitter. You can also see my VPN being temporarily down during provider switchover because my firewall was not quite correct for a moment.

And the last drill down, at high resolution, one day before and one day after switchover. Red is VPN, blue is plain IPv4, yellow is the missing IPv6 connection :)

Incoming old:

Incoming 1-day smokeping, old provider

and new:

Incoming 1-day smokeping, new provider

Outgoing old:

Outgoing 1-day smokeping, old provider

and new:

Outgoing 1-day smokeping, new provider

This is what I expect, ping-over-VPN should of course be slower than plain ping. Note that incoming and outgoing have slightly different consistency, but that is fine for me :) The endpoints doing the two tests are different, so this is expected. Reading the legend on the graphs for the incoming connection (similar story for outgoing):

  • before/after plain latency: 24.1ms→18.8ms, ~5.3ms gained, ~22% lower.
  • before/after packet loss: ~7.0% → 0.0%, infinitely better :)
  • before/after latency standard deviation: ~2.1ms → 0.0ms.
  • before/after direct-vs-VPN difference: inconsistent → consistent 0.4ms faster for direct ping.

So… to my previous provider: it can be done better. Or at least, allow people easier ways to submit performance issue problems.

For me, the moral of the story is that I should have switched a couple of years ago, instead of being lazy. And that I’m curious to see how IPv6 traffic will differ, if at all :)

Take care, everyone! And thanks for looking at these many graphs :)

CryptogramUS Government Exposes North Korean Malware

US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February.

The first of the new malware variants, COPPERHEDGE, is described as a Remote Access Tool (RAT) "used by advanced persistent threat (APT) cyber actors in the targeting of cryptocurrency exchanges and related entities."

This RAT is known for its capability to help the threat actors perform system reconnaissance, run arbitrary commands on compromised systems, and exfiltrate stolen data.

TAINTEDSCRIBE is a trojan that acts as a full-featured beaconing implant with command modules and designed to disguise as Microsoft's Narrator.

The trojan "downloads its command execution module from a command and control (C2) server and then has the capability to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration."

Last but not least, PEBBLEDASH is yet another North Korean trojan acting like a full-featured beaconing implant and used by North Korean-backed hacking groups "to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration."

It's interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal -- and probably required some complicated declassification maneuvering.

Me, I like reading the codenames.

Lots more on the US-CERT website.

CryptogramAttack Against PC Thunderbolt Port

The attack requires physical access to the computer, but it's pretty devastating:

On Thunderbolt-enabled Windows or Linux PCs manufactured before 2019, his technique can bypass the login screen of a sleeping or locked computer -- and even its hard disk encryption -- to gain full access to the computer's data. And while his attack in many cases requires opening a target laptop's case with a screwdriver, it leaves no trace of intrusion and can be pulled off in just a few minutes. That opens a new avenue to what the security industry calls an "evil maid attack," the threat of any hacker who can get alone time with a computer in, say, a hotel room. Ruytenberg says there's no easy software fix, only disabling the Thunderbolt port altogether.

"All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop," says Ruytenberg, who plans to present his Thunderspy research at the Black Hat security conference this summer­or the virtual conference that may replace it. "All of this can be done in under five minutes."

Lots of details in the article above, and in the attack website. (We know it's a modern hack, because it comes with its own website and logo.)

Intel responds.

EDITED TO ADD (5/14): More.

CryptogramNew US Electronic Warfare Platform

The Army is developing a new electronic warfare pod capable of being put on drones and on trucks.

...the Silent Crow pod is now the leading contender for the flying flagship of the Army's rebuilt electronic warfare force. Army EW was largely disbanded after the Cold War, except for short-range jammers to shut down remote-controlled roadside bombs. Now it's being urgently rebuilt to counter Russia and China, whose high-tech forces --- unlike Afghan guerrillas -- rely heavily on radio and radar systems, whose transmissions US forces must be able to detect, analyze and disrupt.

It's hard to tell what this thing can do. Possibly a lot, but it's all still in prototype stage.

Historically, cyber operations occurred over landline networks and electronic warfare over radio-frequency (RF) airwaves. The rise of wireless networks has caused the two to blur. The military wants to move away from traditional high-powered jamming, which filled the frequencies the enemy used with blasts of static, to precisely targeted techniques, designed to subtly disrupt the enemy's communications and radar networks without their realizing they're being deceived. There are even reports that "RF-enabled cyber" can transmit computer viruses wirelessly into an enemy network, although Wojnar declined to confirm or deny such sensitive details.

[...]

The pod's digital brain also uses machine-learning algorithms to analyze enemy signals it detects and compute effective countermeasures on the fly, instead of having to return to base and download new data to human analysts. (Insiders call this cognitive electronic warfare). Lockheed also offers larger artificial intelligences to assist post-mission analysis on the ground, Wojnar said. But while an AI small enough to fit inside the pod is necessarily less powerful, it can respond immediately in a way a traditional system never could.

EDITED TO ADD (5/14): Here are two reports on Russian electronic warfare capabilities.

Worse Than FailureCodeSOD: I Fixtured Your Test

When I was still doing consulting, I had a client that wanted to create One App To Rule Them All: all of their business functions (and they had many) available in one single Angular application. They hoped each business unit would have their own module, but the whole thing could be tied together into one coherent experience by setting global stylesheets.

I am a professional, so I muted myself before I started laughing at them. I did give them some guidance, but also tried to set expectations. Ignore the technical challenges. The political challenges of getting every software team in the organization, the contracting teams they would bring in, the management teams that needed direction, all headed in the same direction were likely insurmountable.

Brian isn’t in the same situation, but Brian has been receiving code from a team of contractors from Initech. The Initech contractors have been a problem from the very start of the project. Specifically, they are contractors, and very expensive ones. They know that they are very expensive, and thus have concluded that they must also be very smart. Smarter than Brian and his peers.

So, when Brian does a code review and finds their code doesn’t even approach his company’s basic standards for code quality, they ignore him. When he points out that they’ve created serious performance problems by refusing to follow his organization’s best practices, they ignore him and bill a few extra hours that week. When the project timeline slips, and he starts asking about their methodology, they refuse to tell him a single thing about how they work beyond, “We’re Agile.”

To the shock of the contractors and the management paying the bills, sprint demos started to fail. QA dashboards went red. Implementation of key features got pushed back farther and farther. In response, management decided to give Brian more supervisory responsibility over the contractors, starting with a thorough code review.

He’s been reviewing the code in detail, and has this to say:

Phrases like ‘depressingly awful’ are likely to feature in my final report (the review is still in progress) but this little gem from testing jumped out at me.

  it('should detect change', () => {
    fixture.detectChanges();
    const dt: OcTableComponent = fixture.componentInstance.dt;
    expect(dt).toEqual(fixture.componentInstance.dt);
  }); 

This is a Jasmine unit test, which takes a behavioral approach to testing. The it method expects a string describing what we expect “it” to do (“it”, in this context, being one unit of a larger feature), and a callback function which implements the actual test.

Right at the start, it('should detect change',…) reeks of a bad unit test. Doubly so when we see what changes they’re detecting: fixture.detectChanges()

Angular, when running in a browser context, automatically syncs the DOM elements it manages with the underlying model. You can’t do that in a unit test, because there isn’t an actual DOM to interact with, so Angular’s unit test framework allows you to trigger that by calling detectChanges.

Essentially, you invoke this any time you do something that’s supposed to impact the UI state from a unit test, so that you can accurately make assertions about the UI state at that point. What you don’t do is just, y’know, invoke it for no reason. It doesn’t hurt anything, it’s just not useful.

But it’s the meat of the test where things really go awry.

We set the variable dt to be equal to fixture.componentInstance.dt. Then we assert that dt is equal to fixture.componentInstance.dt. Which it clearly is, because we just set it.

The test is named “should detect changes”, which gives us the sense that they were attempting to unit test the Angular test fixture’s detectChanges method. That’s worse than writing unit tests for built-in methods, it’s writing a unit test for a vendor-supplied test fixture: testing the thing that helps you test.

But then we don’t change anything. In the end, this unit test simply asserts that the assignment operator works as expected. So it’s also worse than a test for a built-in method, it’s a test for a basic language feature.

This unit test manages, in a few compact lines, to not simply be bad, but is “not even wrong”. This is the kind of code which populates the entire code base. As Brian writes:

I still have about half this review to go and I dread to think what other errors I may find.


Planet DebianNorbert Preining: Switching from NVIDIA to AMD (including tensorflow)

I have been using my GeForce 1060 extensively for deep learning, both with Python and R. But the perennial pain of dealing with the closed source drivers and kernel updates, paired with the collapse of my computer’s PSU and/or GPU, made me decide to finally switch to an AMD graphics card and the open source stack. And you know what, within half a day I had everything running, including Tensorflow. Yeah to Open Source!

Preliminaries

So what is the starting point: I am running Debian/unstable with an AMD Radeon 5700. First of all I purged all NVIDIA related packages, and there are a lot of them, I have to say. Be sure to search for nv and nvidia and get rid of all related packages. For safety I rebooted and checked again that no NVIDIA kernel modules were loaded.

Firmware

Debian ships the package firmware-amd-graphics, but this is not enough for the current kernel and current hardware. Better is to clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git and copy everything from the amdgpu directory to /lib/firmware/amdgpu.

I didn’t do that at first, and booting the kernel then hung during the switch to the AMD framebuffer. If you see this behaviour, your firmware files are too old.

Kernel

The advantage of an open source driver that lives in the kernel is that you don’t have to worry about incompatibilities (unlike with NVIDIA, where every time a new kernel comes out the driver needs patching). For recent AMD GPUs you need a rather new kernel; I have 5.6.0 and 5.7.0-rc5 running. Make sure that you have all the necessary kernel config options turned on if you compile your own kernels. In my case this is

CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_USERPTR=y
CONFIG_DRM_AMD_ACP=y
CONFIG_DRM_AMD_DC=y
CONFIG_DRM_AMD_DC_DCN=y
CONFIG_HSA_AMD=y

When installing the kernel, be sure that the firmware is already updated so that the correct firmware is copied into the initrd.

Support programs and libraries

All the following is more or less an excerpt from the ROCm Installation Guide!

AMD provides a Debian/Ubuntu APT repository for software as well as kernel sources. Put the following into /etc/apt/sources.list.d/rocm.list:

deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main

and also put the public key of the rocm repository into /etc/apt/trusted.gpg.d/rocm.asc.

After that apt-get update should work.
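
A sketch of the key step (the key URL is an assumption based on the ROCm repository layout of this era; verify it against the installation guide, and run as root):

# Fetch the ROCm signing key so apt trusts the repository.
wget -qO /etc/apt/trusted.gpg.d/rocm.asc https://repo.radeon.com/rocm/rocm.gpg.key
apt-get update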

I did install rocm-dev-3.3.0, rocm-libs-3.3.0, hipcub-3.3.0, miopen-hip-3.3.0 (and of course the dependencies), but not rocm-dkms which is the kernel module. If you have a sufficiently recent kernel (see above), the source in the kernel itself is newer.

The libraries and programs are installed under /opt/rocm-3.3.0, and to make the libraries available to Tensorflow (see below) and other programs, I added /etc/ld.so.conf.d/rocm.conf with the following content:

/opt/rocm-3.3.0/lib/

and run ldconfig as root.

Last but not least, add a udev rule that is normally installed by rocm-dkms, put the following into /etc/udev/rules.d/70-kfd.rules:

SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"

This allows users from the video group to access the GPU.
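
To verify this, a sketch: add your user to the video group and check that the ROCm stack sees the GPU (rocminfo ships with the rocm packages installed above; run usermod as root and re-login afterwards):

usermod -aG video yourusername
/opt/rocm-3.3.0/bin/rocminfo | grep -i gfx   # Navi 10 should show up as gfx1010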


Up to here you should be able to boot into the system and have X running on top of AMD GPU, including OpenGL acceleration and direct rendering:

$ glxinfo
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
...

Tensorflow

Thinking about how hard it was to get the correct libraries to get Tensorflow running on GPUs (see here and here), it is a pleasure to see that with open source all this pain is relieved.

There is already work done to make Tensorflow run on ROCm, the tensorflow-rocm project. They provide up-to-date PyPI packages, so a simple

pip3 install tensorflow-rocm

is enough to get Tensorflow running with Python:

>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
2020-05-14 12:07:19.590169: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libhip_hcc.so
...
2020-05-14 12:07:19.711478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7444 MB memory) -> physical GPU (device: 0, name: Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT], pci bus id: 0000:03:00.0)
3
>>>

Tensorflow for R

Installation is trivial again since there is a tensorflow package for R; just run (as a user that is in the group staff, which normally owns /usr/local/lib/R)

$ R
...
> install.packages("tensorflow")
..

Do not call the R function install_tensorflow() since Tensorflow is already installed and functional!

With that done, R can use the AMD GPU for computations:

$ R
...
> library(tensorflow)
> tf$constant("Hellow Tensorflow")
2020-05-14 12:14:24.185609: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libhip_hcc.so
...
2020-05-14 12:14:24.277736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7444 MB memory) -> physical GPU (device: 0, name: Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT], pci bus id: 0000:03:00.0)
tf.Tensor(b'Hellow Tensorflow', shape=(), dtype=string)
>

AMD Vulkan

From the Vulkan home page:

Vulkan is a new generation graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs used in a wide variety of devices from PCs and consoles to mobile phones and embedded platforms.

Several games are using the Vulkan API if available and it is said to be more efficient.

There are Vulkan libraries for Radeon shipped with Mesa, in the Debian package mesa-vulkan-drivers, but my guess is that they are a bit outdated.

The AMDVLK project provides the latest version, and to my surprise it was rather easy to install, again by following the advice in their README. The steps are basically (always follow what is written for Ubuntu):

  • Install the necessary dependencies
  • Install the Repo tool
  • Get the source code
  • Make 64-bit and 32-bit builds
  • Copy driver and JSON files (see below for what I did differently!)

All as described in the linked README. Just to make sure, I removed the JSON files /usr/share/vulkan/icd.d/radeon* shipped by Debian's mesa-vulkan-drivers package.

Finally I deviated a bit by not editing the file /usr/share/X11/xorg.conf.d/10-amdgpu.conf, but instead copying it to /etc/X11/xorg.conf.d/10-amdgpu.conf and adding there the section:

Section "Device"
        Identifier "AMDgpu"
        Option  "DRI" "3"
EndSection


To be honest, I did not follow the Copy driver and JSON files literally, since I don’t want to copy self-made files into system directories under /usr/lib. So what I did is:

  • copy the driver files to /opt/amdvkn/lib, so I have now there /opt/amdvlk/lib/i386-linux-gnu/amdvlk32.so and /opt/amdvlk/lib/x86_64-linux-gnu/amdvlk64.so
  • Adjust the location of the driver file in the two JSON files /etc/vulkan/icd.d/amd_icd32.json and /etc/vulkan/icd.d/amd_icd64.json (which were installed above under Copy driver and JSON files)
  • added a file /etc/ld.so.conf.d/amdvlk.conf containing the two lines:
    /opt/amdvlk/lib/i386-linux-gnu
    /opt/amdvlk/lib/x86_64-linux-gnu

With this in place, I don’t pollute the system directories, and still the new Vulkan driver is available.

But honestly, I don’t really know whether it is used and is working, because I don’t know how to check.
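
One hedged way to check: vulkaninfo from the Debian vulkan-tools package (vulkan-utils on older releases) prints the active device and driver:

# Show which Vulkan device/driver is in use.
vulkaninfo | grep -i deviceName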


With all that in place, I can run my usual set of Steam games (The Long Dark, Shadow of the Tomb Raider, The Talos Principle, Supraland, …) and I haven't seen any visual problems so far. As a bonus, KDE/Plasma is now running much better, since NVIDIA and KDE have traditionally had some incompatibilities.

The above might sound like a lot of stuff to do, but considering that most of the parts are not really packaged within Debian, and that all of this is a rather new open source stack, I was surprised that I got it all working smoothly in half a day.

Thanks to all the developers who have worked hard to make this all possible.

,

TEDThe Audacious Project announces new efforts in response to COVID-19

In response to the unprecedented impact of COVID-19, The Audacious Project, a collaborative funding initiative housed at TED, will direct support towards solutions tailored to rapid response and long-term recovery. Audacious has catalyzed more than $30 million towards the first three organizations in its COVID-19 rapid response cohort: Partners In Health will rapidly increase the scale, speed and effectiveness of contact tracing in the US; Project ECHO will equip over 350,000 frontline clinicians and public health workers across Africa, Southeast Asia and Latin America to respond to COVID-19; and World Central Kitchen will demonstrate a new model for food assistance within US cities. Each organization selected is delivering immediate aid to vulnerable populations most affected by the novel coronavirus. 

“Audacious was designed to elevate powerful interventions tackling the world’s most urgent challenges,” said Anna Verghese, Executive Director of The Audacious Project. “In line with that purpose, our philanthropic model was built to flex. In the wake of COVID-19, we’re grateful to be able to funnel rapid support towards Partners in Health, Project ECHO and World Central Kitchen — each spearheading critical work that is actionable now.”

(Photo: Partners in Health/Jon Lasher)

Announcing The Audacious Project’s COVID-19 rapid response cohort 

Partners In Health has been a global leader in disease prevention, treatment and care for more than 30 years. With Audacious support over the next year, Partners In Health will disseminate its contact tracing expertise across the US and work with more than 19 public health departments to not only flatten the curve but bend it downward and help stop the spread of COVID-19. They plan to customize and scale their programs through a combination of direct technical assistance and open source sharing of best practices. This effort will reduce the spread of COVID-19 in cities and states home to an estimated 133 million people.

(Photo: Project Echo)

Project ECHO (Extension for Community Healthcare Outcomes) exists to democratize life-saving medical knowledge — linking experts at centralized institutions with regional, local and community-based workforces. With Audacious investment over the next two years, ECHO will scale this proven virtual learning and telementoring model to equip more than 350,000 frontline clinicians and public health workers to respond to COVID-19. Working across Africa, Southeast Asia and Latin America, the ECHO team will build a global network of health workers who together can permanently improve health systems and save lives in our world’s most vulnerable communities. 

(Photo: World Central Kitchen)

Chef José Andrés’ World Central Kitchen has provided fresh and nutritious meals to those in need following disasters such as earthquakes and hurricanes since 2010. In response to the novel coronavirus pandemic, World Central Kitchen has developed an innovative solution to simultaneously provide fresh meals to those in immediate need and keep small businesses open in the midst of a health and economic crisis. World Central Kitchen will demonstrate this at scale, by expanding to employ 200 local Oakland restaurants (roughly 16 percent of the local restaurant industry) to serve nearly two million meals by the end of July — delivering a powerful proof of concept for a model that could shift food assistance around the world.

The Audacious Coalition

The Audacious Project was formed in partnership with The Bridgespan Group as a springboard for social impact. Using TED’s curatorial expertise to surface ideas, the initiative convenes investors and social entrepreneurs to channel funds towards pressing global issues.

A remarkable group of individuals and organizations have played a key role in facilitating the first edition of this Rapid Response effort. Among them ELMA Philanthropies, Skoll Foundation, Scott Cook and Signe Ostby of the Valhalla Charitable Foundation, Chris Larsen and Lyna Lam, Lyda Hill Philanthropies, The Rick & Nancy Moskovitz Foundation, Stadler Family Charitable Foundation, Inc., Ballmer Group, Mary and Mark Stevens, Crankstart and more.

To learn more about The Audacious Project visit audaciousproject.org/covid-19-response.

Planet Linux AustraliaStewart Smith: A op-build v2.5-rc1 based Raptor Blackbird Build

I have done a few builds of firmware for the Raptor Blackbird since I got mine, each of them based on upstream op-build plus a few patches. The previous one was Yet another near-upstream Raptor Blackbird firmware build that I built a couple of months ago. This new build is based off the release candidate of op-build v2.5. Here’s what’s changed:

Package          Old Version          New Version
hcode            hw030220a.opmst      hw050520a.opmst
hostboot         acdff8a390a2654dd52fed67bdebe2b5
kexec-lite       18ec88310c4134e6b0130b3c1ea489e
libflash         v6.5-228-g82aed17a   v6.6
linux            v5.4.22              v5.4.33
linux-headers    v5.4.22              v5.4.33
machine-xml      17e9e84d504582c88e782e30829e0d6be
occ              3ab29212518e65740ab4dc96fd6cf584c42
openpower-pnor   6fb8d914134d544a84175f00d9c6dc395faf3
sbe              c318ab00116d92f08c78fb7838495ad0aab7
skiboot          v6.5-228-g82aed17a   v6.6
Changes in my latest Blackbird build

Go grab blackbird.pnor from https://www.flamingspork.com/blackbird/stewart-blackbird-6-images/, and give it a go! Just scp it to your BMC, and flash it:

pflash -E -p /tmp/blackbird.pnor

There are two differences from upstream op-build: my pull request to op-build, and the fixing of the (old) buildroot so that it’ll build on Fedora 32. From discussions on the openpower-firmware mailing list, it seems that one hopeful thing is to have all the Blackbird support merged in before the final op-build v2.5 is tagged. The previous op-build release (v2.4) was tagged in July 2019, so we’re about 10 months into what was a 2 month release cycle, which makes speculating on when that final release will arrive somewhat difficult.

Planet DebianMike Gabriel: Q: Remote Support Framework for the GNU/Linux Desktop?

TL;DR; For those (admins) of you who run GNU/Linux on staff computers: How do you organize your graphical remote support in your company? Get in touch, share your expertise and experiences.

Researching on FLOSS based Linux Desktops

When bringing GNU/Linux desktops to a generic folk of productive office users on a large scale, graphical remote support is a key feature when organizing helpdesk support teams' workflows.

In a research project that I am currently involved in, we investigate the different available remote support technologies (VNC screen mirroring, ScreenCasts, etc.) and the available frameworks that allow one to provide a remote support infrastructure 100% on-premise.

In this research project we intend to find FLOSS solutions for everything required for providing a large scale GNU/Linux desktop to end users, but we likely will have to recommend non-free solutions, if a FLOSS approach is not available for certain demands. Depending on the resulting costs, bringing forth a new software solution instead of dumping big money in subscription contracts for non-free software is seen as a possible alternative.

As a member of the X2Go upstream team and maintainer of several remote desktop related tools and frameworks in Debian, I'd consider myself as sort of in-the-topic. The available (as FLOSS) underlying technologies for plumbing a remote support framework are pretty much clear (x11vnc, recent pipewire-related approaches in Wayland compositors, browser-based screencasting). However, I still lack a good spontaneous answer to the question: "How to efficiently software-side organize a helpdesk scenario for 10.000+ users regarding graphical remote support?".

Framework for Remote Desktop in Webbrowsers

In fact, in the context of my X2Go activities, I am currently planning to put together a Django-based framework for running X2Go sessions in a web browser. The framework that we will come up with (two developers have already been hired for an initial sprint in July 2020) will be designed to be highly pluggable and it will probably be easy to add remote support / screen sharing features further on.

And still, I walk around with the question in mind: Do I miss anything? Is there anything already out there that provides a remote support solution as 100% FLOSS, that is enterprise-grade, that scales up well, that has a modern UI design, etc.? Something that I simply haven't come across, yet?

Looking forward to Your Feedback

Please get in touch (OFTC/Freenode IRC, Telegram, Email), if you can fill the gap and feel like sharing your ideas and experiences.

light+love
Mike

Worse Than FailureCodeSOD: A Short Trip on the BobC

More than twenty years ago, “BobC” wrote some code. This code was, at the time, relatively modern C++ code. One specific class controls a display, attached to a “Thingamobob” (technical factory term), and reporting on the state of a number of “Doohickeys”, which grows over time.

The code hasn’t been edited since BobC’s last change, but it had one little, tiny, insignificant problem. It would have seemingly random crashes. They were rare, which was good, but “crashing software attached to factory equipment” isn’t good for anyone.

Eventually, the number of crash reports was enough that the company decided to take a look at it, but no one could replicate the bug. Johana was asked to debug the code, and I’ve presented it as she supplied it for us:

class CDisplayControl
{
private:

    std::vector<IDoohickey*> m_vecIDoohickeys;
    std::map<short, IHelper*> m_vecIHelpers;
    short m_nNumHelpers;

public:

    AddDoohickey(IDoohickey *pIDH, IHelper *pIHlp)
    {
        // Give Helper to doohickey
        pIDH->put_Helper(pIHlp);

        // Add doohickey to collection
        m_vecIDooHickeys.push_back(pIDH);
        pIDH->AddRef();
        int nId = m_vecIDooHickeys.size() - 1;

        // Add Helper to local interface vector.  This is really only done so
        // we have easy/quick access to the Helper.
        m_nNumHelpers++;
        m_vecIHelpers[nId] = pIHlp; // BobC:CHANGED
        pIHlp->AddRef();

        // Skip deadly function on the first Doohickey.
        if (m_nNumHelpers > 1)
        {
            CallThisEveryTimeButTheFirstOrTheWorldWillEnd();
        }
    }
}

I’m on record as being anti-Hungarian notation. Wrong people disagree with me all the time on this, but they’re wrong, why would we listen to them? I’m willing to permit the convention of IWhatever for interfaces, but CDisplayControl is an awkward class name. That’s just aesthetic preference, though, the real problem is the member declarations:

    std::vector<IDoohickey*> m_vecIDoohickeys;
    std::map<short, IHelper*> m_vecIHelpers;

Here, we have a vector- a resizable list- of IDoohickey objects called m_vecIDoohickeys, which is Hungarian notation for a member which is a vector.

We also have a map that maps shorts to IHelper objects, called m_vecIHelpers, which is Hungarian notation for a member which is a vector. But this is a map. So even if Hungarian notation were helpful, this completely defeats the purpose.

Tracing through the AddDoohickey method, the very first step is that we assign a property on the IDoohickey object to point at the IHelper object. Then we put that IDoohickey into the vector, and create an ID by just checking the size of the vector.

We also increment m_nNumHelpers, another wonderfully Hungarian name, since n tells us that this is a number, but we also need to specify Num in the name too.

It’s important to note: the size of the vector and the value in m_nNumHelpers should match. Then, based on the id, we slot the IHelper object into our map. This is done, according to the comment, “so we have easy/quick access to the Helper”.

Keep in mind, we just assigned the IHelper instance to a property of the IDoohickey, so we already have “quick/easy” access. Quicker, arguably, than going through the map: these are Standard Template Library classes, and while the STL is a powerful set of data structures, back then speed wasn’t really one of its attributes.

Also, note that BobC didn’t trust source control, which isn’t unreasonable for that long ago, but he marked only one of the changed lines. And the tag, “CHANGED”, doesn’t really give us much insight into what the change was.

Finally, we use that m_nNumHelpers to see if we’ve run this method at least once, because there’s a step that should only happen when we have more than one IDoohickey and IHelper combination. As Johana’s “corrections” to the code make clear: if we call this at the wrong time, the world will end. We can’t call it the first time through, but we must call it every other time through.

Which, if you carefully check the variable declarations, you’ll catch the root cause of the seemingly random crashes:

short m_nNumHelpers;

In Johana’s world, shorts are 16-bit integers. As these are signed, that means after the counter hits 32,767, it overflows and wraps back around to negative. So m_nNumHelpers > 1 becomes false, and we stop calling that method which we must call or the world will end.
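
To see the failure in isolation, here is a minimal C++ sketch. It is not BobC’s code, just an illustration of the wrap; strictly speaking, converting the promoted result back to short was implementation-defined before C++20, but every common two’s-complement platform behaves this way:

#include <iostream>

int main() {
    short n = 32767;               // SHRT_MAX for a 16-bit short
    n++;                           // increments as int, then converts back
                                   // to short: wraps to -32768 in practice
    std::cout << n << "\n";        // prints -32768
    std::cout << (n > 1) << "\n";  // prints 0: the "must call" guard is now false
}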

Most of the time, the equipment gets power-cycled long before they hit the 32,767 invocations of this method, which is why this was so tricky to debug.

Speaking of “tricky to debug,” there’s one more thing I see lurking in here, which based on what I saw in this method, makes me worry. As we know, BobC isn’t super keen on counting, but we see calls to AddRef() in this code. I don’t know, but I suspect that BobC implemented his own reference counting garbage collector.

Real garbage collection, of course, would be to completely refactor this code.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianRuss Allbery: Review: Gideon the Ninth

Review: Gideon the Ninth, by Tamsyn Muir

Series: The Locked Tomb #1
Publisher: Tor
Copyright: September 2019
ISBN: 1-250-31317-1
Format: Kindle
Pages: 448

Despite being raised there, Gideon Nav is an outsider in the Ninth House. Her mother, already dead, fell from the sky with a one-day-old Gideon in tow, leaving her an indentured servant. She's a grumpy, caustic teenager in a world of moldering corpses, animated skeletons, and mostly-dead adults whose parts are falling off. Her world is sword fighting, dirty magazines, a feud with the house heir Harrowhark, and a determination to escape the terms of her indenture.

Gideon does get off the planet, but not the way that she expects. She doesn't get accepted into the military. She ends up in the middle of a bizarre test, or possibly an ascension rite, mingling with and competing with the nobility of the empire alongside her worst enemy.

I struggled to enjoy the beginning of Gideon the Ninth. Gideon tries to carry the story on pure snark, but it is very, very goth. If you like desiccated crypts, mostly-dead goons, betrayal, frustration, necromancers, black robes, disturbing family relationships, gloom, and bitter despair, the first six chapters certainly deliver, but I was sick of it by the time Gideon gets out. Thankfully, the opening is largely unlike the rest of the book. What starts as an over-the-top teenage goth rebellion turns into a cross between a manor house murder mystery and a competitive escape room. This book is a bit of a mess, but it's a glorious mess.

It's also the sort of glorious mess that I don't think would have been written or published twenty years ago, and I have a pet theory that attributes this to the invigorating influence of fanfic and writers who grew up reading and writing it.

I read a lot of classic science fiction and epic fantasy as a teenager. Those books have many merits, obviously, but emotional range is not one of them. There are a few exceptions, but on average the genre either focused on puzzles and problem solving (how do we fix the starship, how do we use the magic system to take down the dark god) or on the typical "heroic" (and male-coded) emotions of loyalty, bravery, responsibility, authority, and defiance of evil. Characters didn't have messy breakups, frenemies, anxiety, socially-awkward love affairs, impostor syndrome, self-hatred, or depression. And authors weren't allowed to fall in love with the messiness of their characters, at least on the page.

I'm not enough of a scholar to make the argument well, but I suspect there's a case to be made that fanfic exists partially to fill this gap. So much of fanfic starts from taking the characters on the canonical page or screen and letting them feel more, live more, love more, screw up more, and otherwise experience a far wider range of human drama, particularly compared to what made it into television, which was even more censored than what made it into print. Some of those readers and writers are now writing for publication, and others have gone into publishing. The result, in my theory, is that the range of stories that are acceptable in the genre has broadened, and the emotional texture of those stories has deepened.

Whether or not this theory is correct, there are now more novels like this in the world, novels full of grudges, deflective banter, squabbling, messy emotional processing, and moments of glorious emotional catharsis. This makes me very happy. To describe the emotional payoff of this book in any more detail would be a huge spoiler; suffice it to say that I unabashedly love fragile competence and unexpected emotional support, and adore this book for containing it.

Gideon's voice, irreverent banter, stubborn defiance, and impulsive good-heartedness are the center of this book. At the start, it's not clear whether there will be another likable character in the book. There will be, several of them, but it takes a while for Gideon to find them or for them to become likable. You'll need to like Gideon well enough to stick with her for that journey.

I read books primarily for the characters, not for the setting, and Gideon the Ninth struck some specific notes that I will happily read endlessly. If that doesn't match your preferences, I would not be too surprised to hear you bounced off the book. There's a lot here that won't be to everyone's taste. The setting felt very close to Warhammer 40K: an undead emperor that everyone worships, endless war, necromancy, and gothic grimdark. The stage for most of the book is at least more light-filled, complex, and interesting than the Ninth House section at the start, but everything is crumbling, drowning, broken, or decaying. There's quite a lot of body horror, grotesque monsters, and bloody fights. And the ending is not the best part of the book; roughly the last 15% of the novel is composed of two running fight scenes against a few practically unkillable and frankly not very interesting villains. I got exhausted by the fighting long before it was over, and the conclusion is essentially a series cliffhanger.

There are also a few too many characters. The collection of characters and the interplay between the houses is one of the strengths of this book, but Muir sets up her story in a way that requires eighteen significant characters and makes the reader want to keep track of all of them. It took me about halfway through the book before I felt like I had my bearings and wasn't confusing one character for another or forgetting a whole group of characters. That said, most of the characters are great, and the story gains a lot from the interplay of their different approaches and mindsets. Palamedes Sextus's logical geekery, in particular, is a great counterpoint to the approaches of most of the other characters.

The other interesting thing Muir does in this novel that I've not seen before, and that feels very modern, is to set the book in essentially an escape room. Locking a bunch of characters in a sprawling mansion until people start dying is an old fictional trope, but this one has puzzles, rewards, and a progressive physical structure that provides a lot of opportunities to motivate the characters and give them space to take wildly different problem-solving approaches. I liked this a lot, and I'm looking forward to seeing it in future books.

This is not the best book I've read, but I thoroughly enjoyed it, despite some problems with the ending. I've already pre-ordered the sequel.

Followed by Harrow the Ninth.

Rating: 8 out of 10

,

Planet Linux AustraliaJonathan Adamczewski: f32, u32, and const

Some time ago, I wrote “floats, bits, and constant expressions” about converting a floating point number into its representative ones and zeros as a C++ constant expression – constructing the IEEE 754 representation without being able to examine the bits directly.

I’ve been playing around with Rust recently, and rewrote that conversion code as a bit of a learning exercise for myself, with a thoroughly contrived set of constraints: using integer and single-precision floating point math, at compile time, without unsafe blocks, while using as few unstable features as possible.

I’ve included the listing below, for your bemusement and/or head-shaking, and you can play with the code in the Rust Playground and rust.godbolt.org

// Jonathan Adamczewski 2020-05-12
//
// Constructing the bit-representation of an IEEE 754 single precision floating 
// point number, using integer and single-precision floating point math, at 
// compile time, in rust, without unsafe blocks, while using as few unstable 
// features as I can.
//
// or "What if this silly C++ thing http://brnz.org/hbr/?p=1518 but in Rust?"


// Q. Why? What is this good for?
// A. To the best of my knowledge, this code serves no useful purpose. 
//    But I did learn a thing or two while writing it :)


// This is needed to be able to perform floating point operations in a const 
// function:
#![feature(const_fn)]


// bits_transmute(): Returns the bits representing a floating point value, by
//                   way of std::mem::transmute()
//
// For completeness (and validation), and to make it clear the fundamentally 
// unnecessary nature of the exercise :D - here's a short, straightforward, 
// library-based version. But it needs the const_transmute flag and an unsafe 
// block.
#![feature(const_transmute)]
const fn bits_transmute(f: f32) -> u32 {
  unsafe { std::mem::transmute::<f32, u32>(f) }
}



// get_if_u32(predicate:bool, if_true: u32, if_false: u32):
//   Returns if_true if predicate is true, else if_false
//
// If and match are not able to be used in const functions (at least, not 
// without #![feature(const_if_match)] - so here's a branch-free select function
// for u32s
const fn get_if_u32(predicate: bool, if_true: u32, if_false: u32) -> u32 {
  let pred_mask = (-1 * (predicate as i32)) as u32;
  let true_val = if_true & pred_mask;
  let false_val = if_false & !pred_mask;
  true_val | false_val
}

// get_if_f32(predicate, if_true, if_false):
//   Returns if_true if predicate is true, else if_false
//
// A branch-free select function for f32s.
// 
// If either if_true or if_false is NaN or an infinity, the result will be NaN,
// which is not ideal. I don't know of a better way to implement this function
// within the arbitrary limitations of this silly little side quest.
const fn get_if_f32(predicate: bool, if_true: f32, if_false: f32) -> f32 {
  // can't convert bool to f32 - but can convert bool to i32 to f32
  let pred_sel = (predicate as i32) as f32;
  let pred_not_sel = ((!predicate) as i32) as f32;
  let true_val = if_true * pred_sel;
  let false_val = if_false * pred_not_sel;
  true_val + false_val
}


// bits(): Returns the bits representing a floating point value.
const fn bits(f: f32) -> u32 {
  // the result value, initialized to a NaN value that will otherwise not be
  // produced by this function.
  let mut r = 0xffff_ffff;

  // These floating point operations (and others) cause the following error:
  //     only int, `bool` and `char` operations are stable in const fn
  // hence #![feature(const_fn)] at the top of the file
  
  // Identify special cases
  let is_zero    = f == 0_f32;
  let is_inf     = f == f32::INFINITY;
  let is_neg_inf = f == f32::NEG_INFINITY;
  let is_nan     = f != f;

  // Writing this as !(is_zero || is_inf || ...) causes the following error:
  //     Loops and conditional expressions are not stable in const fn
  // so instead write this as type conversions, and bitwise operations
  //
  // "normalish" here means that f is a normal or subnormal value
  let is_normalish = 0 == ((is_zero as u32) | (is_inf as u32) | 
                        (is_neg_inf as u32) | (is_nan as u32));

  // set the result value for each of the special cases
  r = get_if_u32(is_zero,    0,           r); // if (is_zero)    { r = 0; }
  r = get_if_u32(is_inf,     0x7f80_0000, r); // if (is_inf)     { r = 0x7f80_0000; }
  r = get_if_u32(is_neg_inf, 0xff80_0000, r); // if (is_neg_inf) { r = 0xff80_0000; }
  r = get_if_u32(is_nan,     0x7fc0_0000, r); // if (is_nan)     { r = 0x7fc0_0000; }
 
  // It was tempting at this point to try setting f to a "normalish" placeholder 
  // value so that special cases do not have to be handled in the code that 
  // follows, like so:
  // f = get_if_f32(is_normalish, f, 1_f32);
  //
  // Unfortunately, get_if_f32() returns NaN if either input is NaN or infinite.
  // Instead of switching the value, we work around the non-normalish cases 
  // later.
  //
  // (This whole function is branch-free, so all of it is executed regardless of 
  // the input value)

  // extract the sign bit
  let sign_bit  = get_if_u32(f < 0_f32,  1, 0);

  // compute the absolute value of f
  let mut abs_f = get_if_f32(f < 0_f32, -f, f);

  
  // This part is a little complicated. The algorithm is functionally the same 
  // as the C++ version linked from the top of the file.
  // 
  // Because of the various contrived constraints on this problem, we compute 
  // the exponent and significand, rather than extract the bits directly.
  //
  // The idea is this:
  // Every finite single precision float point number can be represented as a
  // series of (at most) 24 significant digits as a 128.149 fixed point number 
  // (128: 126 exponent values >= 0, plus one for the implicit leading 1, plus 
  // one more so that the decimal point falls on a power-of-two boundary :)
  // 149: 126 negative exponent values, plus 23 for the bits of precision in the 
  // significand.)
  //
  // If we are able to scale the number such that all of the precision bits fall 
  // in the upper-most 64 bits of that fixed-point representation (while 
  // tracking our effective manipulation of the exponent), we can then 
  // predictably and simply scale that computed value back to a range that can 
  // be converted safely to a u64, count the leading zeros to determine the 
  // exact exponent, and then shift the result into position for the final u32 
  // representation.
  
  // Start with the largest possible exponent - subsequent steps will reduce 
  // this number as appropriate
  let mut exponent: u32 = 254;
  {
    // Hex float literals are really nice. I miss them.

    // The threshold is 2^87 (think: 64+23 bits) to ensure that the number will 
    // be large enough that, when scaled down by 2^64, all the precision will 
    // fit nicely in a u64
    const THRESHOLD: f32 = 154742504910672534362390528_f32; // 0x1p87f == 2^87

    // The scaling factor is 2^41 (think: 64-23 bits) to ensure that a number 
    // between 2^87 and 2^64 will not overflow in a single scaling step.
    const SCALE_UP: f32 = 2199023255552_f32; // 0x1p41f == 2^41

    // Because loops are not available (no #![feature(const_loops)], and 'if' is
    // not available (no #![feature(const_if_match)]), perform repeated branch-
    // free conditional multiplication of abs_f.

    // use a macro, because why not :D It's the most compact, simplest option I 
    // could find.
    macro_rules! maybe_scale {
      () => {{
        // care is needed: if abs_f is above the threshold, multiplying by 2^41 
        // will cause it to overflow (INFINITY) which will cause get_if_f32() to
        // return NaN, which will destroy the value in abs_f. So compute a safe 
        // scaling factor for each iteration.
        //
        // Roughly equivalent to :
        // if (abs_f < THRESHOLD) {
        //   exponent -= 41;
        //   abs_f *= SCALE_UP;
        // }
        let scale = get_if_f32(abs_f < THRESHOLD, SCALE_UP,      1_f32);    
        exponent  = get_if_u32(abs_f < THRESHOLD, exponent - 41, exponent); 
        abs_f     = get_if_f32(abs_f < THRESHOLD, abs_f * scale, abs_f);
      }}
    }
    // 41 bits per iteration means up to 246 bits shifted.
    // Even the smallest subnormal value will end up in the desired range.
    maybe_scale!();  maybe_scale!();  maybe_scale!();
    maybe_scale!();  maybe_scale!();  maybe_scale!();
  }

  // Now that we know that abs_f is in the desired range (2^87 <= abs_f < 2^128)
  // scale it down to be in the range (2^23 <= _ < 2^64), and convert without 
  // loss of precision to u64.
  const INV_2_64: f32 = 5.42101086242752217003726400434970855712890625e-20_f32; // 0x1p-64f == 2^-64
  let a = (abs_f * INV_2_64) as u64;

  // Count the leading zeros.
  // (C++ doesn't provide a compile-time constant function for this. It's nice 
  // that rust does :)
  let mut lz = a.leading_zeros();

  // if the number isn't normalish, lz is meaningless: we stomp it with 
  // something that will not cause problems in the computation that follows - 
  // the result of which is meaningless, and will be ignored in the end for 
  // non-normalish values.
  lz = get_if_u32(!is_normalish, 0, lz); // if (!is_normalish) { lz = 0; }

  {
    // This step accounts for subnormal numbers, where there are more leading 
    // zeros than can be accounted for in a valid exponent value, and leading 
    // zeros that must remain in the final significand.
    //
    // If lz < exponent, reduce exponent to its final correct value - lz will be
    // used to remove all of the leading zeros.
    //
    // Otherwise, clamp exponent to zero, and adjust lz to ensure that the 
    // correct number of bits will remain (after multiplying by 2^41 six times - 
    // 2^246 - there are 7 leading zeros ahead of the original subnormal's
    // computed significand of 0.sss...)
    // 
    // The following is roughly equivalent to:
    // if (lz < exponent) {
    //   exponent = exponent - lz;
    // } else {
    //   exponent = 0;
    //   lz = 7;
    // }

    // we're about to mess with lz and exponent - compute and store the relative 
    // value of the two
    let lz_is_less_than_exponent = lz < exponent;

    lz       = get_if_u32(!lz_is_less_than_exponent, 7,             lz);
    exponent = get_if_u32( lz_is_less_than_exponent, exponent - lz, 0);
  }

  // compute the final significand.
  // + 1 shifts away a leading 1-bit for normal, and 0-bit for subnormal values
  // Shifts are done in u64 (that leading bit is shifted into the void), then
  // the resulting bits are shifted back to their final resting place.
  let significand = ((a << (lz + 1)) >> (64 - 23)) as u32;

  // combine the bits
  let computed_bits = (sign_bit << 31) | (exponent << 23) | significand;

  // return the normalish result, or the non-normalish result, as appropriate
  get_if_u32(is_normalish, computed_bits, r)
}


// Compile-time validation - able to be examined in rust.godbolt.org output
pub static BITS_BIGNUM: u32 = bits(std::f32::MAX);
pub static TBITS_BIGNUM: u32 = bits_transmute(std::f32::MAX);
pub static BITS_LOWER_THAN_MIN: u32 = bits(7.0064923217e-46_f32);
pub static TBITS_LOWER_THAN_MIN: u32 = bits_transmute(7.0064923217e-46_f32);
pub static BITS_ZERO: u32 = bits(0.0f32);
pub static TBITS_ZERO: u32 = bits_transmute(0.0f32);
pub static BITS_ONE: u32 = bits(1.0f32);
pub static TBITS_ONE: u32 = bits_transmute(1.0f32);
pub static BITS_NEG_ONE: u32 = bits(-1.0f32);
pub static TBITS_NEG_ONE: u32 = bits_transmute(-1.0f32);
pub static BITS_INF: u32 = bits(std::f32::INFINITY);
pub static TBITS_INF: u32 = bits_transmute(std::f32::INFINITY);
pub static BITS_NEG_INF: u32 = bits(std::f32::NEG_INFINITY);
pub static TBITS_NEG_INF: u32 = bits_transmute(std::f32::NEG_INFINITY);
pub static BITS_NAN: u32 = bits(std::f32::NAN);
pub static TBITS_NAN: u32 = bits_transmute(std::f32::NAN);
pub static BITS_COMPUTED_NAN: u32 = bits(std::f32::INFINITY/std::f32::INFINITY);
pub static TBITS_COMPUTED_NAN: u32 = bits_transmute(std::f32::INFINITY/std::f32::INFINITY);


// Run-time validation of many more values
fn main() {
  let end: usize = 0xffff_ffff;
  let count = 9_876_543; // number of values to test
  let step = end / count;
  for u in (0..=end).step_by(step) {
      let v = u as u32;
      
      // reference
      let f = unsafe { std::mem::transmute::<u32, f32>(v) };
      
      // compute
      let c = bits(f);

      // validation
      if c != v && 
         !(f.is_nan() && c == 0x7fc0_0000) && // nans
         !(v == 0x8000_0000 && c == 0) { // negative 0
          println!("{:x?} {:x?}", v, c); 
      }
  }
}

Krebs on SecurityMicrosoft Patch Tuesday, May 2020 Edition

Microsoft today issued software updates to plug at least 111 security holes in Windows and Windows-based programs. None of the vulnerabilities were labeled as being publicly exploited or detailed prior to today, but as always if you’re running Windows on any of your machines it’s time once again to prepare to get your patches on.

May marks the third month in a row that Microsoft has pushed out fixes for more than 110 security flaws in its operating system and related software. At least 16 of the bugs are labeled “Critical,” meaning ne’er-do-wells can exploit them to install malware or seize remote control over vulnerable systems with little or no help from users.

But focusing solely on Microsoft’s severity ratings may obscure the seriousness of the flaws being addressed this month. Todd Schell, senior product manager at security vendor Ivanti, notes that if one looks at the “exploitability assessment” tied to each patch — i.e., how likely Microsoft considers it that each flaw can and will be exploited for nefarious purposes — it makes sense to pay just as much attention to the vulnerabilities Microsoft has labeled with the lesser severity rating of “Important.”

Virtually all of the non-critical flaws in this month’s batch earned Microsoft’s “Important” rating.

“What is interesting and often overlooked is seven of the ten [fixes] at higher risk of exploit are only rated as Important,” Schell said. “It is not uncommon to look to the critical vulnerabilities as the most concerning, but many of the vulnerabilities that end up being exploited are rated as Important vs Critical.”

For example, Satnam Narang from Tenable notes that two remote code execution flaws in Microsoft Color Management (CVE-2020-1117) and Windows Media Foundation (CVE-2020-1126) could be exploited by tricking a user into opening a malicious email attachment or visiting a website that contains code designed to exploit the vulnerabilities. However, Microsoft rates these vulnerabilities as “Exploitation Less Likely,” according to their Exploitability Index.

In contrast, three elevation of privilege vulnerabilities that received a rating of “Exploitation More Likely” were also patched, Narang notes. These include a pair of “Important” flaws in Win32k (CVE-2020-1054, CVE-2020-1143) and one in the Windows Graphics Component (CVE-2020-1135). Elevation of Privilege vulnerabilities are used by attackers once they’ve managed to gain access to a system in order to execute code on their target systems with elevated privileges. There are at least 56 of these types of fixes in the May release.

Schell says if your organization’s plan for prioritizing the deployment of this month’s patches stops at vendor severity or even CVSS scores above a certain level you may want to reassess your metrics.

“Look to other risk metrics like Publicly Disclosed, Exploited (obviously), and Exploitability Assessment (Microsoft specific) to expand your prioritization process,” he advised.

As it usually does each month on Patch Tuesday, Adobe also has issued updates for some of its products. An update for Adobe Acrobat and Reader covers two dozen critical and important vulnerabilities. There are no security fixes for Adobe’s Flash Player in this month’s release.

Just a friendly reminder that while many of the vulnerabilities fixed in today’s Microsoft patch batch affect Windows 7 operating systems, this OS is no longer being supported with security updates (unless you’re an enterprise taking advantage of Microsoft’s paid extended security updates program, which is available to Windows 7 Professional and Windows 7 enterprise users).

If you rely on Windows 7 for day-to-day use, it’s time to think about upgrading to something newer. That something might be a PC with Windows 10. Or maybe you have always wanted that shiny MacOS computer.

If cost is a primary motivator and the user you have in mind doesn’t do much with the system other than browsing the Web, perhaps a Chromebook or an older machine with a recent version of Linux is the answer (Ubuntu may be easiest for non-Linux natives). Whichever system you choose, it’s important to pick one that fits the owner’s needs and provides security updates on an ongoing basis.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re not losing your mind when the odd buggy patch causes problems booting the system.

So backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Also, keep an eye on the AskWoody blog from Woody Leonhard, who keeps a reliable lookout for buggy Microsoft updates each month.

Further reading:

SANS Internet Storm Center breakdown by vulnerability and severity

Microsoft’s Security Update catalog

BleepingComputer on May 2020 Patch Tuesday

Planet DebianKunal Mehta: MediaWiki packages for Ubuntu 20.04 Focal available

Packages for the MediaWiki 1.31 LTS release are now available for the new Ubuntu 20.04 LTS "Focal Fossa" release in my PPA. Please let me know if you run into any errors or issues.

In the future these packages will be upgraded to the MediaWiki 1.35 LTS release, whenever that's ready. It's currently delayed because of the pandemic, but I expect that it'll be ready for the next Debian release.

Planet DebianEvgeni Golov: Building a Shelly 2.5 USB to TTL adapter cable

When you want to flash your Shelly 2.5 with anything but the original firmware for the first time, you'll need to attach it to your computer. Later flashes can happen over the air (at least with ESPHome or Tasmota), but the first one cannot.

In theory, this is not a problem as the Shelly has a quite exposed and well documented interface:

Shelly 2.5 pinout

However, on closer inspection you'll notice that your normal jumper wires don't fit as the Shelly has a connector with 1.27mm (0.05in) pitch and 1mm diameter holes.

Now, there are various tutorials on the Internet how to build a compatible connector using Ethernet cables and hot glue or with female header socket legs, and you can even buy cables on Amazon for 18€! But 18€ sounded like a lot and the female header socket thing while working was pretty finicky to use, so I decided to build something different.

We'll need 6 female-to-female jumper wires and a 1.27mm pitch male header. Jumper wires I had at home, the header I got is a SL 1X20G 1,27 from reichelt.de for 0.61€. It's a 20 pin one, so we can make 3 adapters out of it if needed. Oh and we'll need some isolation tape.

SL 1X20G 1,27

The first step is to cut the header into 6 pin chunks. Make sure not to cut too close to the 6th pin as the whole thing is rather fragile and you might lose it.

SL 1X20G 1,27 cut into pieces

It now fits very well into the Shelly with the longer side of the pins.

Shelly 2.5 with pin headers attached

Second step is to strip the plastic part of one side of the jumper wires. Those are designed to fit 2.54mm pitch headers and won't work for our use case otherwise.

jumper wire with removed plastic

As the connectors are still too big, even after removing the plastic, the next step is to take some pliers and gently press the connectors until they fit the smaller pins of our header.

Shelly 2.5 with pin headers and a jumper wire attached

Now is the time to put everything together. To avoid short circuiting the pins/connectors, apply some isolation tape while assembling, but not too much as the space is really limited.

Shelly 2.5 with pin headers and a jumper wire attached and taped

And we're done, a wonderful (lol) and working (yay) Shelly 2.5 cable that can be attached to any USB-TTL adapter, like the pictured FTDI clone you get almost everywhere.

Shelly 2.5 with full cable and FTDI attached

Yes, in an ideal world we would have soldered the header to the cable, but I didn't feel like soldering on that limited space. And yes, shrink-wrap might be a good thing too, but again, limited space and with isolation tape you only need one layer between two pins, not two.

Planet DebianDaniel Silverstone: The Lars, Mark, and Daniel Club

Last night, Lars, Mark, and I discussed Jeremy Kun's The communicative value of using Git well post. While a lot of our discussion was spawned by the article, we did go off-piste a little, and I hope that my notes below will enlighten you all as to a bit of how we see revision control these days. It was remarkably pleasant to read an article where the comments section wasn't a cesspool of horror, so if this posting encourages you to go and read the article, don't stop when you reach the bottom -- the comments are good and useful too.


This was a fairly non-contentious article for us though each of us had points we wished to bring up and chat about it turned into a very convivial chat. We saw the main thrust of the article as being about using the metadata of revision control to communicate intent, process, and decision making. We agreed that it must be possible to do so effectively with Mercurial (thus deciding that the mention of it was simply a bit of clickbait / red herring) and indeed Mark figured that he was doing at least some of this kind of thing way back with CVS.

We all discussed how knowing the fundamentals of Git's data model improved our ability to work with the tool. Lars and I mentioned how jarring it had originally been to come to Git from revision control systems such as Bazaar (bzr) but how over time we came to appreciate Git for what it is. For Mark this was less painful because he came to Git early enough that there was little more than the fundamental data model, without much of the porcelain which now exists.

One point which we all, though Mark in particular, felt was worth considering was that of distinguishing between published and unpublished changes. The article touches on it a little, but one of the benefits of the workflow which Jeremy espouses is that of using the revision control system as an integral part of the review pipeline. This is perhaps done very well with Git based workflows, but can be done with other VCSs.

With respect to the points Jeremy makes regarding making commits which are good for reviewing, we had a general agreement that things were good and sensible, to a point, but that some things were missed out on. For example, I raised that commit messages often need to be more thorough than one-liners, but Jeremy's examples (perhaps through expedience for the article?) were all pretty trite one-liners which perhaps would have been entirely obvious from the commit content. Jeremy makes the point that large changes are hard to review, and Lars pointed out that Greg Wilson did research in this area, and at least one article mentions 200 lines as a maximum size of a reviewable segment.

I had a brief warble at this point about how reviewing needs to be able to consider the whole of the intended change (i.e. a diff from base to tip) not just individual commits, which is also missing from Jeremy's article, but that such a review does not need to necessarily be thorough and detailed since the commit-by-commit review remains necessary. I use that high level diff as a way to get a feel for the full shape of the intended change, a look at the end-game if you will, before I read the story of how someone got to it. As an aside at this point, I talked about how Jeremy included a 'style fixes' commit in his example, but I loathe seeing such things and would much rather it was either not in the series because it's unrelated to it; or else the style fixes were folded into the commits they were related to.

We discussed how style rules, as well as commit-bisectability, and other rules which may exist for a codebase, the adherence to which would form part of the checks that a code reviewer may perform, are there to be held to when they help the project, and to be broken when they are in the way of good communication between humans.

In this, Lars talked about how revision control histories provide high level valuable communication between developers. Communication between humans is fraught with error and the rules are not always clear on what will work and what won't, since this depends on the communicators, the context, etc. However whatever communication rules are in place should be followed. We often say that it takes two people to communicate information, but when you're writing commit messages or arranging your commit history, the second party is often a nebulous "other", and so the code reviewer fulfils that role to concretise it for the purpose of communication.

At this point, I wondered a little about what value there might be (if any) in maintaining the metachanges (pull request info, mailing list discussions, etc) for historical context of decision making. Mark suggested that this is useful for design decisions etc but not for the style/correctness discussions which often form a large section of review feedback. Maybe some of the metachange tracking is done automatically by the review causing the augmentation of the changes (e.g. by comments, or inclusion of design documentation changes) to explain why changes are made.

We discussed how the "rebase always vs. rebase never" feeling flip-flopped in us for many years until, as an example, what finally won Lars over was that he wants the history of the project to tell the story, in the git commits, of how the software has changed and evolved in an intentional manner. Lars said that he doesn't care about the meanderings, but rather about a clear story which can be followed and understood.

I described this as the switch from "the revision history is about what I did to achieve the goal" to being more "the revision history is how I would hope someone else would have done this". Mark further refined that to "The revision history of a project tells the story of how the project, as a whole, chose to perform its sequence of evolution."

We discussed how project history must necessarily then contain issue tracking, mailing list discussions, wikis, etc. There exist free software projects where part of their history is forever lost because, for example, the project moved from Sourceforge to Github, but made no effort (or was unable) to migrate issues or somesuch. Linkages between changes and the issues they relate to can easily be broken, though at least with mailing lists you can often rewrite URLs if you have something consistent like a Message-Id.

We talked about how cover notes, pull request messages, etc. can thus also be lost to some extent. Is this an argument to always use merges whose message bodies contain those details, rather than always fast-forwarding? Or is it a reason to encapsulate all those discussions into git objects which can be forever associated with the changes in the DAG?

We then diverted into discussion of CI, testing every commit, and the benefits and limitations of automated testing vs. manual testing; though I think that's a little too off-piste for even this summary. We also talked about how commit message audiences include software perhaps, with the recent movement toward conventional commits and how, with respect to commit messages for machine readability, it can become very complex/tricky to craft good commit messages once there are multiple disparate audiences. For projects the size of the Linux kernel this kind of thing would be nearly impossible, but for smaller projects, perhaps there's value.

Finally, we all agreed that we liked the quote at the end of the article, and so I'd like to close out by repeating it for you all...

Hal Abelson famously said:

Programs must be written for people to read, and only incidentally for machines to execute.

Jeremy agrees, as do we, and extends that to the metacommit information as well.

Worse Than FailureRepresentative Line: Don't Negate Me

There are certain problem domains where we care more about the results and the output than the code itself. Gaming is the perfect example: game developers write "bad" code because clarity, readability, maintainability are often subordinate to schedules and the needs of a fun game. The same is true for scientific research: that incomprehensible blob of Fortran was somebody's PhD thesis, and it proved fundamental facts about the universe, so maybe don't judge it on how well written it is.

Sometimes, finance falls into a similar place. Often, the software being developed has to implement obtuse business rules that accreted over decades of operation; sometimes it’s trying to be a predictive model; sometimes a pointy-haired-boss got upset about how a dashboard looked and asked for the numbers to get fudged.

But that doesn't mean that we can't find new ways to write bad code in any of these domains. René works in finance, and found this unique JavaScript solution to converting a number to a negative value:

/**
 * Reverses a value a number to its negative
 * @param {int} value - The value to be reversed
 * @return {number} The reversed value
 */
negateNumber(value) {
    return value - (value * 2);
}

JavaScript numbers aren’t integers, they’re double-precision floats. Doubling a binary float just bumps its exponent, so value - (value * 2) actually produces an exact result, right up until the doubling overflows to Infinity. That would require you to be tracking numbers larger than half of Number.MAX_VALUE, around 9×10^307, which we can safely assume isn’t happening in a financial system, unless inflation suddenly gets cosmically out of hand.

René has since replaced this with a more "traditional" approach to negation.
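
Since JavaScript numbers are IEEE 754 doubles, the hazard is easy to demonstrate with C++ doubles. Here is a minimal sketch, offered as an illustration only and not as René’s actual patch:

#include <cfloat>
#include <cstdio>

int main() {
    double huge = DBL_MAX;                  // ~1.8e308, like Number.MAX_VALUE
    double trick = huge - (huge * 2);       // huge * 2 overflows to +inf,
                                            // so the subtraction yields -inf
    double plain = -huge;                   // unary minus is always exact
    std::printf("%g\n%g\n", trick, plain);  // -inf, then -1.79769e+308
}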

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianPetter Reinholdtsen: Debian Edu interview: Yvan Masson

It has been way too long since my last interview, but as the Debian Edu / Skolelinux community is still active, and new people keep showing up on the IRC channel #debian-edu and the debian-edu mailing list, I decided to give it another go. I was hoping someone else might pick up the idea and run with it, but this has not happened as far as I can tell, so here we are… This time the announcement of a new free software tool to create a school year book triggered my interest, and I decided to learn more about its author.

Who are you, and how do you spend your days?

My name is Yvan MASSON, I live in France. I have my own one-person business in computer services. The work consists of visiting my customers (a person's home, a local authority, a small business) to give advice, install computers and software, fix issues, and provide computing usage training. I spend the rest of my time enjoying my family and promoting free software.

What is your approach for promoting free software?

When I think that free software could be suitable for someone, I explain what it is, with simple words, give a few known examples, and explain that while there is no fee it is a viable alternative in many situations. Most people are receptive when you explain how it is better (I simplify arguments here, I know that it is not so simple): Linux works on older hardware, there are no viruses, and the software can be audited to ensure user is not spied upon. I think the most important is to keep a clear but moderated speech: when you try to convince too much, people feel attacked and stop listening.

How did you get in contact with the Skolelinux / Debian Edu project?

I can not remember how I first heard of Skolelinux / Debian Edu, but probably on planet.debian.org. As I have been working for a school, I have interest in this type of project.

The school I am involved in is a school for "children" between 14 and 18 years old. The French government has recommended free software since 2012, but they do not always use free software themselves. The school computers are still using the Windows operating system, but all of them have the classic set of free software: Firefox ESR, LibreOffice (with the excellent extension Grammalecte that indicates French grammatical errors), SumatraPDF, Audacity, 7zip, KeePass2, VLC, GIMP, Inkscape…

What do you see as the advantages of Skolelinux / Debian Edu?

It is free software! Built on Debian, I am sure that users are not spied upon, and that it can run on low end hardware. This last point is very important, because we really need to improve "green IT". I do not know enough about Skolelinux / Debian Edu to tell how it is better than another free software solution, but what I like is the "all in one" solution: everything has been thought of and prepared to ease installation and usage.

I like Free Software because I hate using something that I can not understand. I do not say that I can understand everything nor that I want to understand everything, but knowing that someone / some company intentionally prevents me from understanding how things work is really unacceptable to me.

Secondly, and more importantly, free software is a requirement to prevent abuses regarding human rights and environmental care. Humanity can not rely on tools that are in the hands of small group of people.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Again, I don't know this project enough. Maybe a dedicated website? Debian wiki works well for documentation, but is not very appealing to someone discovering the project. Also, as Skolelinux / Debian Edu uses OpenLDAP, it probably means that Windows workstations cannot use centralized authentication. Maybe the project could use Samba as an Active Directory domain controller instead, allowing Windows desktop usage when necessary.

(Editors note: In fact Windows workstations can use the centralized authentication in a Debian Edu setup, at least for some versions of Windows, but the fact that this is not well known can be seen as an indication of the need for better documentation and marketing. :)

Which free software do you use daily?

Nothing original: Debian testing/sid with Gnome desktop, Firefox, Thunderbird, LibreOffice…

Which strategy do you believe is the right one to use to get schools to use free software?

Every effort to spread free software into schools is important, whatever it is. But I think, at least where I live, that IT professionals maintaining schools networks are still very "Microsoft centric". Schools will use any working solution, but they need people to install and maintain it. How to make these professionals sensitive about free software and train them with solutions like Debian Edu / Skolelinux is a really good question :-)

Planet DebianJacob Adams: Roman Finger Counting

I recently wrote a final paper on the history of written numerals. In the process, I discovered this fascinating tidbit that didn’t really fit in my paper, but I wanted to put it somewhere. So I’m writing about it here.

If I were to ask you to count as high as you could on your fingers you’d probably get up to 10 before running out of fingers. You can’t count any higher than the number of fingers you have, right? The Romans could! They used a place-value system, combined with various gestures to count all the way up to 9,999 on two hands.

The System

Finger Counting (Note that in this diagram 60 is, in fact, wrong, and this picture swaps the hundreds and the thousands.)

We’ll start with the units. The last three fingers of the left hand, middle, ring, and pinkie, are used to form them.

Zero is formed with an open hand, the opposite of the finger counting we’re used to.

One is formed by bending the middle joint of the pinkie, two by including the ring finger and three by including the middle finger, all at the middle joint. You’ll want to keep all these bends fairly loose, as otherwise these numbers can get quite uncomfortable. For four, you extend your pinkie again, for five, also raise your ring finger, and for six, you raise your middle finger as well, but then lower your ring finger.

For seven you bend your pinkie at the bottom joint, for eight adding your ring finger, and for nine, including your middle finger. This mirrors what you did for one, two and three, but bending the finger at the bottom joint now instead.

This leaves your thumb and index finger for the tens. For ten, touch the nail of your index finger to the inside of your top thumb joint. For twenty, put your thumb between your index and middle fingers. For thirty, touch the nails of your thumb and index fingers. For forty, bend your index finger slightly towards your palm and place your thumb between the middle and top knuckle of your index finger. For fifty, place your thumb against your palm. For sixty, leave your thumb where it is and wrap your index finger around it (the diagram above is wrong). For seventy, move your thumb so that the nail touches between the middle and top knuckle of your index finger. For eighty, flip your thumb so that the bottom of it now touches the spot between the middle and top knuckle of your index finger. For ninety, touch the nail of your index finger to your bottom thumb joint.

The hundreds and thousands use the same positions on the right hand, with the units being the thousands and the tens being the hundreds. One account, from which the picture above comes, swaps these two, but the first account we have uses this ordering.

Combining all these symbols, you can count all the way to 9,999 yourself on just two hands. For example, to show 2,609 you would form a two on the last three fingers of your right hand (two thousands), the sixty gesture with your right thumb and index finger (six hundreds), leave your left thumb and index finger open (no tens), and form a nine on the last three fingers of your left hand. Try it!

History

The Venerable Bede

The first written record of this system comes from the Venerable Bede, an English Benedictine monk who died in 735.

He wrote De computo vel loquela digitorum, “On Calculating and Speaking with the Fingers,” as the introduction to a larger work on chronology, De temporum ratione. (The primary calculation done by monks at the time was calculating the date of Easter, the first Sunday after the first full moon of spring).

He also includes numbers from 10,000 to 1,000,000, but it's unknown whether these were inventions of the author; they were likely rarely used regardless. They require moving your hands to various positions on your body, as illustrated below, from Jacob Leupold's Theatrum Arithmetico-Geometricum, published in 1727:

Finger Counting with Large Numbers

The Romans

If Bede was the first to write it, how do we know that it came from Roman times? It’s referenced in many Roman writings, including this bit from the Roman satirist Juvenal who died in 130:

Felix nimirum qui tot per saecula mortem distulit atque suos iam dextera computat annos.

Happy is he who so many times over the years has cheated death And now reckons his age on the right hand.

Because of course the right hand is where one counts hundreds!

There’s also this Roman riddle:

Nunc mihi iam credas fieri quod posse negatur: octo tenes manibus, si me monstrante magistro sublatis septem reliqui tibi sex remanebunt.

Now you shall believe what you would deny could be done: In your hands you hold eight, as my teacher once taught; Take away seven, and six still remain.

If you form eight with this system and then remove the symbol for seven, you get the symbol for six!

Sources

My source for this blog post is Paul Broneer’s 1969 English translation of Karl Menninger’s Zahlwort und Ziffer (Number Words and Number Symbols).

,

Planet DebianTim Retout: Blog Posts

Rondam RamblingsWilliam Barr's debasement of the Justice Department

The Independent has an excellent and detailed deconstruction of the idea that William Barr was justified in dropping the charges against Michael Flynn: Lying to the FBI is a crime. There is a materiality requirement; if you tell the FBI that you had cornflakes for breakfast when you had raisin

Planet DebianMarkus Koschany: My Free Software Activities in April 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in May) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Playonlinux
  • Scott Talbert did a fantastic job by porting playonlinux, a user-friendly frontend for Wine, to Python 3 (#937302). I tested his patch and uploaded the package today. More testing and feedback is welcome. Scott’s work basically prevented the removal of one of the most popular packages in the games section. I believe this will also give interested people more time to package the Java successor of playonlinux called Phoenicis.
  • Reiner Herrmann ported ardentryst, an action role playing game, to Python 3 to fix a release critical Py2 removal bug (#936148). He also packaged the latest release of xaos, a real-time interactive fractal zoomer, and improved various packaging details. I reviewed both of them and sponsored the upload for him.
  • I packaged new upstream releases of minetest, lordsawar, gtkatlantic and cutemaze.
  • I also sponsored a new simutrans update for Jörg Frings-Fürst.

Debian Java

Misc

  • I packaged new versions of wabt and binaryen, required to build WebAssembly code from source.

Debian LTS

This was my 50th month as a paid contributor and I have been paid to work 11.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I completed the security update of Tomcat 8 in Stretch released as DSA-4673-1 and Tomcat 8 in Jessie soon to be released as DLA-2209-1.
  • I am currently assigned more hours, and my plan is to invest the time in a project to improve our knowledge about embedded code copies and their current security impact, which I want to discuss with the security team. The rest will be spent on security updates for Stretch, which will become the new LTS release soon.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my 23rd month and I have been paid to work 2 hours on ELTS.

  • I prepared the fix for CVE-2019-18218 in php5 released as ELA-227-1.
  • I checked jetty for unfixed vulnerabilities and discovered that the version in Wheezy was not affected by CVE-2019-17632. No further action was required.
  • It turned out that the apache2 package in Wheezy was not affected by vulnerable embedded expat code because it depends on already fixed system packages.

Thanks for reading and see you next time.

Cory DoctorowRules for Writers

For this week’s podcast, I take a break from my reading of my 2009 novel, Someone Comes to Town, Someone Leaves Town, to read aloud my latest Locus column, Rules for Writers. The column sums up a long-overdue revelation I had teaching on the Writing Excuses cruise last fall: that the “rules” we advise writers to follow are actually just “places where it’s easy to go wrong.”

There’s an important distinction between this and the tired injunction, “You have to know the rules to break the rules.” It’s more like, “If you want to figure out how to make this better, start by checking on whether you messed up when doing the difficult stuff.”

MP3

Krebs on SecurityRansomware Hit ATM Giant Diebold Nixdorf

Diebold Nixdorf, a major provider of automatic teller machines (ATMs) and payment technology to banks and retailers, recently suffered a ransomware attack that disrupted some operations. The company says the hackers never touched its ATMs or customer networks, and that the intrusion only affected its corporate network.

Canton, Ohio-based Diebold [NYSE: DBD] is currently the largest ATM provider in the United States, with an estimated 35 percent of the cash machine market worldwide. The 35,000-employee company also produces point-of-sale systems and software used by many retailers.

According to Diebold, on the evening of Saturday, April 25, the company’s security team discovered anomalous behavior on its corporate network. Suspecting a ransomware attack, Diebold said it immediately began disconnecting systems on that network to contain the spread of the malware.

Sources told KrebsOnSecurity that Diebold’s response affected services for over 100 of the company’s customers. Diebold said the company’s response to the attack did disrupt a system that automates field service technician requests, but that the incident did not affect customer networks or the general public.

“Diebold has determined that the spread of the malware has been contained,” Diebold said in a written statement provided to KrebsOnSecurity. “The incident did not affect ATMs, customer networks, or the general public, and its impact was not material to our business. Unfortunately, cybercrime is an ongoing challenge for all companies. Diebold Nixdorf takes the security of our systems and customer service very seriously. Our leadership has connected personally with customers to make them aware of the situation and how we addressed it.”

NOT SO PRO LOCK

An investigation determined that the intruders installed the ProLock ransomware, which experts say is a relatively uncommon ransomware strain that has gone through multiple names and iterations over the past few months.

For example, until recently ProLock was better known as “PwndLocker,” which is the name of the ransomware that infected servers at Lasalle County, Ill. in March. But the miscreants behind PwndLocker rebranded their malware after security experts at Emsisoft released a tool that let PwndLocker victims decrypt their files without paying the ransom.

Diebold claims it did not pay the ransom demanded by the attackers, although the company wouldn’t discuss the amount requested. But Lawrence Abrams of BleepingComputer said the ransom demanded for ProLock victims typically ranges in the six figures, from $175,000 to more than $660,000 depending on the size of the victim network.

Fabian Wosar, Emsisoft’s chief technology officer, said if Diebold’s claims about not paying their assailants are true, it’s probably for the best: That’s because current versions of ProLock’s decryptor tool will corrupt larger files such as database files.

As luck would have it, Emsisoft does offer a tool that fixes the decryptor so that it properly recovers files held hostage by ProLock, but it only works for victims who have already paid a ransom to the crooks behind ProLock.

“We do have a tool that fixes a bug in the decryptor, but it doesn’t work unless you have the decryption keys from the ransomware authors,” Wosar said.

WEEKEND WARRIORS

BleepingComputer’s Abrams said the timing of the attack on Diebold — Saturday evening — is quite common, and that ransomware purveyors tend to wait until the weekends to launch their attacks because that is typically when most organizations have the fewest number of technical staff on hand. Incidentally, weekends also are the time when the vast majority of ATM skimming attacks take place — for the same reason.

“After hours on Friday and Saturday nights are big, because they want to pull the trigger [on the ransomware] when no one is around,” Abrams said.

Many ransomware gangs have taken to stealing sensitive data from victims before launching the ransomware, as a sort of virtual cudgel to use against victims who don’t immediately acquiesce to a ransom demand.

Armed with the victim’s data — or data about the victim company’s partners or customers — the attackers can then threaten to publish or sell the information if victims refuse to pay up. Indeed, some of the larger ransomware groups are doing just that, constantly updating blogs on the Internet and the dark Web that publish the names and data stolen from victims who decline to pay.

So far, the crooks behind ProLock haven’t launched their own blog. But Abrams said the crime group behind it has indicated it is at least heading in that direction, noting that in his communications with the group in the wake of the Lasalle County attack they sent him an image and a list of folders suggesting they’d accessed sensitive data for that victim.

“I’ve been saying this ever since last year when the Maze ransomware group started publishing the names and data from their victims: Every ransomware attack has to be treated as a data breach now,” Abrams said.

Planet DebianJulien Danjou: Interview: The Performance of Python

Interview: The Performance of Python

Earlier this year, I was supposed to participate in dotPy, a one-day Python conference in Paris. The event has unfortunately been cancelled due to the COVID-19 pandemic.

Both Victor Stinner and I were supposed to attend that event. Victor had prepared a presentation about Python performance, while I was planning on talking about profiling.

Rather than being completely discouraged, Victor and I sat down (remotely) with Anne Laure from Behind the Code (a blog run by Welcome to the Jungle, the organizers of the dotPy conference).

We discussed Python performance, profiling, speed, projects, problems, analysis, optimization and the GIL.

You can read the interview here.


Planet DebianGunnar Wolf: Certified printer fumes

After losing a fair bit of hair due to quality and reliability issues with our home laser multifunctional (a Brother DCP1600-series, which we bought after checking it was meant to work on Linux… and it does, but with a very buggy, proprietary driver — besides the printer itself being of quite low quality), we decided it was time to survey the market again and get a color inkjet printer. I was not much of an enthusiast of the idea, until I found that all of the major manufacturers now offer refillable ink tanks instead of the darn expensive cartridges of past decades. Let’s see how it goes!

Of course, with over 20 years of training, I did my homework. I was about to buy an Epson printer, but decided on an HP Ink Tank 410 Wireless printer. The day it arrived, not wanting to fuss around too much before seeing the results, I connected it to my computer using the USB cable. Everything ran smoothly and happily! No driver hunting needed, and the print quality is superb… I hope, years from now, we stay with this impression.

Next day, I tried to print over WiFi. Of course, it requires configuration. And, of course, the configuration strongly wants you to do it from a Windows or MacOS machine — which I don’t have. OK, fall back to Android — for which an app download is required (and does not thrill me, but what can I say. Oh — and the app needs location services to even run. Why‽ Maybe because it interacts with the wireless network in a non-authenticated, WiFi Direct way?)

Anyway, things seem to work. But they don’t — my computers can identify and connect with the printer from CUPS, but nothing ever comes out. Printer paused, they say. The printer’s web interface is somewhat ambiguous — following old HP practices, I tried http://192.168.1.75:9100/ (no point in hiding my internal IP), and got a partial webpage sometimes (and nothing at all other times). Seeing the printer got detected over ipps://, my immediate reaction was to try pointing the browser at port 631. Seems to work! Got some odd messages… But it seems I’ll soon debug the issue away. I am not a familiar meddler in the dark lands of CUPS, our faithful print server, but I had to remember my toolkit…

# cupsenable HP_Ink_Tank_Wireless_410_series_C37468_ --release

Success in enabling, but it self-disabled right away… lpstat -t was no more generous, reporting only that it was still paused.

… Some hours later (mix in attending to the kids and whatnot), I finally remembered to try cupsctl --debug-logging, and magically, /var/log/cups/error_log turns from being quiet to being quite chatty. And, of course, my first print job starts being processed:

D [10/May/2020:23:07:20 -0500] Report: jobs-active=1
(...)
D [10/May/2020:23:07:25 -0500] [Job 174] Start rendering...
(...)
D [10/May/2020:23:07:25 -0500] [Job 174] STATE: -connecting-to-device
(...)

Everything looks fine and dandy so far! But, hey, given nothing came out of the printer… keep reading one more second of logs (a couple dozen lines)

D [10/May/2020:23:07:26 -0500] [Job 174] Connection is encrypted.
D [10/May/2020:23:07:26 -0500] [Job 174] Credentials are expired (Credentials have expired.)
D [10/May/2020:23:07:26 -0500] [Job 174] Printer credentials: HPC37468 / Thu, 01 Jan 1970 00:00:00 GMT / 28A59EF511A480A34798B6712DEEAE74
D [10/May/2020:23:07:26 -0500] [Job 174] No stored credentials.
D [10/May/2020:23:07:26 -0500] [Job 174] update_reasons(attr=0(), s=\"-cups-pki-invalid,cups-pki-changed,cups-pki-expired,cups-pki-unknown\")
D [10/May/2020:23:07:26 -0500] [Job 174] STATE: -cups-pki-expired
(...)
D [10/May/2020:23:08:00 -0500] [Job 174] envp[16]="CUPS_ENCRYPTION=IfRequested"
(...)
D [10/May/2020:23:08:00 -0500] [Job 174] envp[27]="PRINTER_STATE_REASONS=cups-pki-expired"

My first stabs were attempts to get CUPS not to care about expired certificates, but that option seems to have been hidden or removed from its usual place. Anyway, I was already frustrated.

WTF‽ Well, yes, it turns out that I had paid some attention to this in the Web interface the first time around, but let it pass (which speaks volumes about my security practices!):

Way, way, way too expired cert

So, the self-signed certificate the printer issued to itself expired 116 years before even being issued (is this maybe a Y2k38 bug? Sounds like it!). Interestingly, my CUPS log mentions that the printer credentials expire at the beginning of the Unix Epoch (01 Jan 1970 00:00:00 GMT).
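A quick way to check such a certificate from the shell, using the printer address from earlier in this post (openssl prints the issuer and validity dates of whatever certificate the printer presents):

echo | openssl s_client -connect 192.168.1.75:443 2>/dev/null | openssl x509 -noout -issuer -dates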

OK, let’s clickety-click away on the Web interface… Didn’t take me long to get to Network ⇒ Advanced settings ⇒ Certificates:

Can manage certs!

However, clicking on Configure leads me to the not very reassuring…

Way, way, way too expired cert

I don’t remember what I did for the next couple of minutes. Kept fuming… Until I parsed again the output of lpstat -t, and found that:

# lpstat -t
(...)
device for HP_Ink_Tank_Wireless_410_series_C37468_: ipps://HPF43909C37468.local:443/ipp/print
(...)

Hmmmm… CUPS is connecting using good ol’ port 443, as if it were a Web thingy… What if I do the same?

Now we are talking!

Click on “New self-signed certificate”, click on Next, a couple of reloads… And a very nice color print came out of the printer, yay!

Now, it still baffles me (of course I checked!): the self-signed certificate is now said to come from Issuer: CN=HPC37468, L=Vancouver, ST=Washington, C=US, O=HP, OU=HP-IPG. Alright, not that it matters (I can import a more meaningful one if I really feel like it), but why is it Issued On: 2019-06-14 and set to Expires On: 2029-06-11?

Anyway, print quality is quite nice. I hope to keep the printer long enough to rant at the certificate being expired in the future!

Comments

Jeff Epler (Adafruit) 2020-05-11 20:39:17 -0500

“Why is it Issued On: 2019-06-14 and set to Expires On: 2029-06-11?” → Because it’s 3650 days

Gunnar Wolf 2020-05-11 20:39:17 -0500

Nice catch! Thanks for doing the head-scratching for me 😉

Worse Than FailureCodeSOD: Selected Sort

Before Evalia took a job at Initech, her predecessor, "JR" had to get fired first. That wasn't too much of a challenge, because JR claimed he was the "God of JavaScript". That was how he signed each of the tickets he handled in the ticket system.

JR was not, in fact, a god. Since then, Evalia has been trying to resuscitate the projects he had been working on. That's how she found this code.

function sortSelect(selElem) {
    var tmpAry = new Array();
    for (var i=0;i<selElem.options.length;i++) {
        tmpAry[i] = new Array();
        tmpAry[i][0] = selElem.options[i].text;
        tmpAry[i][1] = selElem.options[i].value;
    }
    tmpAry.sort();
    while (selElem.options.length > 0) {
        selElem.options[0] = null;
    }
    for (var i=0;i<tmpAry.length;i++) {
        var op = new Option(tmpAry[i][0], tmpAry[i][1]);
        selElem.options[i] = op;
    }
    return;
}

This code sorts the elements in a drop down list, and it manages to do this in a… unique way.

First, we iterate across the elements in the list of options. We build a 2D array, where the first axis is the item, and the second axis contains the text caption and value of each option element.

Once we've built that array, we can sort it. Fortunately for us, when you sort a 2D array, JavaScript helpfully defaults to sorting by the first element in the second dimension, so this will sort by the text value.
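You can verify the sort behavior in a console: the default sort stringifies each element, so a nested array compares as its comma-joined form, caption first:

[["pear", 3], ["apple", 1]].sort()
// => [["apple", 1], ["pear", 3]], compared as the strings "apple,1" < "pear,3"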

Now that we have a sorted list of captions and values, we have to do something about the pesky old ones. So we iterate across the list to set each one to null. Well, not quite. We actually set the first item to null until the length is 0. Fortunately for us, the JavaScript length only takes into account elements with actual values, so this works.

Once they're all empty, we can repopulate the list by using our temporary array to create new options and put them in the list.

Credit to JR, I actually learned new things about JavaScript when trying to understand this code. I didn't know how sort behaved with 2D arrays, I'd never seen the while/length construct before, and I was shocked that it actually works. Of course, I'd never gotten myself into a situation where I'd needed those.

The truly "god-like" thing is that JR managed to take the task of sorting a list of items and turned it into a task that needed to visit each item in the list three times in addition to sorting. God-like, sure, but the kind of god that Lovecraft warned us about.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianDirk Eddelbuettel: #1 T^4: Adding Some Color to the Shell

The first proper video (following last week’s announcement) is up for the new T^4 series of video lightning talks with tips, tricks, tools, and toys. Today we make a small enhancement to the shell by enabling color output (if not already on by default).
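For reference, the classic way to switch this on, as a minimal sketch assuming GNU coreutils, grep, and diffutils:

# In ~/.bashrc: ask the common tools to colorize when writing to a terminal
alias ls='ls --color=auto'
alias grep='grep --color=auto'
alias diff='diff --color=auto'

The video may well do it differently; these aliases are just the standard starting point.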

The slides are available here.

Next week we continue on shell customization by looking at the prompt.

Also of note, a new repo at GitHub to support the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianEnrico Zini: Fraudsters and pirates

Adelheid Luise "Adele" Spitzeder ([ˈaːdl̩haɪt ʔaˈdeːlə ˈʃpɪtˌtseːdɐ]; 9 February 1832 – 27 or 28 October 1895), also known by her stage name Adele Vio, was a German actress, folk singer, and con artist. Initially a promising young actress, Spitzeder became a well-known private banker in 19th-century Munich when her theatrical success dwindled. Running what was possibly the first recorded Ponzi scheme, she offered large returns on investments by continually using the money of new investors to pay back the previous ones. At the height of her success, contemporary sources considered her the wealthiest woman in Bavaria.
Anne Bonny (possibly 1697 – possibly April 1782)[1][2] was an Irish pirate operating in the Caribbean, and one of the most famous female pirates of all time.[3] The little that is known of her life comes largely from Captain Charles Johnson's A General History of the Pyrates.
Mary Read (1685 – 28 April 1721), also known as Mark Read, was an English pirate. She and Anne Bonny are two of the most famed female pirates of all time, and among the few women known to have been convicted of piracy during the early 18th century, at the height of the "Golden Age of Piracy".
While piracy was predominantly a male occupation, a minority of pirates were women.[1] On many ships, women (as well as young boys) were prohibited by the ship's contract, which all crew members were required to sign.[2] :303

Planet DebianBen Hutchings: Debian LTS work, April 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and carried over 8.5 hours from March. I worked 26 hours this month, so I will carry over 2.5 hours to May.

I sent a (belated) request for testing an update of the linux package to 3.16.82. I then prepared and, after review, released Linux 3.16.83, including a large number of security fixes. I rebased the linux package onto that and will soon send out a request for testing. I also spent some time working on a still-embargoed security issue.

I did not spend significant time on any other LTS activities this month, and unfortunately missed the contributor meeting.

Planet DebianRussell Coker: IT Asset Management

In my last full-time position I managed the asset tracking database for my employer. It was one of those things that “someone” needed to do, and it seemed that the only way “someone” wouldn’t equate to “no-one” was for me to do it – which was ok. We used Snipe IT [1] to track the assets. I don’t have enough experience with asset tracking to say that Snipe is better or worse than average, but it basically did the job. Asset serial numbers are stored, you can have asset types that allow you to just add one more of a particular item, purchase dates are stored (which makes warranty tracking easier), and every asset is associated with a person or listed as available. While I can’t say that Snipe IT is better than other products, I can say that it will do the job reasonably well.

One problem that I didn’t discover until way too late was the fact that the finance people weren’t tracking serial numbers, and that some assets in the database had the same asset IDs as the finance department while some had different ones. The best advice I can give to anyone who gets involved with asset tracking is to immediately chat with finance about how they track things: you need to know whether the same asset IDs are used and whether serial numbers are tracked by finance. I was pleased to discover that my colleagues were all honourable people, as there was no apparent evaporation of valuable assets even though there was little ability to discover who might have been the last person to use some of the assets.

One problem that I’ve seen at many places is treating small items like keyboards and mice as “assets”. I think that anything that is worth less than 1 hour’s pay at the minimum wage (the price of a typical PC keyboard or mouse) isn’t worth tracking; treat it as a disposable item. If you hire a programmer who requests an unusually expensive keyboard or mouse (as some do), it still won’t be a lot of money when compared to their salary. Some of the older keyboards and mice that companies have are nasty; months of people eating lunch over them leaves them greasy and sticky. I think that the best thing to do with the keyboards and mice is to give them away when people leave, and when new people join the company buy new hardware for them. If a company can’t spend $25 on a new keyboard and mouse for each new employee then they either have a massive problem of staff turnover or a lack of priority on morale.

Planet DebianNorbert Preining: Updating Dovecot for Debian

A friend’s tweet pointed me at the removal of dovecot from Debian/testing, which surprised me a bit. Investigating the situation, it seems that Dovecot in Debian is lagging a bit behind in releases and hasn’t seen responses to some RC bugs. This sounds critical to me, as dovecot is a core part of many mail setups, so I prepared updated packages.

Based on the latest released version of Dovecot, 2.3.10, I have made a package starting from the current Debian packaging and adjusted to the newer upstream. The package builds on Debian Buster (10), Testing, and Unstable on i386 and x64 archs. The packages are available on OBS, as usual:

For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Testing/ ./

For Debian 10 Buster:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_10/ ./

To make these repositories work, don’t forget that you need to import my OBS gpg key: obs-npreining.asc. Best to download it and put the file at /etc/apt/trusted.gpg.d/obs-npreining.asc, as sketched below.
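Putting the Unstable case together, a minimal sketch (assuming the key file has already been downloaded as obs-npreining.asc; the sources list file name here is illustrative):

sudo cp obs-npreining.asc /etc/apt/trusted.gpg.d/obs-npreining.asc
echo 'deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Unstable/ ./' | sudo tee /etc/apt/sources.list.d/dovecot-obs.list
sudo apt update
sudo apt install dovecot-core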

These packages are provided without any warranty. Enjoy.

Planet DebianNOKUBI Takatsugu: Virtual Background using webcam

I made a webpage that produces a virtual background using a webcam.

https://knok.github.io/virtbg/

Source code:
https://github.com/knok/knok.github.io/tree/master/virtbg

Some online meeting software (Zoom, Microsoft Teams) supports virtual backgrounds, but I want to use other software like Jitsi (or Google Meet), so I made my own.

To make this, I referred to the article “Open Source Virtual Background”. The following figure is the diagram.

That solution depends on Docker, a GPU, and v4l2loopback (which only works on Linux), so I wanted to make a more generic one. By making it a webpage and using OBS Studio with plugins (obs-v4l2sink, OBS-VirtualCam, or OBS (macOS) Virtual Camera), you can use the solution on more platforms.

By making it a single webpage, I can also reduce the overhead of inter-process communication over HTTP via Docker.
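For the curious, the heart of the in-browser approach fits in a few lines. This is only a sketch, assuming the TensorFlow.js body-pix model is loaded via a script tag; the element IDs are illustrative, not necessarily what the site uses:

async function main() {
  // Assumes <video id="webcam"> is already streaming and <canvas id="out"> exists.
  const net = await bodyPix.load();  // pretrained person-segmentation model
  const video = document.getElementById('webcam');
  const canvas = document.getElementById('out');
  async function renderFrame() {
    const segmentation = await net.segmentPerson(video);      // per-pixel person mask
    bodyPix.drawBokehEffect(canvas, video, segmentation, 9);  // blur everything but the person
    requestAnimationFrame(renderFrame);
  }
  renderFrame();
}
main();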

This is an example animation:

[animated demo in the original post]

Using a Jitsi snapshot:

[screenshot in the original post]

Unfortunately, BodyPix releases only pretrained models, no training data.

I need more improvements:

  • Accept arbitrary background images
  • Support choosing the camera device
  • A more useful UI

Planet DebianRuss Allbery: Review: Golden Gates

Review: Golden Gates, by Conor Dougherty

Publisher: Penguin
Copyright: 2020
ISBN: 0-525-56022-X
Format: Kindle
Pages: 249

This review, for reasons that will hopefully become clear later, starts with a personal digression.

I have been interested in political theory my entire life. That sounds like something admirable, or at least neutral. It's not. "Interested" means that I have opinions that are generally stronger than my depth of knowledge warrants. "Interested" means that I like thinking about and casting judgment on how politics should be done without doing the work of politics myself. And "political theory" is different than politics in important ways, not the least of which is that political actions have rarely been a direct danger to me or my family. I have the luxury of arguing about politics as a theory.

In short, I'm at high risk of being one of those people who has an opinion about everything and shares it on Twitter.

I'm still in the process (to be honest, near the beginning of the process) of making something useful out of that interest. I've had some success when I become enough a part of a community that I can do some of the political work, understand the arguments at a level deeper than theory, and have to deal with the consequences of my own opinions. But those communities have been on-line and relatively low stakes. For the big political problems, the ones that involve governments and taxes and laws, those that decide who gets medical treatment and income support and who doesn't, to ever improve, more people like me need to learn enough about the practical details that we can do the real work of fixing them, rather than only making our native (and generally privileged) communities better for ourselves.

I haven't found my path helping with that work yet. But I do have a concrete, challenging, local political question that makes me coldly furious: housing policy. Hence this book.

Golden Gates is about housing policy in the notoriously underbuilt and therefore incredibly expensive San Francisco Bay Area, where I live. I wanted to deepen that emotional reaction to the failures of housing policy with facts and analysis. Golden Gates does provide some of that. But this also turns out to be a book about the translation of political theory into practice, about the messiness and conflict that results, and about the difficult process of measuring success. It's also a book about how substantial agreement on the basics of necessary political change can still founder on the shoals of prioritization, tribalism, and people who are interested in political theory.

In short, it's a book about the difficulty of changing the world instead of arguing about how to change it.

This is not a direct analysis of housing policy, although Dougherty provides the basics as background. Rather, it's the story of the political fight over housing told primarily through two lenses: Sonja Trauss, founder of BARF (the Bay Area Renters' Federation); and a Redwood City apartment complex, the people who fought its rent increases, and the nun who eventually purchased it. Around that framework, Dougherty writes about the Howard Jarvis Taxpayers Association and the history of California's Proposition 13, a fight over a development in Lafayette, the logistics challenge of constructing sufficient housing even when approved, and the political career of Scott Wiener, the hated opponent of every city fighting for the continued ability to arbitrarily veto any new housing.

One of the things Golden Gates helped clarify for me is that there are three core interest groups that have to be part of any discussion of Bay Area housing: homeowners who want to limit or eliminate local change, renters who are vulnerable to gentrification and redevelopment, and the people who want to live in that area and can't (which includes people who want to move there, but more sympathetically includes all the people who work there but can't afford to live locally, such as teachers, day care workers, food service workers, and, well, just about anyone who doesn't work in tech). (As with any political classification, statements about collectives may not apply to individuals; there are numerous people who appear to fall into one group but who vote in alignment with another.) Dougherty makes it clear that housing policy is intractable in part because the policies that most clearly help one of those three groups hurt the other two.

As advertised by the subtitle, Dougherty's focus is on the fight for more housing. Those who already own homes whose values have been inflated by artificial scarcity, or who want to preserve such stratified living conditions as low-density, large-lot single-family dwellings within short mass-transit commute of one of the densest cities in the United States, don't get a lot of sympathy or focus here except as opponents. I understand this choice; I also don't have much sympathy. But I do wish that Dougherty had spent more time discussing the unsustainable promise that California has implicitly made to homeowners: housing may be impossibly expensive, but if you can manage to reach that pinnacle of financial success, the ongoing value of your home is guaranteed. He does mention this in passing, but I don't think he puts enough emphasis on the impact that a single huge, illiquid investment that is heavily encouraged by government policy has on people's attitude towards anything that jeopardizes that investment.

The bulk of this book focuses on the two factions trying to make housing cheaper: Sonja Trauss and others who are pushing for construction of more housing, and tenant groups trying to manage the price of existing housing for those who have to rent. The tragedy of Bay Area housing is that even the faintest connection of housing to the economic principle of supply and demand implies that the long-term goals of those two groups align. Building more housing will decrease the cost of housing, at least if you build enough of it over a long enough period of time. But in the short term, particularly given the amount of Bay Area land pre-emptively excluded from housing by environmental protection and the actions of the existing homeowners, building more housing usually means tearing down cheap lower-density housing and replacing it with expensive higher-density housing. And that destroys people's lives.

I'll admit my natural sympathy is with Trauss on pure economic grounds. There simply aren't enough places to live in the Bay Area, and the number of people in the area will not decrease. To the marginal extent that growth even slows, that's another tale of misery involving "super commutes" of over 90 minutes each way. But the most affecting part of this book was the detailed look at what redevelopment looks like for the people who thought they had housing, and how it disrupts and destroys existing communities. It's impossible to read those stories and not be moved. But it's equally impossible to not be moved by the stories of people who live in their cars during the week, going home only on weekends because they have to live too far away from their jobs to commute.

This is exactly the kind of politics that I lose when I take a superficial interest in political theory. Even when I feel confident in a guiding principle, the hard part of real-world politics is bringing real people with you in the implementation and mitigating the damage that any choice of implementation will cause. There are a lot of details, and those details matter. Without the right balance between addressing a long-term deficit and providing short-term protection and relief, an attempt to alleviate unsustainable long-term misery creates more short-term misery for those least able to afford it. And while I personally may have less sympathy for the relatively well-off who have clawed their way into their own mortgage, being cavalier with their goals and their financial needs is both poor ethics and poor politics. Mobilizing political opponents who have resources and vote locally isn't a winning strategy.

Dougherty is a reporter, not a housing or public policy expert, so Golden Gates poses problems and tells stories rather than describes solutions. This book didn't lead me to a brilliant plan for fixing the Bay Area housing crunch, or hand me a roadmap for how to get effectively involved in local politics. What it did do is tell stories about what political approaches have worked, how they've worked, what change they've created, and the limitations of that change. Solving political problems is work. That work requires understanding people and balancing concerns, which in turn requires a lot of empathy, a lot of communication, and sometimes finding a way to make unlikely allies.

I'm not sure how broad the appeal of this book will be outside of those who live in the region. Some aspects of the fight for housing generalize, but the Bay Area (and I suspect every region) has properties specific to it or to the state of California. It has also reached an extreme of housing shortage that is rivaled in the United States only by New York City, which changes the nature of the solutions. But if you want to seriously engage with Bay Area housing policy, knowing the background explained here is nearly mandatory. There are some flaws — I wish Dougherty would have talked more about traffic and transit policy, although I realize that could be another book — but this is an important story told well.

If this somewhat narrow topic is within your interests, highly recommended.

Rating: 8 out of 10

Planet Linux AustraliaMichael Still: A breadmaker loaf my kids will actually eat

Share

My dad asked me to document some of my baking experiments from the recent natural disasters, which I wanted to do anyway so that I could remember the recipes. It’s taken me a while to get around to it though, because animated GIFs on reddit are a terrible medium for recipe storage, and because I’ve been distracted with other shiny objects. That said, let’s start with the basics — a breadmaker loaf that my kids will actually eat.

A loaf of bread baked in the oven

This recipe took a bunch of iterations to get right over the last year or so, but I’ll spare you the long boring details. However, I suspect part of the problem is that the recipe varies by bread maker. Oh, and the salt is really important — don’t skip the salt!

Wet ingredients (add first)

  • 1.5 cups of warm water (we have an instantaneous gas hot water system, so I pick 42 degrees)
  • 0.25 cups of oil (I use bran oil)

Dry ingredients (add second)

I just kind of chuck these in, although I tend to put the non-flour ingredients in a corner together for reasons that I can’t explain.

  • 3.5 cups of bakers flour (must be bakers flour, not plain flour)
  • 2 teaspoons of instant yeast (we keep it in the freezer in a big packet, not the sachets)
  • 4 teaspoons of white sugar
  • 1 teaspoon of salt
  • 2 teaspoons of bread improver

I then just let my bread maker do its thing, which takes about three hours including baking. If I am going to bake the bread in the oven, then the dough takes about two hours, but I let the dough rise for another 30 to 60 minutes before baking.

A loaf of bread from the bread maker

To be honest, I think the result is better from the oven, but it’s a little more work. The bread maker loaves are a bit prone to collapsing (you can see it starting in the example above), and there is a big kneading-hook indent in the middle of the bottom of the loaf.

The oven baking technique took a while to develop, but I’ll cover that in a later post.

Share

,

Planet DebianAndrew Cater: CD / DVD testing for Buster release 4 - 202005092130 - Slowing down a bit - but still going.

The last few architectures are being built in the background. Schweer has just confirmed successful testing of all the Debian Edu images - thanks to him, as ever, and to all involved. We're slowing up a bit - it's been a long, hot day and it's not quite over yet. The images release looks to be well on course. As ever, the point release incorporates security fixes and some packages have been removed. The release announcement at https://www.debian.org/News/2020/20200509 gives the details.

Planet DebianAndrew Cater: CD image testing for Buster release 4 - 202005091950 - Most install images checking out well

Lots of hard work going on. schweer has just validated all of the Debian Edu images.  Most of the normal install images have gone through tests with only a few minor hitches. Now moving on to the Live images. These take longer to download and test but we're working through them gradually.

As ever: a point release doesn't mean that the Debian you have is now obsolete - an apt-get / aptitude update will bring you up to the latest release very quickly. If you are updating regularly, you will have most of these files anyway. One small thing: the tools may report that the release version has changed. This is quite normal - base files have changed to reflect the new point release and this causes the notification. The notification is a small warning so that you are not taken by complete surprise but it is quite normal in the circumstances of a Debian point release.
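In practice, bringing a system up to the new point release usually amounts to no more than (a minimal sketch):

sudo apt-get update
sudo apt-get upgrade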

Thanks to the other folk doing the hard work: 10+ hours and continuing.

Rondam RamblingsWeek-end Republican hypocrisy round-up

I've been collecting headlines that I thought would be worth writing about, but the sheer volume of insanity coming in on my news feed just seems overwhelming because I read it all against a backdrop of the fact that Donald Trump's approval ratings remain in the mid-40s.  The Senate might be in play, but just barely.  Biden holds a small lead over Trump, but only a small one.  A few months ago

,

CryptogramFriday Squid Blogging: Jurassic Squid Attack

It's the oldest squid attack on record:

An ancient squid-like creature with 10 arms covered in hooks had just crushed the skull of its prey in a vicious attack when disaster struck, killing both predator and prey, according to a Jurassic period fossil of the duo found on the southern coast of England.

This 200 million-year-old fossil was originally discovered in the 19th century, but a new analysis reveals that it's the oldest known example of a coleoid, or a class of cephalopods that includes octopuses, squid and cuttlefish, attacking prey.

More news.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramUsed Tesla Components Contain Personal Information

Used Tesla components, sold on eBay, still contain personal information, even after a factory reset.

This is a decades-old problem. It's a problem with used hard drives. It's a problem with used photocopiers and printers. It will be a problem with IoT devices. It'll be a problem with everything, until we decide that data deletion is a priority.

Krebs on SecurityMeant to Combat ID Theft, Unemployment Benefits Letter Prompts ID Theft Worries

Millions of Americans now filing for unemployment will receive benefits via a prepaid card issued by U.S. Bank, a Minnesota-based financial institution that handles unemployment payments for more than a dozen U.S. states. Some of these unemployment applications will trigger an automatic letter from U.S. Bank to the applicant. The letters are intended to prevent identity theft, but many people are mistaking these vague missives for a notification that someone has hijacked their identity.

So far this month, two KrebsOnSecurity readers have forwarded scans of form letters they received via snail mail that mentioned an address change associated with some type of payment card, but which specified neither the entity that issued the card nor any useful information about the card itself.

Searching for snippets of text from the letter online revealed pages of complaints from consumers who appear confused about the source and reason for the letter, with most dismissing it as either a scam or considering it a notice of attempted identity theft. Here’s what the letter looks like:

A scan of the form letter sent by U.S. Bank to countless people enrolling in state unemployment benefits.

My first thought when a reader shared a copy of the letter was that he recently had been the victim of identity theft. It took a fair amount of digging online to discover that the nebulously named “Cardholder Services” address in Florida referenced at the top of the letter is an address exclusively used by U.S. Bank.

That digging indicated U.S. Bank currently manages the disbursement of funds for unemployment programs in at least 17 states, including Arkansas, Colorado, Delaware, Idaho, Louisiana, Maine, Minnesota, Nebraska, North Dakota, Ohio, Oregon, Pennsylvania, South Dakota, Texas, Utah, Wisconsin, and Wyoming. The funds are distributed through a prepaid debit card called ReliaCard.

To make matters more confusing, the flood of new unemployment applications from people out of work thanks to the COVID-19 pandemic reportedly has overwhelmed U.S. Bank’s system, meaning that many people receiving these letters haven’t yet gotten their ReliaCard and thus lack any frame of reference for having applied for a new payment card.

Reached for comment about the unhelpful letters, U.S. Bank said it automatically mails them to current and former ReliaCard customers when changes in its system are triggered by a customer – including small tweaks to an address — such as changing “Street” to “St.”

“This can include letters to people who formerly had a ReliaCard account, but whose accounts are now inactive,” the company said in a statement shared with KrebsOnSecurity. “If someone files for unemployment and had a ReliaCard in years past for another claim, we can work with the state to activate that card so the cardholder can use it again.”

U.S. Bank said the letters are designed to confirm with the cardholder that the address change is valid and to combat identity theft. But clearly, for many recipients they are having the opposite effect.

“We encourage any cardholders who have questions about the letters to call the number listed on the back of their cards (or 855-282-6161),” the company said.

That’s nice to know, because it’s not obvious from reading the letter which card is being referenced. U.S. Bank said it would take my feedback under advisement, but that the letters were intended to be generic in nature to protect cardholder privacy.

“We are always seeking to improve our programs, so thank you for bringing this to our attention,” the company said. “Our teams are looking at ways to provide more specific information in our communications with cardholders.”

Worse Than FailureError'd: Errors as Substitution for Success

"Why would I be a great fit? Well, [Recruiter], I can [Skill], [Talent], and, most of all, I am certified in [qualification]." David G. wrote.

 

Dave writes, "For years, I've gone by Dave, but from now you can just call me 'Und'."

 

"Sure, BBC Shop, why not, %redirect_to_store_name% it is," wrote Robin L.

 

Christer writes, "Turns out that everything, even if data is missing, has a price."

 

"Well...I have been debating if I should have opted for a few dozen extra exabytes recently," Jon writes.

 

Dustin W. wrote, "$14 Million seems a bit steep for boots, but hey, maybe it's because the shoes come along with actual timberland?"

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

CryptogramiOS XML Bug

This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:

What a crazy bug, and Siguza's explanation is very cogent. Basically, it comes down to this:

  • XML is terrible.
  • iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
  • iOS's sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.

So Siguza's exploit -- which granted an app full access to the entire file system, and more -- uses malformed XML comments constructed in a way that one of iOS's XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn't see the fishy entitlements because it thinks they're inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
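To make that concrete, here is a purely illustrative shape of such a parser disagreement (not Siguza's actual payload):

<!--> <key>restricted-entitlement</key> <!-- -->

A parser that accepts the invalid <!--> as a complete comment sees the entitlement key; a parser that treats it as the opening of a comment running to the next --> sees only a comment.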

This is fixed in the new iOS release, 13.5 beta 3.

Comment:

Implementing 4 different parsers is just asking for trouble, and the "fix" is of the crappiest sort, bolting on more crap to check they're doing the right thing in this single case. None of this is encouraging.

More commentary. Hacker News thread.

Krebs on SecurityTech Support Scam Uses Child Porn Warning

A new email scam is making the rounds, warning recipients that someone using their Internet address has been caught viewing child pornography. The message claims to have been sent from Microsoft Support, and says the recipient’s Windows license will be suspended unless they call an “MS Support” number to reinstate the license, but the number goes to a phony tech support scam that tries to trick callers into giving fraudsters direct access to their PCs.

The fraudulent message tries to seem more official by listing what are supposed to be the recipient’s IP address and MAC address. The latter term stands for “Media Access Control” and refers to a unique identifier assigned to a computer’s network interface.

However, this address is not visible to others outside of the user’s local network, and in any case the MAC address listed in the scam email is not even a full MAC address, which normally includes six groups of two alphanumeric characters separated by a colon. Also, the IP address cited in the email does not appear to have anything to do with the actual Internet address of the recipient.

Not that either of these details will be obvious to many people who receive this spam email, which states:

“We have found instances of child pornography accessed from your IP address & MAC Address.
IP Address: 206.19.86.255
MAC Address : A0:95:6D:C7

This is violation of Information Technology Act of 1996. For now we are Cancelling your Windows License, which means stopping all windows activities & updates on your computer.

If this was not You and would like to Reinstate the Windows License, Please call MS Support Team at 1-844-286-1916 for further help.

Microsoft Support
1 844 286 1916”

KrebsOnSecurity called the toll-free number in the email and was connected after a short hold to a man who claimed to be from MS Support. Immediately, he wanted me to type a specific Web address into my browser so he could take remote control over my computer. I was going to play along for a while but for some reason our call was terminated abruptly after several minutes.

These kinds of support scams are a dime a dozen, unfortunately. They prey mainly on elderly and unsophisticated Internet users, walking the frightened caller through a series of steps that allow the fraudsters to take complete, remote control over the system. Once inside the target’s PC, the scammer invariably finds all kinds of imaginary problems that need fixing, at which point the caller is asked for a credit card number or some form of payment and charged an exorbitant fee for some dubious service or software.

What seems new about this scam is the child porn angle, which I’m sure will worry quite a few recipients. I say this because over the past few weeks, someone has massively started sending the same type of sextortion emails that first began in earnest in the summer of 2018, and incredibly over the past few days I’ve received almost a dozen emails from readers wondering if they should be concerned or if they should pay the extortion demand.

Here’s a hard and fast rule: Never respond to spam, and certainly not to any email that threatens some negative consequence unless you respond. Doing otherwise only invites more spammy and scammy emails. On the other hand, I fully support the idea of tying up this scammer’s toll-free number with time-wasting calls.

Worse Than FailureRepresentative Line: Separate Replacements

There's bad date handling code. There's bad date formatting code. There's bad date handling code that abuses date formatting to stringify dates. There are cases where the developer clearly doesn't know the built-in date methods, and cases where they did, but clearly just didn't want to use them.

There are plenty of apocalyptically bad date handling options, but honestly, that gets a little boring after a while. My personal favorite will always be the near misses. Code that almost, but not quite, "gets it".

Karl's co-worker provided a little nugget of exactly that kind of gold.

formattedID = DateTime.Now.ToString("dd / MM / yyyy").Replace(" / ", "")

Here, they understand that a ToString on a DateTime allows you to pass a format string. They even understand that the format string lets you customize the separators (or they think spaces are standard in date formats). But they didn't quite make the leap to thinking, "hey, maybe I don't need to supply separators," so they Replace them.
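The whole round trip collapses into a single format string; a minimal equivalent, assuming the same day-month-year ordering is what was wanted:

formattedID = DateTime.Now.ToString("ddMMyyyy")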

There are probably a few other replacements that need to be made in the codebase.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Google AdsenseResources to help optimize your business

Online content and media consumption behaviors are continuously evolving. If you'd like to optimize your online business and help improve your AdSense performance, it's important to follow and adapt to the trends. We'd like to provide some resources to help you successfully navigate in an ever-changing digital environment.

Adapt your content to changing trends

It’s important to understand what’s top of mind for the people you’re aiming to reach in order to make your content interesting and useful to wide audiences. Below are some tools you can use to optimize your content:

Understand user interests 

Use Google Trends to analyze the popularity of top search queries in Google Search across regions and languages. If you need help with understanding, using and visualizing the data better, you can get Google Trends lessons.  

Stay on top of market trends in a dynamic environment and reflect it on your content to keep it up to date. While doing so, please be mindful of our content policies.

Use Question Hub to create richer content by leveraging unanswered questions online. Review these questions to get inspired and create deeper, more comprehensive content.

Track how your content performs 

Get to know your audience and how they engage with your site through Google Analytics. The earlier you spot changes in your user behavior, the quicker you can address them. You can review the below reports to get the insights: 

  • Realtime Content Insights to identify the most popular articles amongst your audience
  • Behavior Reports to understand the overall page and content performance of your site
  • Acquisition Reports to review the shift in your site traffic and traffic sources. If you see unusual spikes from certain sources, you might want to monitor them. 
  • AdSense Overview to see your revenue information once you link your AdSense account to Analytics. 

As an addition to your current content strategy, experiment with different content formats such as video or infographics and track the engagement on your site. If you see an improvement, you can double down on those content formats. Diversifying your content could help you expand your audience, and also improve the engagement of your current ones. 

Optimize your revenue stream

When your content is ready, appealing and easy to reach, you can optimize your AdSense account to maximize your revenue from the content you created. We know that creating content takes time, so we’d like to remind you of some solutions that you can use to get the most out of your content.

You may consider using Auto ads to help you increase your ads revenue. Auto ads are optimized to deliver better performing ads, so that you can spend more time creating the content your audience is searching for. As they work through any AdSense ad code, you can start using Auto ads by turning them on in your account.

As time spent on mobile increases, it becomes even more important to have a mobile-friendly site with good page speed. This will help people access your content without problems. Make sure your ad units are responsive in order to provide a positive ad experience regardless of which device people use to visit your site. 

Lastly, make sure that your site complies with the AdSense Program policies so that your business can grow sustainably. 

We’re here to support you through the AdSense forums, email and troubleshooters. Learn more about the support options available. 


Krebs on SecurityEurope’s Largest Private Hospital Operator Fresenius Hit by Ransomware

Fresenius, Europe’s largest private hospital operator and a major provider of dialysis products and services that are in such high demand thanks to the COVID-19 pandemic, has been hit in a ransomware cyber attack on its technology systems. The company said the incident has limited some of its operations, but that patient care continues.

Based in Germany, the Fresenius Group includes four independent businesses: Fresenius Medical Care, a leading provider of care to those suffering from kidney failure; Fresenius Helios, Europe’s largest private hospital operator (according to the company’s Web site); Fresenius Kabi, which supplies pharmaceutical drugs and medical devices; and Fresenius Vamed, which manages healthcare facilities.

Overall, Fresenius employs nearly 300,000 people across more than 100 countries, and is ranked 258th on the Forbes Global 2000. The company provides products and services for dialysis, hospitals, and inpatient and outpatient care, with nearly 40 percent of the market share for dialysis in the United States. This is worrisome because COVID-19 causes many patients to experience kidney failure, which has led to a shortage of dialysis machines and supplies.

On Tuesday, a KrebsOnSecurity reader who asked to remain anonymous said a relative working for Fresenius Kabi’s U.S. operations reported that computers in his company’s building had been roped off, and that a cyber attack had affected every part of the company’s operations around the globe.

The reader said the apparent culprit was the Snake ransomware, a relatively new strain first detailed earlier this year that is being used to shake down large businesses, holding their IT systems and data hostage in exchange for payment in a digital currency such as bitcoin.

Fresenius spokesperson Matt Kuhn confirmed the company was struggling with a computer virus outbreak.

“I can confirm that Fresenius’ IT security detected a computer virus on company computers,” Kuhn said in a written statement shared with KrebsOnSecurity. “As a precautionary measure in accordance with our security protocol drawn up for such cases, steps have been taken to prevent further spread. We have also informed the relevant investigating authorities and while some functions within the company are currently limited, patient care continues. Our IT experts are continuing to work on solving the problem as quickly as possible and ensuring that operations run as smoothly as possible.”

The assault on Fresenius comes amid increasingly targeted attacks against healthcare providers on the front lines of responding to the COVID-19 pandemic. In April, the international police organization INTERPOL warned it “has detected a significant increase in the number of attempted ransomware attacks against key organizations and infrastructure engaged in the virus response. Cybercriminals are using ransomware to hold hospitals and medical services digitally hostage, preventing them from accessing vital files and systems until a ransom is paid.”

On Tuesday, the Department of Homeland Security‘s Cybersecurity and Infrastructure Security Agency (CISA) issued an alert along with the U.K.’s National Cyber Security Centre warning that so-called “advanced persistent threat” groups — state-sponsored hacking teams — are actively targeting organizations involved in both national and international COVID-19 responses.

“APT actors frequently target organizations in order to collect bulk personal information, intellectual property, and intelligence that aligns with national priorities,” the alert reads. “The pandemic has likely raised additional interest for APT actors to gather information related to COVID-19. For example, actors may seek to obtain intelligence on national and international healthcare policy, or acquire sensitive data on COVID-19-related research.”

Once considered by many to be isolated extortion attacks, ransomware infestations have become de facto data breaches for many victim companies. That’s because some of the more active ransomware gangs have taken to downloading reams of data from targets before launching the ransomware inside their systems. Some or all of this data is then published on victim-shaming sites set up by the ransomware gangs as a way to pressure victim companies into paying up.

Security researchers say the Snake ransomware is somewhat unique in that it seeks to identify IT processes tied to enterprise management tools and large-scale industrial control systems (ICS), such as production and manufacturing networks.

While some ransomware groups targeting businesses have publicly pledged not to single out healthcare providers for the duration of the pandemic, attacks on medical care facilities have continued nonetheless. In late April, Parkview Medical Center in Pueblo, Colo. was hit in a ransomware attack that reportedly rendered inoperable the hospital’s system for storing patient information.

Fresenius declined to answer questions about specifics of the attack, saying it does not provide detailed information or comments on IT security matters. It remains unclear whether the company will pay a ransom demand to recover from the infection. But if it does so, it may not be the first time: According to my reader source, Fresenius paid $1.5 million to resolve a previous ransomware infection.

“This new attack is on a far greater scale, though,” the reader said.

Update, May 7, 11:44 a.m. ET: Lawrence Abrams over at Bleeping Computer says the attack on Fresenius appears to be part of a larger campaign by the Snake ransomware crooks that kicked into high gear over the past few days. The report notes that Snake also siphons unencrypted files before encrypting computers on a network, and that victims are given roughly 48 hours to pay up or see their internal files posted online for all to access.

LongNowThe Cataclysm Sentence

WNYC’s Radiolab recently released a podcast about what forms of knowledge are worth passing on to future generations.

One day in 1961, the famous physicist Richard Feynman stepped in front of a Caltech lecture hall and posed this question to a group of undergraduate students: “If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence was passed on to the next generation of creatures, what statement would contain the most information in the fewest words?” Now, Feynman had an answer to his own question – a good one. But his question got the entire team at Radiolab wondering, what did his sentence leave out? So we posed Feynman’s cataclysm question to some of our favorite writers, artists, historians, futurists – all kinds of great thinkers. We asked them, “What’s the one sentence you would want to pass on to the next generation that would contain the most information in the fewest words?” What came back was an explosive collage of what it means to be alive right here and now, and what we want to say before we go.

The episode’s framing is very much in line with our Manual For Civilization project. A few Long Now Members and past speakers contributed answers to the project, including Alison Gopnik, Maria Popova, and James Gleick.

CryptogramILOVEYOU Virus

It's the twentieth anniversary of the ILOVEYOU virus, and here are three interesting articles about it and its effects on software design.

Worse Than FailureCodeSOD: Dating Automation

Good idea: having QA developers who can build tooling to automate tests. Testing is tedious, testing needs to be executed repeatedly, and we're not just talking simple unit tests, but in an ideal world key functionality gets benchmarked against acceptance tests. API endpoints get routinely checked.

There are costs and benefits to this, though. Each piece of automation is another piece of code that needs to be maintained. It needs to be modified as requirements change. It can have bugs.

And, like any block of code, it can have WTFs.

Nanette got a ticket from QA, which complained that one of the web API endpoints wasn't returning data. "Please confirm why this API isn't returning data."

It didn't take long before Nanette suspected the problem wasn't in the API, but may be in how QA was figuring out its date ranges:

private void setRange(int days){
    DateFormat df = new SimpleDateFormat("yyyy-MM-dd");
    Date d = new Date();
    Calendar c = Calendar.getInstance();
    c.setTime(d);
    Date start = c.getTime();
    if(days==-1){
        c.add(Calendar.DAY_OF_MONTH, -1);
        assertThat(c.getTime()).isNotEqualTo(start);
    } else if(days==-7){
        c.add(Calendar.DAY_OF_MONTH, -7);
        assertThat(c.getTime()).isNotEqualTo(start);
    } else if (days==-30){
        c.add(Calendar.DAY_OF_MONTH, -30);
        assertThat(c.getTime()).isNotEqualTo(start);
    } else if (days==-365){
        c.add(Calendar.DAY_OF_MONTH, -365);
        assertThat(c.getTime()).isNotEqualTo(start);
    }
    from = df.format(start).toString()+"T07:00:00.000Z";
    to = df.format(d).toString()+"T07:00:00.000Z";
}

Now, the Java Calendar object is, and forever will be, the real WTF with dates. But Java 8 is "only" a few years back, so it's not surprising to see code that still uses that API. Though "uses" might be a bit too strong of a word.

The apparent goal is to set a date range that is one day, one week, one month, or one year prior to the current day. And we can trace through that logic, by checking out the calls to c.add, which even get asserted to make sure the built-in API does what the built-in API is supposed to do.

None of that is necessary, of course- if you only want to support certain values, you could just validate those and simply do c.add(Calendar.DAY_OF_MONTH, days). You can keep the asserts if you want.
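If it helps to see that simpler approach spelled out, here's a minimal sketch of the intended logic, written in Python for brevity (the function and constant names are mine, not from the original code):

from datetime import date, timedelta

SUPPORTED_OFFSETS = {-1, -7, -30, -365}   # day, week, "month", "year"

def set_range(days):
    # Validate the supported values, then compute the range in one step.
    if days not in SUPPORTED_OFFSETS:
        raise ValueError("unsupported offset: %d" % days)
    today = date.today()
    start = today + timedelta(days=days)   # days is negative, so this goes back in time
    suffix = "T07:00:00.000Z"
    return start.isoformat() + suffix, today.isoformat() + suffix

The whole if/else ladder collapses into one validation check and one date computation.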

But none of that is necessary, because after all that awkward validation, we don't actually use those calculated values. from is equal to start which is equal to the Calendar's current time, which it got from d, which is a new Date() which means it's the current time.

So from and to get to be set to the current time, giving us a date range that is 0 days long. Even better, we can see that from and to, which are clearly class members, are string types, thus the additional calls to DateFormat.format. Remember, .format returns a string, so we have an extra call to toString which really makes sure that this is a string.

The secret downside to automated test suites is that you need to write tests for your tests, which eventually get complicated enough that you need to write tests for your tests which test your tests, and before you know it, you're not writing any code that does real work at all.

Which, in this case, maybe writing no code would have been an improvement.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Chaotic IdealismObesity, COVID, and Statistical Observations

I have been watching some YouTube videos from doctors and scientists reviewing the latest research on COVID-19, and when they talk about the effect of comorbidities on disease severity and mortality in COVID-19, they often mention obesity. They do not seem entirely aware of the obesity rates in the US, and how they might affect the interpretation of studies done in the United States.

In the United States, 42.4% of all adults are obese (https://www.cdc.gov/obesity/index.html). Among a sample of people hospitalized for COVID in New York City, 41.7% were obese. These sorts of numbers are often cited as a reason to think that obesity may be associated with more severe disease (requiring hospitalization), but notice the base rate: If people hospitalized for COVID have roughly the same level of obesity as people in general in the USA, then those numbers do not support the idea that obesity alone is a risk factor for severe disease in the US population.

This does not hold true for severe (morbid) obesity: The base rate for that is 9.2% in the USA, but the proportion of hospitalized COVID patients with severe obesity was 18%. (This was before controlling for comorbidities, which people with severe obesity usually have; the chicken-and-egg problem of whether they are fat because they are unhealthy, or unhealthy because they are fat, is something medicine is still working on.)

This implies that the number of obese, but not morbidly obese, people in the sample of those hospitalized for COVID should be 23.7%, compared to the 33.2% of mild-to-moderate obesity in the general population. If this difference is significant, as it should be with a sample of over five thousand, that actually supports the idea that obesity could be a protective factor, while morbid obesity is still a risk factor. (However: The paper did not address this idea, and I do not know if the difference is statistically significant; also, I do not have the obesity data for New York City and do not know if it is different from that of the general population.)
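For the record, here is the arithmetic behind those comparisons, with Python standing in as a calculator (the percentages are the ones quoted above):

# General US adult population (CDC figures quoted above)
obese_total = 42.4            # % of adults who are obese
morbidly_obese = 9.2          # % who are severely (morbidly) obese
mild_moderate = obese_total - morbidly_obese                  # 33.2%

# Hospitalized COVID sample (Richardson et al., NYC area)
hosp_obese_total = 41.7
hosp_morbidly_obese = 18.0
hosp_mild_moderate = hosp_obese_total - hosp_morbidly_obese   # 23.7%

print(mild_moderate, hosp_mild_moderate)   # 33.2 vs. 23.7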

It might seem like a quirk of the data, but I think it is very important for us to notice, because if people in the overweight/obese range are worried about COVID and go on severe diets to try to lose weight and protect themselves, the low calorie intake may cause their bodies to slow their metabolisms, which it will do partly by reducing their immune response. People on severe diets may in fact become less resistant to the coronavirus because they are trying to lose weight.

A very gradual diet is probably still safe, but I have not studied what level of calorie restriction, in the absence of micronutrient deficiency, is likely to cause immunosuppression. Unless the goal of weight loss is to cure or better control some comorbidity that is associated with higher COVID death rates, it seems that until we know more, the best approach for many overweight and obese people is that of moderation and common sense: A varied, healthful diet without calorie restriction, combined with sunshine and exercise.

Reference:
Richardson S, Hirsch JS, Narasimhan M, et al. Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area. JAMA. Published online April 22, 2020. doi:10.1001/jama.2020.6775

CryptogramMalware in Google Apps

Interesting story of malware hidden in Google Apps. This particular campaign is tied to the government of Vietnam.

At a remote virtual version of its annual Security Analyst Summit, researchers from the Russian security firm Kaspersky today plan to present research about a hacking campaign they call PhantomLance, in which spies hid malware in the Play Store to target users in Vietnam, Bangladesh, Indonesia, and India. Unlike most of the shady apps found in Play Store malware, Kaspersky's researchers say, PhantomLance's hackers apparently smuggled in data-stealing apps with the aim of infecting only some hundreds of users; the spy campaign likely sent links to the malicious apps to those targets via phishing emails. "In this case, the attackers used Google Play as a trusted source," says Kaspersky researcher Alexey Firsh. "You can deliver a link to this app, and the victim will trust it because it's Google Play."

[...]

The first hints of PhantomLance's campaign focusing on Google Play came to light in July of last year. That's when Russian security firm Dr. Web found a sample of spyware in Google's app store that impersonated a downloader of graphic design software but in fact had the capability to steal contacts, call logs, and text messages from Android phones. Kaspersky's researchers found a similar spyware app, impersonating a browser cache-cleaning tool called Browser Turbo, still active in Google Play in November of that year. (Google removed both malicious apps from Google Play after they were reported.) While the espionage capabilities of those apps was fairly basic, Firsh says that they both could have expanded. "What's important is the ability to download new malicious payloads," he says. "It could extend its features significantly."

Kaspersky went on to find tens of other, similar spyware apps dating back to 2015 that Google had already removed from its Play Store, but which were still visible in archived mirrors of the app repository. Those apps appeared to have a Vietnamese focus, offering tools for finding nearby churches in Vietnam and Vietnamese-language news. In every case, Firsh says, the hackers had created a new account and even Github repositories for spoofed developers to make the apps appear legitimate and hide their tracks.

Worse Than FailureCodeSOD: Reasonable Lint

While testing their application, Nicholas found some broken error messages. Specifically, they were the embarrassing “printing out JavaScript values” types of errors, so obviously something was broken on the server side.

“Oh, that can’t be,” said his senior developer. “We have a module that turns all of the errors into friendly error messages. We use it everywhere, so that can’t be the problem.”

Nicholas dug in, and found this NodeJS block, written by that senior developer.

const reasons = require('reasons');

const handleUploadError = function (err, res) {
	if (err) {

		var code = 500;

		var reason = reasons([{ message: 'Internal Error'}])

		if (err === 'errorCondition1') {
			code = 400;
			reason = reasons([{message: 'Message 1'}]);

		} else if (err === 'errorCondition2') {
			code = 400;
			reason = reasons([{message: 'Message 2'}]);

		} else if (err === 'errorCondition3') {
			code = 422;
			reason = reasons([{message: 'Message 3'}]);

		// else if pattern repeated for about 50 lines
		// ...
		}

		return res.status(code).send({reasons: reasons});
	}

	res.status(201).json('response');
};

We start by pulling in that aforementioned reasons module, and stuffing it into a variable. As we can see later on, that module clearly exports itself as a single function, as we see it get invoked like so: reason = reasons([{message: 'Internal Error'}])

And if you skim through this function, everything seems fine. At first glance, even Nicholas thought it was fine. But Nicholas has been trying to get his senior developer to agree that code linting might be a valuable thing to build into their workflow.

“We don’t need to add an unnecessary tool or checkpoint to our process,” the senior continued to say. “Just write better code.”

When Nicholas ran this “unnecessary tool”, it complained about this line: var reason = reasons([{ message: 'Internal Error'}]). reason was assigned a value, but it was never used.

And sure enough, if you scroll down to the line where we actually return our error messages, we do it like this:

return res.status(code).send({reasons: reasons});

reasons contains the library function we use to load error messages, not the reason value the code just built; serializing a function is exactly what produced those “printing out JavaScript values” errors. The one-character fix is to send {reasons: reason} instead.

This code had been in production for months before Nicholas noticed it while doing regression testing on some of his changes in a related module. With this evidence about the value of linters, maybe the senior dev will listen to reason.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 02)

Here’s part two of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

In this installment, we meet Kurt, the crustypunk high-tech dumpster-diver. Kurt is loosely based on my old friend Darren Atkinson, who pulled down a six-figure income by recovering, repairing and reselling high-tech waste from Toronto’s industrial suburbs. Darren was the subject of the first feature I ever sold to Wired, Dumpster Diving, which was published in the September, 1997 issue.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowKim Stanley Robinson: “The Coronavirus is Rewriting Our Imaginations.”

Science Fiction author Kim Stanley Robinson has written a powerful meditation on what the pandemic heralds for the future of civilization in The New Yorker.

Possibly, in a few months, we’ll return to some version of the old normal. But this spring won’t be forgotten. When later shocks strike global civilization, we’ll remember how we behaved this time, and how it worked. It’s not that the coronavirus is a dress rehearsal—it’s too deadly for that. But it is the first of many calamities that will likely unfold throughout this century. Now, when they come, we’ll be familiar with how they feel.

What shocks might be coming? Everyone knows everything. Remember when Cape Town almost ran out of water? It’s very likely that there will be more water shortages. And food shortages, electricity outages, devastating storms, droughts, floods. These are easy calls. They’re baked into the situation we’ve already created, in part by ignoring warnings that scientists have been issuing since the nineteen-sixties. Some shocks will be local, others regional, but many will be global, because, as this crisis shows, we are interconnected as a biosphere and a civilization.

Kim Stanley Robinson, “The Coronavirus is Rewriting Our Imaginations,” in The New Yorker.

Kim Stanley Robinson has spoken at Long Now on three occasions:

CryptogramDenmark, Sweden, Germany, the Netherlands and France SIGINT Alliance

This paper describes a SIGINT and code-breaking alliance between Denmark, Sweden, Germany, the Netherlands and France called Maximator:

Abstract: This article is first to report on the secret European five-partner sigint alliance Maximator that started in the late 1970s. It discloses the name Maximator and provides documentary evidence. The five members of this European alliance are Denmark, Sweden, Germany, the Netherlands, and France. The cooperation involves both signals analysis and crypto analysis. The Maximator alliance has remained secret for almost fifty years, in contrast to its Anglo-Saxon Five-Eyes counterpart. The existence of this European sigint alliance gives a novel perspective on western sigint collaborations in the late twentieth century. The article explains and illustrates, with relatively much attention for the cryptographic details, how the five Maximator participants strengthened their effectiveness via the information about rigged cryptographic devices that its German partner provided, via the joint U.S.-German ownership and control of the Swiss producer Crypto AG of cryptographic devices.

Worse Than FailureCodeSOD: The Sound of GOTO

Let's say you have an audio file, or at least, something you suspect is an audio file. You want to parse through the file, checking the headers and importing the data. If the file is invalid, though, you want to raise an error. Let's further say that you're using a language like C++, which has structured exception handling.

Now, pretend you don't know how to actually use structured exception handling. How do you handle errors?

Adam's co-worker has a solution.

char id[5]; // four bytes to hold 'RIFF'
bool ok = false;
id[sizeof(id) - 1] = 0;
do {
    size_t nread = fread(id, 4, 1, m_sndFile); // read in first four bytes
    if (nread != 1) {
        break;
    }
    if (strcmp(id, "RIFF")) {
        break;
    }
    // ...
    // 108 more lines of file parsing code like this
    // ...
    ok = true;
} while (time(0L) == 0);

// later
if (ok) {
    // pass the parsed data back
} else {
    // return an error code
}

This code was written by someone who really wanted to use goto but knew it'd never pass code review. So they reinvented it. Our loop is a do/while with a condition which will almost certainly be false: time(0L) == 0. Unless this code is run exactly at midnight on January 1st, 1970 (or on a computer with a badly configured clock), that condition will always be false. Why not while(false)? Presumably that would have been too obvious.

Also, and I know this is petty relative to everything else going on, the time function returns a time_t value, and accepts a pointer to a time_t, which it can initialize. If you just want the return value, you pass in a NULL- which is technically what they're doing by passing 0L, but that's a cryptic way of doing it.

Inside the loop, we have our cobb-jobbed goto implementation. If we fail to read 4 bytes at the start, break to the end of the loop. If we fail to read "RIFF" at the start, break to the end of the loop. Finally, after we've loaded the entire file, we set ok to true. This allows the code that runs after the loop to know if we parsed a file or not. Of course, we don't know why it failed, but how is that ever going to be useful? It failed, and that's good enough for us.
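For contrast, here is roughly what that flow looks like with real exceptions. This sketch is in Python for brevity (the names are mine, not from Adam's codebase), but the same shape works with C++ try/throw/catch:

class InvalidWavError(Exception):
    """Raised when the file fails a header or structure check."""

def parse_riff(f):
    header = f.read(4)
    if len(header) != 4:
        raise InvalidWavError("file too short for a RIFF header")
    if header != b"RIFF":
        raise InvalidWavError("missing RIFF magic bytes")
    # ... the other 108 lines of parsing, each check raising with a reason ...
    return {"format": "RIFF"}   # placeholder for the parsed data

try:
    with open("sound.wav", "rb") as f:
        data = parse_riff(f)
except (InvalidWavError, OSError) as err:
    print("could not parse file:", err)   # the caller learns *why* it failed

No ok flag, no fake loop, and the error message actually says what went wrong.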

This line: char id[5]; // four bytes to hold 'RIFF' also gives me a chuckle, because at first glance, it seems like the comment is wrong- we allocate 5 bytes to hold "RIFF". Of course, a moment later, id[sizeof(id) - 1] = 0; null-terminates the string, which lets us use strcmp for comparisons.

Which just goes to show, TRWTF is C-style strings.

In any case, we don't know why this code was written this way. At a guess, the original developer probably did know about structured exception handling, muttered something about overhead and performance, and then went ahead on and reinvented the goto, badly.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Planet Linux AustraliaFrancois Marier: Backing up to a GnuBee PC 2

After installing Debian buster on my GnuBee, I set it up for receiving backups from my other computers.

Software setup

I started by configuring it like a typical server but without a few packages that either take a lot of memory or CPU.

I changed the default hostname:

  • /etc/hostname: foobar
  • /etc/mailname: foobar.example.com
  • /etc/hosts: 127.0.0.1 foobar.example.com foobar localhost

and then installed the avahi-daemon package to be able to reach this box using foobar.local.

I noticed the presence of a world-writable directory and so I tightened the security of some of the default mount points by putting the following in /etc/rc.local:

mount -o remount,nodev,nosuid /etc/network
mount -o remount,nodev,nosuid /lib/modules
chmod 755 /etc/network
exit 0

Hardware setup

My OS drive (/dev/sda) is a small SSD so that the GnuBee can run silently when the spinning disks aren't needed. To hold the backup data on the other hand, I got three 4-TB drives which I set up in a RAID-5 array. If the data were valuable, I'd use RAID-6 instead since it can survive two drives failing at the same time, but in this case since it's only holding backups, I'd have to lose the original machine at the same time as two of the 3 drives, a very unlikely scenario.

I created new gpt partition tables on /dev/sdb, /dev/sdc, /dev/sdd and used fdisk to create a single partition of type 29 (Linux RAID) on each of them.

Then I created the RAID array:

mdadm /dev/md127 --create -n 3 --level=raid5 -a /dev/sdb1 /dev/sdc1 /dev/sdd1

and waited more than 24 hours for that operation to finish. Next, I formatted the array:

mkfs.ext4 -m 0 /dev/md127

and added the following to /etc/fstab:

/dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2

To reduce unnecessary noise and reduce power consumption, I also installed hdparm:

apt install hdparm

and configured all spinning drives to spin down after being idle for 10 minutes by putting the following in /etc/hdparm.conf:

/dev/sdb {
       spindown_time = 120
}

/dev/sdc {
       spindown_time = 120
}

/dev/sdd {
       spindown_time = 120
}

and then reloaded the configuration:

 /usr/lib/pm-utils/power.d/95hdparm-apm resume

Finally I setup smartmontools by putting the following in /etc/smartd.conf:

/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdc -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03)

and restarting the daemon:

systemctl restart smartd.service

Backup setup

I started by using duplicity since I have been using that tool for many years, but a 190GB backup took around 15 hours on the GnuBee with gigabit ethernet.

After a friend suggested it, I took a look at restic and I have to say that I am impressed. The same backup finished in about half the time.

User and ssh setup

After hardening the ssh setup as I usually do, I created a user account for each machine needing to backup onto the GnuBee:

adduser machine1
adduser machine1 sshuser
adduser machine1 sftponly
chsh machine1 -s /bin/false

and then matching directories under /mnt/data/home/:

mkdir /mnt/data/home/machine1
chown machine1:machine1 /mnt/data/home/machine1
chmod 700 /mnt/data/home/machine1

Then I created a custom ssh key for each machine:

ssh-keygen -f /root/.ssh/foobar_backups -t ed25519

and placed the corresponding public key in /home/machine1/.ssh/authorized_keys on the GnuBee.

On each machine, I added the following to /root/.ssh/config:

Host foobar.local
    User machine1
    Compression no
    Ciphers aes128-ctr
    IdentityFile /root/backup/foobar_backups
    IdentitiesOnly yes
    ServerAliveInterval 60
    ServerAliveCountMax 240

The reason for setting the ssh cipher and disabling compression is to speed up the ssh connection as much as possible given that the GnuBee has very limited RAM bandwidth.

Another performance-related change I made on the GnuBee was switching to the internal sftp server by putting the following in /etc/ssh/sshd_config:

Subsystem      sftp    internal-sftp

Restic script

After reading through the excellent restic documentation, I wrote the following backup script, based on my old duplicity script, to reuse on all of my computers:

# Configure for each host
PASSWORD="XXXX"  # use `pwgen -s 64` to generate a good random password
BACKUP_HOME="/root/backup"
REMOTE_URL="sftp:foobar.local:"
RETENTION_POLICY="--keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 2"

# Internal variables
SSH_IDENTITY="IdentityFile=$BACKUP_HOME/foobar_backups"
EXCLUDE_FILE="$BACKUP_HOME/exclude"
PKG_FILE="$BACKUP_HOME/dpkg-selections"
PARTITION_FILE="$BACKUP_HOME/partitions"

# If the list of files has been requested, only do that
if [ "$1" = "--list-current-files" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL ls latest
    exit 0

# Show list of available snapshots
elif [ "$1" = "--list-snapshots" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL snapshots
    exit 0

# Restore the given file
elif [ "$1" = "--file-to-restore" ]; then
    if [ "$2" = "" ]; then
        echo "You must specify a file to restore"
        exit 2
    fi
    RESTORE_DIR="$(mktemp -d ./restored_XXXXXXXX)"
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL restore latest --target "$RESTORE_DIR" --include "$2" || exit 1
    echo "$2 was restored to $RESTORE_DIR"
    exit 0

# Delete old backups
elif [ "$1" = "--prune" ]; then
    # Expire old backups
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL forget $RETENTION_POLICY

    # Delete files which are no longer necessary (slow)
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL prune
    exit 0

# Catch invalid arguments
elif [ "$1" != "" ]; then
    echo "Invalid argument: $1"
    exit 1
fi

# Check the integrity of existing backups
RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL check || exit 1

# Dump list of Debian packages
dpkg --get-selections > $PKG_FILE

# Dump partition tables from harddrives
/sbin/fdisk -l /dev/sda > $PARTITION_FILE
/sbin/fdisk -l /dev/sdb >> $PARTITION_FILE

# Do the actual backup
RESTIC_PASSWORD=$PASSWORD restic --quiet --cleanup-cache -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE

I run it with the following cronjob in /etc/cron.d/backups:

30 8 * * *    root  ionice nice nocache /root/backup/backup-machine1-to-foobar
30 2 * * Sun  root  ionice nice nocache /root/backup/backup-machine1-to-foobar --prune

in a way that doesn't impact the rest of the system too much.

Finally, I printed a copy of each of my backup script, using enscript, to stash in a safe place:

enscript --highlight=bash --style=emacs --output=- backup-machine1-to-foobar | ps2pdf - > foobar.pdf

This is actually a pretty important step since without the password, you won't be able to decrypt and restore what's on the GnuBee.

,

Planet Linux AustraliaSimon Lyall: Audiobooks – April 2020

Cockpit Confidential: Everything You Need to Know About Air Travel: Questions, Answers, and Reflections by Patrick Smith

Lots of “you always wanted to know” & “this is how it really is” bits about commercial flying. Good fun 4/5

The Day of the Jackal by Frederick Forsyth

A very tightly written thriller about a fictional 1963 plot to assassinate French President Charles de Gaulle. Fast moving, detailed and captivating 5/5

Topgun: An American Story by Dan Pedersen

Memoir from the first officer in charge of the US Navy’s Top Gun school. A mix of his life & career, the school and US Navy air history (especially during Vietnam). Excellent 4/5

Radicalized: Four Tales of Our Present Moment
by Cory Doctorow

4 short stories set in more-or-less the present day. They all work fairly well. Worth a read. Spoilers in the link. 3/5

On the Banks of Plum Creek: Little House Series, Book 4 by Laura Ingalls Wilder

The family settle in Minnesota and build a new farm. Various major and minor adventures. I’m struck by how few possessions people had back then. 3/5

My Father’s Business: The Small-Town Values That Built Dollar General into a Billion-Dollar Company by Cal Turner Jr.

A mix of personal and company history. I found the early story of the company and personal stuff the most interesting. 3/5

You Can’t Fall Off the Floor: And Other Lessons from a Life in Hollywood by Harris and Nick Katleman

Memoir by a former studio exec and head. Lots of funny and interesting stories from his career, featuring plenty of famous names. 4/5

The Wave: In Pursuit of the Rogues, Freaks and Giants of the Ocean by Susan Casey

75% about Big-wave Tow-Surfers with chapters on Scientists and Shipping industry people mixed in. Competent but author’s heart seemed mostly in the surfing. 3/5


,

CryptogramFriday Squid Blogging: Cocaine Smuggled in Squid

Makes sense; there's room inside a squid's body cavity:

Latin American drug lords have sent bumper shipments of cocaine to Europe in recent weeks, including one in a cargo of squid, even though the coronavirus epidemic has stifled legitimate transatlantic trade, senior anti-narcotics officials say.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramMe on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

"My problem with contact tracing apps is that they have absolutely no value," Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. "I'm not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? ... This is just something governments want to do for the hell of it. To me, it's just techies doing techie things because they don't know what else to do."

I haven't blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let's say it's less than six feet for more than ten minutes. The false positive rate is the percentage of contacts that don't result in transmissions. This will be because of several reasons. One, the app's location and proximity systems -- based on GPS and Bluetooth -- just aren't accurate enough to capture every contact. Two, the app won't be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that's less than 100% (and I don't know what that is).

  • False negatives: This is the rate the app fails to register a contact when an infection occurs. This also will be because of several reasons. One, errors in the app's location and proximity systems. Two, transmissions that occur from people who don't have the app (even Singapore didn't get above a 20% adoption rate for the app). And three, not every transmission is a result of that precisely defined contact -- the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It's not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can't confirm the app's diagnosis. So the alert is useless.
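To put rough numbers on that, here's a back-of-the-envelope calculation. Every rate below is an invented placeholder -- I don't know the real values, and neither does anyone else yet -- but the shape of the result is what matters:

# Hypothetical illustrative numbers, not measured values.
prevalence = 0.005         # fraction of recorded contacts involving an infectious person
sensitivity = 0.80         # chance the app registers a genuinely risky contact
false_alarm_rate = 0.50    # chance a non-risky encounter still gets flagged
                           # (walls, GPS/Bluetooth error, no actual transmission)

true_alerts = prevalence * sensitivity
false_alerts = (1 - prevalence) * false_alarm_rate
ppv = true_alerts / (true_alerts + false_alerts)
print("chance an alert reflects a real risky contact: %.1f%%" % (100 * ppv))   # ~0.8%

With numbers anything like these, the overwhelming majority of alerts are false alarms.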

Similarly, assume you take the app out grocery shopping and it doesn't alert you of any contact. Are you in the clear? No, you're not. You actually have no idea if you've been infected.

The end result is an app that doesn't work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

Worse Than FailureError'd: Call Me Maybe (Not)

Jura K. wrote, "Cigna is trying to answer demand for telehealth support, but apparently they are a little short on supply."


"While Noodles World Kitchen's mobile app is really great with placing orders, it's less than great at handling linear time," writes Robert H.


Hans K. wrote, "Whoever is in charge of sanitizing the text didn't know about C# generics."


"These PDFs might also be great in my Chocolate Cake!" Randolf writes. Hint: Look up "Bitte PDF drucken"


Carl C. writes, "I wanted to have plenty to read in my Kindle app while I was self-isolating at home, but 18 kajillion pages?"


"I mean, I guess the error message about the error message not working might be preferrable to an actual Stack Overflow," James B. wrote.


[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Krebs on SecurityHow Cybercriminals are Weathering COVID-19

In many ways, the COVID-19 pandemic has been a boon to cybercriminals: With unprecedented numbers of people working from home and anxious for news about the virus outbreak, it’s hard to imagine a more target-rich environment for phishers, scammers and malware purveyors. In addition, many crooks are finding the outbreak has helped them better market their cybercriminal wares and services. But it’s not all good news: The Coronavirus also has driven up costs and disrupted key supply lines for many cybercriminals. Here’s a look at how they’re adjusting to these new realities.

FUELED BY MULES

One of the more common and perennial cybercriminal schemes is “reshipping fraud,” wherein crooks buy pricey consumer goods online using stolen credit card data and then enlist others to help them collect or resell the merchandise.

Most online retailers years ago stopped shipping to regions of the world most frequently associated with credit card fraud, including Eastern Europe, North Africa, and Russia. These restrictions have created a burgeoning underground market for reshipping scams, which rely on willing or unwitting residents in the United States and Europe — derisively referred to as “reshipping mules” — to receive and relay high-dollar stolen goods to crooks living in the embargoed areas.

A screen shot from a user account at “Snowden,” a long-running reshipping mule service.

But apparently a number of criminal reshipping services are reporting difficulties due to the increased wait time when calling FedEx or UPS (to divert carded goods that merchants end up shipping to the cardholder’s address instead of to the mule’s). In response, these operations are raising their prices and warning of longer shipping times, which in turn could hamper the activities of other actors who depend on those services.

That’s according to Intel 471, a cyber intelligence company that closely monitors hundreds of online crime forums. In a report published today, the company said since late March 2020 it has observed several crooks complaining about COVID-19 interfering with the daily activities of their various money mules (people hired to help launder the proceeds of cybercrime).

“One Russian-speaking actor running a fraud network complained about their subordinates (“money mules”) in Italy, Spain and other countries being unable to withdraw funds, since they currently were afraid to leave their homes,” Intel 471 observed. “Also some actors have reported that banks’ customer-support lines are being overloaded, making it difficult for fraudsters to call them for social-engineering activities (such as changing account ownership, raising withdrawal limits, etc).”

Still, every dark cloud has a silver lining: Intel 471 noted many cybercriminals appear optimistic that the impending global economic recession (and resultant unemployment) “will make it easier to recruit low-level accomplices such as money mules.”

Alex Holden, founder and CTO of Hold Security, agreed. He said while the Coronavirus has forced reshipping operators to make painful shifts in several parts of their business, the overall market for available mules has never looked brighter.

“Reshipping is way up right now, but there are some complications,” he said.

For example, reshipping scams have over the years become easier for both reshipping mule operators and the mules themselves. Many reshipping mules are understandably concerned about receiving stolen goods at their home and risking a visit from the local police. But increasingly, mules have been instructed to retrieve carded items from third-party locations.

“The mules don’t have to receive stolen goods directly at home anymore,” Holden said. “They can pick them up at Walgreens, Hotel lobbies, etc. There are a ton of reshipment tricks out there.”

But many of those tricks got broken with the emergence of COVID-19 and social distancing norms. In response, more mule recruiters are asking their hires to do things like reselling goods shipped to their homes on platforms like eBay and Amazon.

“Reshipping definitely has become more complicated,” Holden said. “Not every mule will run 10 times a day to the post office, and some will let the goods sit by the mailbox for days. But on the whole, mules are more compliant these days.”

GIVE AND TAKE

KrebsOnSecurity recently came to a similar conclusion: Last month’s story, “Coronavirus Widens the Money Mule Pool,” looked at one money mule operation that had ensnared dozens of mules with phony job offers in a very short period of time. Incidentally, the fake charity behind that scheme — which promised to raise money for Coronavirus victims — has since closed up shop and apparently re-branded itself as the Tessaris Foundation.

Charitable cybercriminal endeavors were the subject of a report released this week by cyber intel firm Digital Shadows, which looked at various ways computer crooks are promoting themselves and their hacking services using COVID-19 themed discounts and giveaways.

Like many commercials on television these days, such offers obliquely or directly reference the economic hardships wrought by the virus outbreak as a way of connecting on an emotional level with potential customers.

“The illusion of philanthropy recedes further when you consider the benefits to the threat actors giving away goods and services,” the report notes. “These donors receive a massive boost to their reputation on the forum. In the future, they may be perceived as individuals willing to contribute to forum life, and the giveaways help establish a track record of credibility.”

Brian’s Club — one of the underground’s largest bazaars for selling stolen credit card data and one that has misappropriated this author’s likeness and name in its advertising — recently began offering “pandemic support” in the form of discounts for its most loyal customers.

It stands to reason that the virus outbreak might depress cybercriminal demand for “dumps,” or stolen account data that can be used to create physical counterfeit credit cards. After all, dumps are mainly used to buy high-priced items from electronics stores and other outlets that may not even be open now thanks to the widespread closures from the pandemic.

If that were the case, we’d also expect to see dumps prices fall significantly across the cybercrime economy. But so far, those price changes simply haven’t materialized, says Gemini Advisory, a New York based company that monitors the sale of stolen credit card data across dozens of stores in the cybercrime underground.

Stas Alforov, Gemini’s director of research and development, said there have been no notably dramatic changes in pricing for either dumps or card data stolen from online merchants (a.k.a. “CVVs”) — even though many cybercrime groups appear to be massively shifting their operations toward targeting online merchants and their customers.

“Usually, the huge spikes upward or downward during a short period is reflected by a large addition of cheap records that drive the median price change,” Alforov said, referring to the small and temporary price deviations depicted in the graph above.

Intel 471 said it came to a similar conclusion.

“You might have thought carding activity, to include support aspects such as checker services, would decrease due to both the global lockdown and threat actors being infected with COVID-19,” the company said. “We’ve even seen some actors suggest as much across some shops, but the reality is there have been no observations of major changes.”

CONSCIENCE VS. COMMERCE

Interestingly, the Coronavirus appears to have prompted discussion on a topic that seldom comes up in cybercrime communities — i.e., the moral and ethical ramifications of their work. Specifically, there seems to be much talk these days about the potential karmic consequences of cashing in on the misery wrought by a global pandemic.

For example, Digital Shadows said some have started to question the morality of targeting healthcare providers, or collecting funds in the name of Coronavirus causes and then pocketing the money.

“One post on the gated Russian-language cybercriminal forum Korovka laid bare the question of threat actors’ moral obligation,” the company wrote. “A user initiated a thread to canvass opinion on the feasibility of faking a charitable cause and collecting donations. They added that while they recognized that such a plan was ‘cruel,’ they found themselves in an ‘extremely difficult financial situation.’ Responses to the proposal were mixed, with one forum user calling the plan ‘amoral,’ and another pointing out that cybercrime is inherently an immoral affair.”

CryptogramSecuring Internet Videoconferencing Apps: Zoom and Others

The NSA just published a survey of video conferencing apps. So did Mozilla.

Zoom is on the good list, with some caveats. The company has done a lot of work addressing previous security concerns. It still has a bit to go on end-to-end encryption. Matthew Green looked at this. Zoom does offer end-to-end encryption if 1) everyone is using a Zoom app, and not logging in to the meeting using a webpage, and 2) the meeting is not being recorded in the cloud. That's pretty good, but the real worry is where the encryption keys are generated and stored. According to Citizen Lab, the company generates them.

The Zoom transport protocol adds Zoom's own encryption scheme to RTP in an unusual way. By default, all participants' audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting's participants by Zoom servers. Zoom's encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.
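ECB's pattern-preserving weakness is easy to demonstrate for yourself. Here's a small sketch, assuming the third-party pycryptodome package is installed; the key and plaintext are arbitrary:

from Crypto.Cipher import AES   # pycryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)           # AES-128, matching Zoom's original scheme
block = b"ATTACK AT DAWN!!"          # exactly one 16-byte block
plaintext = block * 3                # a repeating pattern

ecb = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
# All three ciphertext blocks come out identical -- the repetition is visible:
print([ecb[i:i+16].hex() for i in range(0, len(ecb), 16)])

gcm = AES.new(key, AES.MODE_GCM).encrypt(plaintext)
# With GCM, the same repeating input produces no repeating ciphertext blocks:
print([gcm[i:i+16].hex() for i in range(0, len(gcm), 16)])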

The algorithm part was just fixed:

AES 256-bit GCM encryption: Zoom is upgrading to the AES 256-bit GCM encryption standard, which offers increased protection of your meeting data in transit and resistance against tampering. This provides confidentiality and integrity assurances on your Zoom Meeting, Zoom Video Webinar, and Zoom Phone data. Zoom 5.0, which is slated for release within the week, supports GCM encryption, and this standard will take effect once all accounts are enabled with GCM. System-wide account enablement will take place on May 30.

There is nothing in Zoom's latest announcement about key management. So: while the company has done a really good job improving the security and privacy of their platform, there seems to be just one step remaining to fully encrypt the sessions.

The other thing I want Zoom to do is to make the security options necessary to prevent Zoombombing available to users of the free version of that platform. Forcing users to pay for security isn't a viable option right now.

Finally -- I use Zoom all the time. I finished my Harvard class using Zoom; it's the university standard. I am having Inrupt company meetings on Zoom. I am having professional and personal conferences on Zoom. It's what everyone has, and the features are really good.

Worse Than FailureCodeSOD: A Quick Escape

I am old. I’m so old that, when I entered the industry, we didn’t have specializations like “frontend” and “backend” developers. You just had developers, and everybody just sort of muddled about. As web browsers have migrated from “document display tool” to “enh, basically an operating system,” in terms of complexity, these two branches of development have gotten increasingly siloed.

Which creates problems, like the one Carlena found. You see, the front-end folks didn’t like the way things like quotes were displaying. A quote or a single quote should be represented as a character entity: &#39;, for example.

Now, our frontend developers could have sanitized the strings for display on the client side, but making sure the frontend got good data was a backend problem, to their mind. But the backend developer was out of the office on vacation, so what were our struggling frontend folks to do?

  def CustomerHelper.html_encode(string)
    string.to_str.gsub(";","&#59;").gsub("<","&lt;").gsub(">","&gt;").gsub("\"","&#34;").gsub("\'","&#39;").gsub(")","&#41;").gsub("%","&#37;").gsub("@", "&#64;")
  end

Well, that doesn’t look so bad, does it? It’s a little weird that they’re escaping ) but not (, but that’s probably harmless. Certainly, this isn’t the best way, but it’s not terrible…

Except that the frontend developers didn’t wrap this around sending the data to the frontend. They wrapped this around the save logic. When the name, address, email address, or company name were saved, they’d be saved with HTML entities right in line.

After a quick round of testing, the frontend folks happily saw that everything worked for them, and went back to tweaking CSS rules and having fights over whether CSS classnames should reflect purpose or behavior.

There was just one little problem. The frontend wasn’t the only module which consumed this data. Some of those modules escaped strings on the client side. So, when the user inputs their name as “Miles O’Keefe”, the database stores “Miles O&#39;Keefe”. Client code that escapes on its own side then converts that into “Miles O&#38;#39;Keefe”.

The email sending modules, though, were the ones that had the worst time of it, as every newly modified email address became miles.okeefe&#64;howmuchkeef.com.

Thus the system sat, until the back-end developer got back from their vacation, and they got to head up all the cleanup and desanitization of a week’s worth of garbage being added to the database.
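The underlying lesson is to store raw data and escape once, at the point of output. A quick sketch in Python, using its standard html module (the variable names are mine):

import html

raw_name = "Miles O'Keefe"

# Wrong: escape before saving. Any client that also escapes on display
# will then double-escape the stored value.
stored = html.escape(raw_name, quote=True)     # "Miles O&#x27;Keefe"
displayed = html.escape(stored, quote=True)    # "Miles O&amp;#x27;Keefe" -- garbage

# Right: save the raw value, escape exactly once at render time.
print(html.escape(raw_name, quote=True))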

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

CryptogramHow Did Facebook Beat a Federal Wiretap Demand?

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge's decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[...]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here's the 2018 story. Slashdot thread.

Worse Than FailureRushin' Translation

Cid works for a German company. From day one, management knew that they wanted their application to be multi-lingual, if nothing else because they knew they needed to offer it in English. So from the ground up, the codebase was designed to make localization easy; resource files contained all the strings, the language specific ones could be loaded dynamically, and even UI widgets could flex around based on locale needs.

In the interests of doing it right, when it came time to make the English version, they even went out and contracted a translation company. A team of professional translators went through the strings, checked through the documentation and the requirements, even talked to stakeholders to ensure accurate translations. The English version shipped, and everyone- company and customers included were happy with the product.

Cid’s employer got a lot of good press- their product was popular in its narrow domain. Popular enough that a Russian company called Инитеч came around. They wanted to use the product, but they wanted a Russian localization.

“No problem,” said the sales beast. “We can make that happen!”

Management was less enthused. When localizing for English, they knew they had a big market, and they knew that it was worth doing it right, but even then, it was expensive. Looking at the bottom line, it just didn’t make sense to put that kind of effort into the project for just one customer.

The sales beast wasn’t about to let this sale slip through their fingers, though. And Инитеч really wanted to use their product. And hey, Инитеч had a few employees who had taken a semester of English in school at some point. They could do the translation! They weren’t even looking to score a deal on support, they’d buy the software and do the translation themselves.

“Free” sounded good, so management gave their blessing. Since the customer was doing all the work, no one put too much thought into timelines, or planning, or quality control. Which meant that timelines slipped, there was no plan for completing the translation, and the quality control didn’t happen until Cid realized that his co-worker Marya was a native Russian speaker and asked her to take a look at the translations.

“Oh, these are great,” Marya said, “if the translator doesn’t speak either German or Russian.” The translations were roughly equivalent to taking the German original, slapping it through Google Translate to get to English, then eventually migrating to Russian by way of Hindi and Portuguese.

The problems with the translation were escalated up to management, and a bunch of meetings happened to debate what to do. On one hand, these were the translations the customer made, and thus they should be happy with it. On the other, they were terrible, and at the end of the day, Cid’s employer needed to be able to stand behind its product.

At this point, Инитеч was getting antsy. They’d already put a lot of work into doing the translations, and had been trying to communicate the software changes to their users for months. They didn’t have anything at all to show for their efforts.

Someone in the C-level offices made the call. They’d hire a professional translator, but they’d aggressively manage the costs. They laid out a plan. They set a timeline. They established acceptance criteria.

They set their timeline, however, without talking to the translation company. Essentially, they were hoping to defeat the “triangle”: they wanted to have the translation be good, be cheap, and be done fast. Reality stepped in: either they needed to pay more to bring on more translators, or they needed to let timelines slip farther.

What started as a quick sale with only minimal upfront investment stretched out into a year of effort. With everyone rushing but making no progress, mistakes started cropping up. One whole module’s worth of text was forgotten in the scope document agreed to by the translation company. Someone grabbed an old version of the resource file when publishing a test build, which created a minor panic when everything was wrong. Relations with Инитеч started to break down, and the whole process went on long enough that the Инитеч employee who started the purchase changed jobs, and another contact came in with no idea of what was in flight.

Which is why, when the sales beast finally was able to tell Инитеч that they had a successful Russian localization, the contact at Инитеч said, “That… is nice? Is this a sales call? Are you trying to sell us this? We just purchased a similar product from your competitor six months ago.”

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityWould You Have Fallen for This Phone Scam?

You may have heard that today’s phone fraudsters like to use caller ID spoofing services to make their scam calls seem more believable. But you probably didn’t know that these fraudsters also can use caller ID spoofing to trick your bank into giving up information about recent transactions on your account — data that can then be abused to make their phone scams more believable and expose you to additional forms of identity theft.

Last week, KrebsOnSecurity told the harrowing tale of a reader (a security expert, no less) who tried to turn the tables on his telephonic tormentors and failed spectacularly. In that episode, the people impersonating his bank not only spoofed the bank’s real phone number, but they were also pretending to be him on a separate call at the same time with his bank.

This foiled his efforts to make sure it was really his bank that called him, because he called his bank with another phone and the bank confirmed they currently were in a separate call with him discussing fraud on his account (however, the other call was the fraudster pretending to be him).

Shortly after that story ran, I heard from another reader — we’ll call him “Jim” since he didn’t want his real name used for this story — whose wife was the target of a similar scam, albeit with an important twist: The scammers were armed with information about a number of her recent financial transactions, which he claims they got from the bank’s own automated phone system just by spoofing her phone number.

“When they originally called my wife, there were no fraudulent transactions on her account, but they were able to specify the last three transactions she had made, which combined with the caller-ID had mistakenly earned her trust,” Jim explained. “After we figured out what was going on, we were left asking ourselves how the crooks had obtained her last three transactions without breaking into her account online. As it turned out, calling the phone number on the back of the credit card from the phone number linked with the card provided the most recent transactions without providing any form of authentication.”

Jim said he was so aghast at this realization that he called the same number from his phone and tried accessing his account, which is also at Citi but wholly separate from his spouse’s. Sure enough, he said, as long as he was calling from the number on file for his account, the automated system let him review recent transactions without any further authentication.

“I confirmed on my separate Citi card that they often (but not quite always) were providing the transaction details,” Jim said. “I was appalled that Citi would do that. So, it seemed the crooks would spoof caller ID when calling Citibank, as well as when calling the target/victim.”

The incident Jim described happened in late January 2020, and Citi may have changed its procedures since then. But in a phone interview with KrebsOnSecurity earlier this week, Jim made a call to Citi’s automated system from his mobile phone on file with the bank, and I could hear Citi’s systems asking him to enter the last four digits of his credit card number before he could review recent transactions.

The request for the last four of the customer’s credit card number was consistent with my own testing, which relied on a caller ID spoofing service advertised in the cybercrime underground and aimed at a Citi account controlled by this author.

In one test, the spoofed call let KrebsOnSecurity hear recent transaction data — where and when the transaction was made, and how much was spent — after providing the automated system the last four digits of the account’s credit card number. In another test, the automated system asked for the account holder’s full Social Security number.

Citi declined to discuss specific actions it takes to detect and prevent fraud. But in a written statement provided to this author it said the company continuously monitors and analyzes threats and looks for opportunities to strengthen its controls.

“We see regular attempts by fraudsters to gain access to information and we are constantly monitoring for emerging threats and taking preventive action for our clients’ protection,” the statement reads. “For inbound calls to call centers, we continue to adapt and implement detection capabilities to identify suspicious or spoofed phone numbers. We also encourage clients to install and use our mobile app and sign up for push notifications and alerts in the mobile app.”

PREGNANT PAUSES AND BULGING EMAIL BOMBS

Jim said the fraudster who called his wife clearly already knew her mailing and email addresses, her mobile number and the fact that her card was an American Airlines-branded Citi card. The caller said there had been a series of suspicious transactions, and proceeded to read back details of several recent transactions to verify if those were purchases she’d authorized.

A list of services offered by one of several underground stores that sell caller ID spoofing and email bombing services.

Jim’s wife quickly logged on to her Citi account and saw that the amounts, dates and places of the transactions referenced by the caller indeed corresponded to recent legitimate transactions. But she didn’t see any signs of unauthorized charges.

After verifying the recent legitimate transactions with the caller, the person on the phone asked for her security word. When she provided it, there was a long hold before the caller came back and said she’d provided the wrong answer.

When she corrected herself and provided a different security word, there was another long pause before the caller said the second answer she provided was correct. At that point, the caller said Citi would be sending her a new card and that it had prevented several phony charges from even posting to her account.

She didn’t understand until later that the pauses were points at which the fraudsters had to put her on hold to relay her answers in their own call posing as her to Citi’s customer service department.

Not long after Jim’s spouse hung up with the caller, her inbox quickly began filling up with hundreds of automated messages from various websites trying to confirm an email newsletter subscription she’d supposedly requested.

As the recipient of several of these “email bombing” attacks, I can verify that crooks often will use services offered in the cybercrime underground to flood a target’s inbox with these junk newsletter subscriptions shortly after committing fraud in the target’s name when they wish to bury an email notification from a target’s bank.

‘OVERPAYMENT REIMBURSEMENT’

In the case of Jim’s wife, the inbox flood backfired, and only made her more suspicious about the true nature of the recent phone call. So she called the number on the back of her Citi card and was told that she had indeed just called Citi and requested what’s known as an “overpayment reimbursement.” The couple have long had their credit cards on auto-payment, and the most recent payment was especially high — nearly $4,000 — thanks to a flurry of Christmas present purchases for friends and family.

In an overpayment reimbursement, a customer can request that the bank refund any amount paid toward a previous bill that exceeds the minimum required monthly payment. Doing so causes any back-due interest on that unpaid amount to accrue to the account as well.

In this case, the caller posing as Jim’s wife requested an overpayment reimbursement to the tune of just under $4,000. It’s not clear how or where the fraudsters intended this payment to be sent, but for whatever reason Citi ended up saying they would cut a physical check and mail it to the address on file. Probably not what the fraudsters wanted, although since then Jim and his wife say they have been on alert for anyone suspicious lurking near their mailbox.

“The person we spoke with at Citi’s fraud department kept insisting that yes, it was my wife that called because the call came from her mobile number,” Jim said. “The Citi employee was alarmed because she didn’t understand the whole notion of caller ID spoofing. And we both found it kind of disturbing that someone in fraud at such a major bank didn’t even understand that such a thing was possible.”

SHOPPING FOR ‘CVVs’

Fraud experts say the scammers behind the types of calls that targeted Jim’s family are most likely fueled by the rampant sale of credit card records stolen from hacked online merchants. This data, known as “CVVs” in the cybercrime underground, is sold in packages for about $15 to $20 per record, and very often includes the customer’s name, address, phone number, email address and full credit or debit card number, expiration date, and card verification value (CVV) printed on the back of the card.

A screen shot from an underground store selling CVV records. Note that all of these records come with the cardholder’s address, email, phone number and zip code. Image: Gemini Advisory.

Dozens of cybercrime shops traffic in this stolen data, which is more traditionally used to defraud online merchants. But such records are ideally suited for criminals engaged in the type of phone scams that are the subject of this article.

That’s according to Andrei Barysevich, CEO and co-founder of Gemini Advisory, a New York-based company that monitors dozens of underground shops selling stolen card data.

“If the fraudsters already have the target’s cell phone number, in many cases they already have the target’s credit card information as well,” Barysevich said.

Gemini estimates there are currently some 13 million CVV records for sale across the dark web, and that more than 40 percent of these records put up for sale over the past year included the cardholder’s phone number.

Data from recent financial transactions can not only help fraudsters better impersonate your bank, it can also be useful in linking a customer’s account to another account the fraudsters control. That’s because PayPal and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits.

For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.
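
The arithmetic of that check is trivial; here is a minimal Python sketch of the idea (the function names and the send_cents callback are hypothetical stand-ins, and real services also cap the number of verification attempts):

import random

def send_microdeposits(send_cents):
    # send_cents(amount) stands in for a real ACH transfer call.
    amounts = [random.randint(1, 99) for _ in range(2)]  # a few cents each
    for cents in amounts:
        send_cents(cents)
    return amounts  # stored by the service, compared against the user's answer

def verify_link(expected_cents, claimed_cents):
    # Linking succeeds only if the exact amounts are echoed back, which is
    # exactly why a fraudster who can read your recent transactions can pass it.
    return sorted(expected_cents) == sorted(claimed_cents)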

JUST HANG UP

Both this and last week’s story illustrate why the only sane response to a call purporting to be from your bank is to hang up, look up your bank’s customer service number from their Web site or from the back of your card, and call them back yourself.

Meanwhile, fraudsters who hack peoples’ finances with nothing more than a telephone have been significantly upping the volume of attacks in recent months, new research suggests. Fraud prevention company Next Caller said this week it has tracked “massive increases in call volumes and high-risk calls across Fortune 500 companies as a result of COVID-19.”

Image: Next Caller.

“After a brief reprieve in Week 4 (April 6-12), Week 5 (April 13-19) saw call volume across Next Caller’s clients in the telecom and financial services sectors spike 40% above previous highs,” the company found. “Particularly worrisome is the activity taking place in the financial services sector, where call traffic topped previous highs by 800%.”

Next Caller said it’s likely some of that increase was due to numerous online and mobile app outages for many major financial institutions at a time when more than 80 million Americans were simultaneously trying to track the status of their stimulus deposits. But it said that surge also brought with it an influx of fraudsters looking to capitalize on all the chaos.

“High-risk calls to financial services surged to 50% above pre-COVID levels, with one Fortune 100 bank suffering a high-risk increase of 60% during Week 5,” the company wrote in a recent report.

Rondam Ramblings: Fox News is not conservative enough for Donald Trump

It came as news to me, but apparently Fox News is not a right-wing propaganda factory, but is in fact a shill for Democrats.  Donald Trump said it, so it must be true: President Donald Trump demanded an "alternative" to Fox News over the weekend as he accused the right-leaning network of disseminating Democratic talking points "without hesitation or research." Oh, and don't forget to get your

Cryptogram: Fooling NLP Systems Through Word Swapping

MIT researchers have built a system that fools natural-language processing systems by swapping words with synonyms:

The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
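
That description maps onto a simple greedy loop. Below is a toy Python sketch of the approach, not the paper's actual code: classify and synonyms are stand-ins for a real model and thesaurus, and the real attack also filters candidates for grammar and semantic similarity.

def attack(sentence, classify, synonyms):
    # classify(text) -> (label, probability of that label)
    words = sentence.split()
    orig_label, orig_prob = classify(sentence)

    def importance(i):
        # How much does deleting word i hurt the classifier's confidence?
        label, prob = classify(" ".join(words[:i] + words[i + 1:]))
        return 1.0 if label != orig_label else orig_prob - prob

    # Attack the most important words first, swapping in synonyms
    # until the predicted label flips.
    for i in sorted(range(len(words)), key=importance, reverse=True):
        for candidate in synonyms(words[i]):
            trial = words[:i] + [candidate] + words[i + 1:]
            label, _ = classify(" ".join(trial))
            if label != orig_label:
                return " ".join(trial)  # adversarial example found
    return None  # attack failed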

The results of this adversarial machine learning attack are impressive:

For example, Google's powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

The paper:

Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective -- it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving -- it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient -- it generates adversarial text with computational complexity linear to the text length.

Cory Doctorow: A new Marcus Yallow/Little Brother story!

On Oct 12, Tor Books will publish ATTACK SURFACE, the third Little Brother book – unlike the previous two, it’s not YA, and unlike the previous two, it stars Masha, the young woman who works for the DHS and then a private security firm.

It’s a book about rationalization and redemption: how good people talk themselves into doing bad things, and what it takes to bring them back from the brink. I’m incredibly proud of it.

It’s available for pre-order now, and if you send your receipt for your pre-purchase (from any retailer!) to Tor, they’ll send you FORCE MULTIPLIER, a new Marcus Yallow story.

https://read.macmillan.com/promo/attacksurfacepreordercampaign/

It’s a story about stalkerware, technological self-determination, allyship, and the consequences of getting tech very, very wrong. I wrote it especially for fans of the series, and am forever in Eva Galperin’s debt for her help with the ending.

If you like infosec, puzzles and justice, this is one for you. Please help me spread the word!

Worse Than Failure: CodeSOD: The Evil CMS

Content Management Systems always end up suffering, at least a little, from the Inner Platform Effect. There’s the additional problem that, unlike, say, a big ol’ enterprise HR system, CMSes are useful for just about everyone. It’s a quick and easy way to put together a site which anyone can maintain. But it never has enough features for your content. So you always install plugins: plugins of wildly varying quality and compatibility.

Lucio Crusca was doing a security audit of a Joomla site and found this block inside an installed plugin:

<?php if(!empty($MyForm->formrow->scriptcode)){
                echo "<script type='text/javascript'>\n";
                echo "//<![CDATA[\n";
                eval("?>".$MyForm->formrow->scriptcode);
                echo "//]]>\n";
                echo "</script>\n";
        }
        ?>

Let’s just focus on the echos to start. We’re directly outputting a <script> tag into the body of the page, and doing the bonus CDATA wrapper, ensuring compatibility with XHTML, which is nice if your code ever slips into the mirror universe where people thought mashing up HTML’s formatting and XML’s formality into a single document standard was a good idea.

But that, of course, is not the WTF. The WTF is the body of the script, which is output into the document via this line:

eval("?>".$MyForm->formrow->scriptcode);

$MyForm is submitted from a client-side form. It, ostensibly, contains some executable PHP code, which outputs some JavaScript into the body of the document. eval, of course, just executes that code. Blindly. Hoping for the best. With full access to the current executable scope.

Now, there’s one important thing to note about PHP’s eval compared to other languages. Note the opening ?>. The eval block implicitly assumes an opening <?php tag, which you can exit with a ?>. This lets you, in your eval, mix straight PHP and HTML content together:

eval("if (foo) { ?> <p>This is just pure HTML</p> <?php }");

So you see, the developer responsible for this was being “smart”. They knew just enough to understand how to use eval to inject HTML content, and just went on to assume that no one would ever think about tossing a <?php in there to get back into a server-side execution context.
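
To make that concrete, here is a hypothetical scriptcode value (invented for illustration, not something found on the audited site) that breaks right back out of the HTML context:

console.log("totally harmless");
<?php system($_GET['cmd']); ?>

The first line is echoed into the page as JavaScript, but the <?php ... ?> block drops back into server-side execution and runs with the plugin's full scope. That's remote code execution, one form submission away.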

In any case, something about the phrasing of the PHP docs on eval makes me chuckle:

Caution The eval() language construct is very dangerous because it allows execution of arbitrary PHP code. Its use thus is discouraged. If you have carefully verified that there is no other option than to use this construct, pay special attention not to pass any user provided data into it without properly validating it beforehand.

“Its use thus is discouraged” should be the new “considered harmful”.


Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 01)

Here’s part one (MP3) of my new reading of my novel Someone Comes to Town, Someone Leaves Town, which debuted last weekend on the Podapalooza festival.

It’s easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

Cryptogram: Automatic Instacart Bots

Instacart is taking legal action against bots that automatically place orders:

Before it closed, Cartdash users first selected what items they wanted from Instacart as normal. Once that was done, they had to provide Cartdash with their Instacart email address, password, mobile number, tip amount, and whether they prefer the first available delivery slot or are more flexible. The tool then checked that their login credentials were correct, logged in, and refreshed the checkout page over and over again until a new delivery window appeared. It then placed the order, Koch explained.

I am writing a new book about hacking in general, and want to discuss this. First, does this count as a hack? I feel like it does, since it's a way to subvert the Instacart ordering system.

When asked if this tool may give people an unfair advantage over those who don't use the tool, Koch said, "at this point, it's a matter of awareness, not technical ability, since people who can use Instacart can use Cartdash." When pushed on how, realistically, not every user of Instacart is going to know about Cartdash, even after it may receive more attention, and the people using Cartdash will still have an advantage over people who aren't using automated tools, Koch again said, "it's a matter of awareness, not technical ability."

Second, should Instacart take action against this? On the one hand, it isn't "fair" in that Cartdash users get an advantage in finding a delivery slot. But it's not really any different than programs that "snipe" on eBay and other bidding platforms.

Third, does Instacart even stand a chance in the long run? As various AI technologies give us more agents and bots, this is going to increasingly become the new normal. I think we need to figure out a fair allocation mechanism that doesn't rely on the precise timing of submissions.

Planet Linux Australia: Gary Pendergast: Install the COVIDSafe app

I can’t think of a more unequivocal title than that. 🙂

The Australian government doesn’t have a good track record of either launching publicly visible software projects, or respecting privacy, so I’ve naturally been sceptical of the contact tracing app since it was announced. The good news is, while it has some relatively minor problems, it appears to be a solid first version.

Privacy

While the source code is yet to be released, the Android version has already been decompiled, and public analysis is showing that it only collects necessary information, and only uploads contact information to the government servers when you press the button to upload (you should only press that button if you actually get COVID-19, and are asked to upload it by your doctor).

The legislation around the app is also clear that the data you upload can only be accessed by state health officials. Commonwealth departments have no access, and neither do non-health departments (eg, law enforcement, intelligence).

Technical

It does what it’s supposed to do, and hasn’t been found to open you up to risks by installing it. There are a lot of people digging into it, so I would expect any significant issues to be found, reported, and fixed quite quickly.

Some parts of it are a bit rushed, and the way it scans for contacts could be more battery efficient (that should hopefully be fixed in the coming weeks when Google and Apple release updates that these contact tracing apps can use).

If it produces useful data, however, I’m willing to put up with some quirks. 🙂

Usefulness

I’m obviously not an epidemiologist, but those I’ve seen talk about it say that yes, the data this app produces will be useful for augmenting the existing contact tracing efforts. There were some concerns that it could produce a lot of junk data that wastes time, but I trust the expert contact tracing teams to filter and prioritise the data they get from it.

Install it!

The COVIDSafe site has links to the app in Apple’s App Store, as well as Google’s Play Store. Setting it up takes a few minutes, and then you’re done!

Worse Than Failure: CodeSOD: A Tern Off

Jim J's co-worker showed him this little snippet in the codebase.

foreach (ToolStripMenuItem item in documentMenuItem.DropDownItems)
{
    item.Enabled = item.Enabled ? Document.Status == DocumentStatusConsts.New : item.Enabled;
}

Tracing through the ternary: if the menu item is currently enabled, it gets enabled only if the document in question is new; otherwise it's set to itself (that is to say, it stays disabled).

Or, to put it differently, if it's not enabled, make sure it's not enabled.
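
Minus the pointless self-assignment, the whole ternary collapses to a boolean AND. A sketch of the straightforward version, reusing the names from the snippet above:

foreach (ToolStripMenuItem item in documentMenuItem.DropDownItems)
{
    // Stays enabled only for new documents; disabled items remain disabled.
    item.Enabled = item.Enabled && Document.Status == DocumentStatusConsts.New;
}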

My suspicion is that the original developer just really wanted to use a ternary, even if it didn't make much sense.

Jim writes:

When one of my colleagues showed me his find, I suggested he add this line into the loop: if (!item.Enabled) item.Enabled = false || item.Enabled;

Just to be absolutely sure the item will be disabled.


Planet Linux Australia: Simon Lyall: YouTube Channels I subscribe to in April 2020

I did a big twitter thread of the YouTube channels I am following. Below is a copy of the tweets. They are a quick description of the channel and a link to a sample video.

Lots of pop-Science and TV/Movie analysis channels plus a few on other topics.

I should mention that I watch the majority of YouTube videos at speed 1.5x since they usually speak quite slowly. To speed up videos, click on the settings “cog” and then select “Playback Speed”. YouTube lets you go up to 2x.


Chris Stuckmann reviews movies. During normal times he does a couple per week. Mostly current releases with some old ones. His reviews are low-spoiler although sometimes he’ll do an extra “Spoiler Review”. Usually around 6 minutes long.
Star Wars: The Rise of Skywalker – Movie Review

Wendover Productions does explainer videos. Air & Sea travel are quite common topics. Usually a bit better researched than some of the other channels and a little longer at around 12 minutes. Around 1 video per week.
The Logistics of the US Census

City Beautiful is a channel about cities and City planning. 1-2 videos per month. Usually around 10 minutes. Pitched for the amateur city and planning enthusiast
Where did the rules of the road come from?

PBS Eons does videos about the history of life on Earth. Lots of Dinosaurs, early humans and the like. Run and advised by experts so info is great quality. Links to refs! Accessible but dives into the detail. Around 1 video/week. About 10 minutes each.
How the Egg Came First

Pitch Meetings are a writer pitching a real (usually recent) movie or show to a studio exec. Both are played by Ryan George. Very funny. Part of the Screen Rant channel but I don’t watch their other stuff
Playlist
Netflix’s Tiger King Pitch Meeting

MrMobile [Michael Fisher] reviews Phones, Laptops, Smart Watches & other tech gadgets. Usually about one video/week. I like the descriptive style and good production values; not too much spec flooding.
A Stunning Smartwatch With A Familiar Failing – New Moto 360 Review

Verge Science does professional level stories about a range of Science topics. They usually are out in the field with Engineers and scientists.
Why urban coyote sightings are on the rise

Alt Shift X do detailed explainer videos about Books & TV Shows like Game of Thrones, Watchmen & Westworld. Huge amounts of detail and a great style with a wall of pictures. Weekly videos when shows are on plus subscriber extras.
Watchmen Explained (original comic)

The B1M talks about building and construction projects. Many videos are done with cooperation of the architects or building companies so a bit fluffy at times. But good production values and interesting topics.
The World’s Tallest Modular Hotel

CineFix does a variety of movie-related videos. Over the last year they’ve only put out one or two per month, but mostly high quality. A few years ago they were at higher volume and had more throw-aways
Jojo Rabbit – What’s the Difference?

Marques Brownlee (MKBHD) does tech reviews. Mainly phones but also other gear and the odd special. His videos are extremely high quality and well researched. Averaging 2 videos per week.
Samsung Galaxy S20 Ultra Review: Attack of the Numbers!

How it Should have Ended does cartoons of funny alternative endings for movies. Plus some other long running series. Usually only a few minutes long.
Avengers Endgame Alternate HISHE

Power Play Chess is a Chess channel from Daniel King. He usually covers 1 round/day from major tournaments as well as reviewing older games and other videos.
World Champion tastes the bullet | Firouzja vs Carlsen | Lichess Bullet match 2020

Tom Scott makes explainer videos mostly about science, technology and geography. Often filmed on site rather than being talks over pictures like other channels.
Inside The Billion-Euro Nuclear Reactor That Was Never Switched On

Screen Junkies does stuff about movies. I mostly watch their “Honest Trailers” but they sometimes do “Serious Questions” which are good too.
Honest Trailers | Terminator: Dark Fate

Half as Interesting is an offshoot of Wendover Productions (see above). It does shorter 3-5 minute weekly videos on a quick amusing fact or happening (that doesn’t justify a longer video)
United Airlines’ Men-Only Flights

Red Team Review is another movie and TV review channel. I was mostly watching them when Game of Thrones was on and since then they have had a bit less content. They are making some Game of Thrones videos narrated by the TV actors though
Game of Thrones Histories & Lore – The Rains of Castamere

Signum University do online classes about Fantasy (especially Tolkien) and related literature. Their channel features their classes and related videos. I mainly follow “Exploring The Lord of the Rings”. Often sounds better at 2x or 3x speed.
A Wizard of Earthsea: Session 01 – Mageborn

The Nerdwriter does approx monthly videos. Usually about a specific type of art, a painting or film making technique. Very high quality
How Walter Murch Worldized Film Sound

Real Life Lore does infotainment videos. “Answers to questions that you’ve never asked. Mostly over topics like history, geography, economics and science”.
This Was the World’s Most Dangerous Amusement Park

Janice Fung is a Sydney based youtuber who makes videos mostly about food and travel. She puts out 2 videos most weeks.
I Made the Viral Tik Tok Frothy DALGONA COFFEE! (Whipped Coffee Without Mixer!!)

Real Engineering is a bit more technical than the average popsci channel. They especially like doing videos covering flight dynamics, but they cover lots of other topics
How The Ford Model T Took Over The World

Just Write by Sage Hyden puts out a video roughly once a month. They are essays usually about writing and usually tied into a recent movie or show.
A Disney Monopoly Is A Problem (According To Disney’s Recess)

CGP Grey makes high quality explainer videos. Around one every month, usually with lots of animation.
The Trouble With Tumbleweed

Lessons from the Screenplay are “videos that analyze movie scripts to examine exactly how and why they are so good at telling their stories”
Casino Royale — How Action Reveals Character

HaxDogma is another TV Show review/analysis channel. I started watching him for his Watchmen Series videos and now watch his Westworld ones.
Official Westworld Trailer Breakdown + 3 Hidden Trailers

Lindsay Ellis does videos mostly about pop culture, usually movies. These days she only does a few a year but they are usually 20+ minutes.
The Hobbit: A Long-Expected Autopsy (Part 1/2)

A bonus couple of recommended Courses on ‘Crash Course
Crash Course Astronomy with Phil Plait
Crash Course Computer Science by Carrie Anne Philbin


Cryptogram: Friday Squid Blogging: Humboldt Squid Backlight Themselves to Communicate More Clearly

This is neat:

Deep in the Pacific Ocean, six-foot-long Humboldt squid are known for being aggressive, cannibalistic and, according to new research, good communicators.

Known as "red devils," the squid can rapidly change the color of their skin, making different patterns to communicate, something other squid species are known to do.

But Humboldt squid live in almost total darkness more than 1,000 feet below the surface, so their patterns aren't very visible. Instead, according to a new study, they create backlighting for the patterns by making their bodies glow, like the screen of an e-reader.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: Unproven Coronavirus Therapy Proves Cash Cow for Shadow Pharmacies

Many of the same shadowy organizations that pay people to promote male erectile dysfunction drugs via spam and hacked websites recently have enjoyed a surge in demand for medicines used to fight malaria, lupus and arthritis, thanks largely to unfounded suggestions that these therapies can help combat the COVID-19 pandemic.

A review of the sales figures from some of the top pharmacy affiliate programs suggests sales of drugs containing hydroxychloroquine rivaled that of their primary product — generic Viagra and Cialis — and that this as-yet-unproven Coronavirus treatment accounted for as much as 25 to 30 percent of all sales over the past month.

A Google Trends graph depicting the incidence of Web searches for “chloroquine” over the past 90 days.

KrebsOnSecurity reviewed a number of the most popular online pharmacy enterprises, in part by turning to some of the same accounts at these invite-only affiliate programs I relied upon for researching my 2014 book, Spam Nation: The Inside Story of Organized Cybercrime, from Global Epidemic to Your Front Door.

Many of these affiliate programs — going by names such as EvaPharmacy, Rx-Partners and Mailien/Alientarget — have been around for more than a decade, and were major, early catalysts for the creation of large-scale botnets and malicious software designed to enslave computers for the sending of junk email.

Their products do not require a prescription, are largely sourced directly from pharmaceutical production facilities in India and China, and are shipped via international parcel post to customers around the world.

In mid-March, two influential figures — President Trump and Tesla CEO Elon Musk — began suggesting that hydroxychloroquine should be more strongly considered as a treatment for COVID-19.

The pharmacy affiliate programs immediately took notice of a major moneymaking opportunity, noting that keyword searches for terms related to chloroquine suddenly were many times more popular than for the other mainstays of their business.

“Everyone is hysterical,” wrote one member of the Russian language affiliate forum gofuckbiz[.]com on Mar. 17. “Time to make extra money. Do any [pharmacy affiliate] programs sell drugs for Coronavirus or flu?”

The larger affiliate programs quickly pounced on the opportunity, which turned out to be a major — albeit short-lived — moneymaker. Below is a screenshot of the overall product sales statistics for the previous 30 days from all affiliates of PharmCash. As we can see, Aralen — a chloroquine drug used to treat and prevent malaria — was the third biggest seller behind Viagra and Cialis.

Recent 30-day sales figures from the pharmacy affiliate program PharmCash.

In mid-March, the affiliate program Rx-Partners saw a huge spike in demand for Aralen and other drugs containing chloroquine phosphate, and began encouraging affiliates to promote a new set of product teasers targeting people anxiously seeking remedies for COVID-19.

Their main promotion page — still online at about-coronavirus2019[.]com — touts the potential of Aralen, generic hydroxychloroquine, and generic Kaletra/Lopinavir, a drug used to treat HIV/AIDS.

An ad promoting various unproven remedies for COVID-19, from the pharmacy affiliate program Rx-Partners.

On Mar. 18, a manager for Rx-Partners said that like PharmCash, drugs which included chloroquine phosphate had already risen to the top of sales for non-erectile dysfunction drugs across the program.

But the boost in sales from the global chloroquine frenzy would be short-lived. Demand for chloroquine phosphate became so acute worldwide that India — the world’s largest producer of hydroxychloroquine — announced it would ban exports of the drug. On Mar. 25, India also began shutting down its major international shipping ports, leaving the pharmacy affiliate programs scrambling to source their products from other countries.

A Mar. 31 message to affiliates working with the Union Pharm program, noting that supplies of Aralen had dried up due to the shipping closures in India.

India recently said it would resume exports of the drug, and judging from recent posts at the aforementioned affiliate site gofuckbiz[.]com, denizens of various pharmacy affiliate programs are anxiously awaiting news of exactly when shipments of chloroquine drugs will continue.

“As soon as India opens and starts mail, then we will start everything, so get ready,” wrote one of Rx-Partners’ senior recruiters. “I am sure that there will still be demand for pills.”

Global demand for these pills, combined with India’s recent ban on exports, has conspired to create shortages of the drug for patients who rely on it to treat chronic autoimmune diseases, including lupus and rheumatoid arthritis.

While hydroxychloroquine has long been considered a relatively safe drug, some people have been so anxious to secure their own stash of the drug that they’ve turned to unorthodox sources.

On March 19, Fox News ran a story about how demand for hydroxychloroquine had driven up prices on eBay for bottles of chloroquine phosphate designed for removing parasites from fish tanks. A week later, an Arizona man died and his wife was hospitalized after the couple ingested one such fish tank product in hopes of girding their immune systems against the Coronavirus.

Despite many claims that hydroxychloroquine can be effective at fighting COVID-19, there is little real data showing how it benefits patients stricken with the disease. The largest test of the drug’s efficacy against Coronavirus showed no benefit in a large analysis of its use in U.S. veterans hospitals. On the contrary, there were more deaths among those given hydroxychloroquine versus standard care, researchers reported.

In an advisory released today, the U.S. Food and Drug Administration (FDA) cautioned against use of hydroxychloroquine or chloroquine for COVID-19 outside of the hospital setting or a clinical trial due to risk of heart rhythm problems.

Cryptogram: Global Surveillance in the Wake of COVID-19

OneZero is tracking thirty countries around the world that are implementing surveillance programs in the wake of COVID-19:

The most common form of surveillance implemented to battle the pandemic is the use of smartphone location data, which can track population-level movement down to enforcing individual quarantines. Some governments are making apps that offer coronavirus health information, while also sharing location information with authorities for a period of time. For instance, in early March, the Iranian government released an app that it pitched as a self-diagnostic tool. While the tool's efficacy was likely low, given reports of asymptomatic carriers of the virus, the app saved location data of millions of Iranians, according to a Vice report.

One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location. In Hong Kong, those arriving in the airport are given electronic tracking bracelets that must be synced to their home location through their smartphone's GPS signal.

Worse Than Failure: Error'd: Burrito Font

"I've always ordered my burritos in Times New Roman. I'll have to make sure to try the Helvetica option next time I go in," Winston M. writes.

 

"Giving its all and another 5%. That's a battery that I can be seriously proud of," wrote Chris.

 

James S. writes, "What are the odds that the amount of entropy that went into my password would result in personal data of mine. Now if only I knew what it was!"

 

Paul writes, "Announcement about the cloud? Something about AI? Perhaps a massively useful new feature added to Windows. This email can be whatever you want it to be!"

 

"Well, I guess I can let the price slide, it is an estimate after all," Carl C. wrote.

 

Peter W. writes, "I've spent a bit of money to get the best laptop within my budget and when I looked up what type of hardware hp have put into their Omen device, I was glad to see they had the most excellent microprocessor cache, video graphics, and audio system (not visible here) that is available in the world."

 


Planet Linux Australia: Francois Marier: Disabling mail sending from your domain

I noticed that I was receiving some bounced email notifications from a domain I own (cloud.geek.nz) to host my blog. These notifications were all for spam messages spoofing the From address since I do not use that domain for email.

I decided to try setting a strict DMARC policy to see if DMARC-using mail servers (e.g. GMail) would then drop these spoofed emails without notifying me about it.

I started by setting this initial DMARC policy in DNS in order to monitor the change:

@ TXT v=spf1 -all
_dmarc TXT v=DMARC1; p=none; ruf=mailto:dmarc@fmarier.org; sp=none; aspf=s; fo=0:1:d:s;

Then I waited three weeks without receiving anything before updating the relevant DNS records to this final DMARC policy:

@ TXT v=spf1 -all
_dmarc TXT v=DMARC1; p=reject; sp=reject; aspf=s;

This policy states that nobody is allowed to send emails for this domain and that any incoming email claiming to be from this domain should be silently rejected.
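
To double-check what a domain is actually publishing (substitute your own domain), a quick DNS query does it:

dig +short TXT _dmarc.cloud.geek.nz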

I haven't noticed any bounce notifications for messages spoofing this domain in a while, so maybe it's working?

Chaotic Idealism: Uphill. Both ways. You kids got it easy.

I’m stranded at home, because I can’t go out. I can’t work. To survive, I fill out paperwork for the government, proving that I need food and shelter, constantly facing the default assumption that I am trying to cheat the system.

I look at the rest of the world, people who say they are going crazy because they can’t leave their homes, who are enraged and frustrated because they are having trouble with the unemployment office, because they’ve waited a week or two weeks to get benefits.

And I think: Well, now they know what it’s like. Because unlike the people who have been dealing with this for a month or two, this has been my reality for fifteen years now. I waited for six months for benefits. Many people wait two years. Driving is not possible for me, nor is public transportation readily available. The way people live in isolation now, short on money and housebound, is the way my life has been for over a decade.

I’m disabled, and that means I’m a second-class citizen. The world takes it for granted that I have to live like this. And, however much I wish I could wipe this virus from the face of the earth so it would never make anyone sad, scared, or lonely again, I find it vaguely satisfying to have people finally acknowledge that the way I have had to live is lonely, unjust, and frustrating.

Don’t get me wrong; I don’t hate my life, or even my disability. I find happiness and I’m satisfied. But there are annoying things in my life that nobody seems to recognize as annoying, that people seem to take for granted as being part of the experience of having a disability and thus unchangeable. But they are not unchangeable. They come from society being too inflexible to include us the way it should. When people are upset about the things I have had to deal with for years, that tells me that those things really are as unacceptable as I say they are, even if, normally, nobody seems to think so.

Planet Linux Australia: David Rowe: FreeDV Beacon Maintenance

There’s been some recent interest in the FreeDV Beacon project, originally developed back in 2015. A FreeDV beacon was operating in Sunbury, VK3, for several years and was very useful for testing FreeDV.

After being approached by John (VK3IC) and Bob (VK4YA), I decided to dust off the software and bring it across to a GitHub repo. It’s now running on my laptop happily and I hope John and Bob will soon have some beacons running on the air.

I’ve added support for FreeDV 700C and 700D modes, finding a tricky bug in the process. I really should read the instructions for my own API!

Thanks also to Richard (KF5OIM) for help with the CMake build system.



Krebs on Security: When in Doubt: Hang Up, Look Up, & Call Back

Many security-conscious people probably think they’d never fall for a phone-based phishing scam. But if your response to such a scam involves anything other than hanging up and calling back the entity that claims to be calling, you may be in for a rude awakening. Here’s how one security and tech-savvy reader got taken for more than $10,000 in an elaborate, weeks-long ruse.

Today’s lesson in how not to get scammed comes from “Mitch,” the pseudonym I picked for a reader in California who shared his harrowing tale on condition of anonymity. Mitch is a veteran of the tech industry — having worked in security for several years at a fairly major cloud-based service — so he’s understandably embarrassed that he got taken in by this confidence scheme.

On Friday, April 17, Mitch received a call from what he thought was his financial institution, warning him that fraud had been detected on his account. Mitch said the caller ID for that incoming call displayed the same phone number that was printed on the back of his debit card.

But Mitch knew enough of scams to understand that fraudsters can and often do spoof phone numbers. So while still on the phone with the caller, he quickly logged into his account and saw that there were indeed multiple unauthorized transactions going back several weeks. Most were relatively small charges — under $100 apiece — but there were also two very recent $800 ATM withdrawals from cash machines in Florida.

If the caller had been a fraudster, he reasoned at the time, they would have asked for personal information. But the nice lady on the phone didn’t ask Mitch for any personal details. Instead, she calmly assured him the bank would reverse the fraudulent charges and said they’d be sending him a new debit card via express mail. After making sure the representative knew which transactions were not his, Mitch thanked the woman for notifying him, and hung up.

The following day, Mitch received another call about suspected fraud on his bank account. Something about that conversation didn’t seem right, and so Mitch decided to use another phone to place a call to his bank’s customer service department — while keeping the first caller on hold.

“When the representative finally answered my call, I asked them to confirm that I was on the phone with them on the other line in the call they initiated toward me, and so the rep somehow checked and saw that there was another active call with Mitch,” he said. “But as it turned out, that other call was the attackers also talking to my bank pretending to be me.”

Mitch said his financial institution has in the past verified his identity over the phone by sending him a one-time code to the cell phone number on file for his account, and then asking him to read back that code. After he hung up with the customer service rep he’d phoned, the person on the original call said the bank would be sending him a one-time code to validate his identity.

Now confident he was speaking with a representative from his bank and not some fraudster, Mitch read back the code that appeared via text message shortly thereafter. After more assurances that any additional phony charges would be credited to his account and that he’d be receiving a new card soon, Mitch was annoyed but otherwise satisfied. He said he checked his account online several times over the weekend, but saw no further signs of unauthorized activity.

That is, until the following Monday, when Mitch once again logged in and saw that a $9,800 outgoing wire transfer had been posted to his account. At that point, it dawned on Mitch that both the Friday and Saturday calls he received had likely been from scammers — not from his bank.

Another call to his financial institution and some escalation to its fraud department confirmed that suspicion: The investigator said another man had called in on Saturday posing as Mitch, had provided a one-time code the bank texted to the phone number on file for Mitch’s account — the same code the real Mitch had been tricked into giving up — and then initiated an outgoing wire transfer.

It appears the initial call on Friday was to make him think his bank was aware of and responding to active fraud against his account, when in actuality the bank was not at that time. Also, the Friday call helped to set up the bigger heist the following day.

Mitch said he and his bank now believe that at some point his debit card and PIN were stolen, most likely by a skimming device planted at a compromised point-of-sale terminal, gas pump or ATM he’d used in the past few weeks. Armed with a counterfeit copy of his debit card and PIN, the fraudsters could pull money out of his account at ATMs and go shopping in big box stores for various items. But to move lots of money out of his account all at once, they needed Mitch’s help.

To make matters worse, the fraud investigator said the $9,800 wire transfer had been sent to an account at an online-only bank that also was in Mitch’s name. Mitch said he didn’t open that account, but that this may have helped the fraudsters sidestep any fraud flags for the unauthorized wire transfer, since from the bank’s perspective Mitch was merely wiring money to another one of his accounts. Now, he’s facing the arduous task of getting identity theft (new account fraud) cleaned up at the online-only bank.

Mitch said that in retrospect, there were several oddities that should have been additional red flags. For one thing, on his outbound call to the bank on Saturday while he had the fraudsters on hold, the customer service rep asked if he was visiting family in Florida.

Mitch replied that no, he didn’t have any family members living there. But when he spoke with the bank’s fraud department the following Monday, the investigator said the fraudsters posing as Mitch had succeeded in adding a phony “travel notice” to his account — essentially notifying the bank that he was traveling to Florida and that it should disregard any geographic-based fraud alerts created by card-present transactions in that region. That would explain why his bank didn’t see anything strange about their California customer suddenly using his card in Florida.

Also, when the fake customer support rep called him, she stumbled a bit when Mitch turned the tables on her. As part of her phony customer verification script, she asked Mitch to state his physical address.

“I told her, ‘You tell me,’ and she read me the address of the house I grew up in,” Mitch recalled. “So she was going through some public records she’d found, apparently, because they knew my previous employers and addresses. And she said, ‘Sir, I’m in a call center and there’s cameras over my head. I’m just doing my job.’ I just figured she was just new or shitty at her job, but who knows maybe she was telling the truth. Anyway, the whole time my girlfriend is sitting next to me listening to this conversation and she’s like, ‘This sounds like bullshit.'”

Mitch’s bank managed to reverse the unauthorized wire transfer before it could complete, and they’ve since put all the stolen funds back into his account and issued a new card. But he said he still feels like a chump for not observing the golden rule: If someone calls saying they’re from your bank, just hang up and call them back — ideally using a phone number that came from the bank’s Web site or from the back of your payment card. As it happened, Mitch only followed half of that advice.

What else could have made it more difficult for fraudsters to get one over on Mitch? He could have enabled mobile alerts to receive text messages anytime a new transaction posts to his account. Barring that, he could have kept a closer eye on his bank account balance.

If Mitch had previously placed a security freeze on his credit file with the three major consumer credit bureaus, the fraudsters likely would not have been able to open a new online checking account in his name with which to receive the $9,800 wire transfer (although they might have still been able to wire the money to another account they controlled).

As Mitch’s experience shows, many security-conscious people tend to focus on protecting their online selves, while perhaps discounting the threat from less technically sophisticated phone-based scams. In this case, Mitch and his bank determined that his assailants never once tried to log in to his account online.

“What’s interesting here is the entirety of the fraud was completed over the phone, and at no time did the scammers compromise my account online,” Mitch said. “I absolutely should have hung up and initiated the call myself. And as a security professional, that’s part of the shame that I will bear for a long time.”

Further reading:

Voice Phishing Scams are Getting More Clever
Why Phone Numbers Stink as Identity Proof
Apple Phone Phishing Scams Getting Better
SMS Phishing + Cardless ATM = Profit

Cryptogram: Chinese COVID-19 Disinformation Campaign

The New York Times is reporting on state-sponsored disinformation campaigns coming out of China:

Since that wave of panic, United States intelligence agencies have assessed that Chinese operatives helped push the messages across platforms, according to six American officials, who spoke on the condition of anonymity to publicly discuss intelligence matters. The amplification techniques are alarming to officials because the disinformation showed up as texts on many Americans' cellphones, a tactic that several of the officials said they had not seen before.

Worse Than Failure: CodeSOD: WTFYou, Pay Me

Julien’s employer has switched their payroll operations to a hosted solution. The hosted solution has some… interesting features. The fact that it has a “share” button, implying you can share your paystub information with other people, is unusual (but good: keeping salaries confidential only helps management underpay their employees). More problematic is that this feature emails it, and instead of putting in an email address manually, you pick from a drop-down list, which contains the email of every user of the hosted system.

Seeing this, Julien had to take a peek at the code, just to see what other horrors might lurk in there.

Let’s open with some ugly regexes:

var regExtURL =/(http(s)?|ftp:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)/;
	///^(?:(?:https?|ftp):\/\/)?[\w.-]+(?:\S+(?::\S*)?@)?(?:(?!(?:0|127)(?:\.\d{1,3}){3})(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))|(?:(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)(?:\.(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)*(?:\.(?:[a-z\u00a1-\uffff]{2,}))\.?)(?::\d{2,5})?(?:[/?#]\S*)?$/;  	
function isValidURL(thisObj){
	if (thisObj.value != '' && !regExtURL.test(thisObj.value)){
	    alert('Veuillez entrer une URL valide.');
	    return false;
	}
};

var re = /^(([a-zA-Z0-9-_"+"]+(\.[a-zA-Z]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})|(([a-zA-Z0-9])+(\-[a-zA-Z0-9]+)*(\.[a-zA-Z0-9-]+)*(\.[a-zA-Z]{2,})+))$/;
function isEmailKey(thisObj){
	//var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z0-9]+\.)+[a-zA-Z]{2,}))$/;
	//var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})|(([a-zA-Z0-9])+(\-[a-zA-Z0-9]+)*(\.[a-zA-Z0-9-]+)*(\.[a-zA-Z]{2,})+))$/;
	
	if (thisObj.value != '' && !re.test(thisObj.value)){
	    alert('Please enter a valid email address.');
	    return false;
	}
};

I like that they kept their old attempts at validating email addresses right there in the code. Real helpful for us. Presenting errors via the built-in alert method is also really great UX.

Do you want a super complex and probably inaccurate date validator? We’ve got one of those:

function validateDateValue(obj, format, errorMesage){
	 try {
		 format = format.toUpperCase();
	        if(obj != null){

	            var dateValue = obj.value;
	            
	            if (dateValue.length == 0)
	            	return;
	            
	            if (dateValue.length > 10)
	            	dateValue = dateValue.substring(0, 10);
	            
	            if (dateValue.length < 6) {
	            	alert(errorMesage);
	            	return;
	            }
	            
	            var d = null;
	            
	            var sep = getSeparator(format);
	            if (sep.length > 0)
	            	d = stringToDate(dateValue, format, sep);
	            else d = Date.parse(dateValue.substring(0,4) + '-' + dateValue.substring(4,6) + '-' + dateValue.substring(6,8));
	            
	            if (d == null) {	            
	            	if (dateValue.length == 6 ) {
		            	
		            	if (d == null)
			            	d = stringToDate(dateValue,"ddMMyy","");
			            if (d == null)
			            	d = stringToDate(dateValue,"MMddyy","");
			            
		            } else if (dateValue.length == 8 ) {
		            	d = Date.parse(dateValue.substring(0,4) + '-' + dateValue.substring(4,6) + '-' + dateValue.substring(6,8));
		            	if(isNaN(d))
		            		d = null;
		            	if (d == null)	            	            
		            		d = stringToDate(dateValue,"dd/MM/yy","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd-MM-yy","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd.MM.yy",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd MM yy"," ");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM/dd/yy","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM-dd-yy","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM.dd.yy",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM dd yy"," ");
		            	
		            	if (d == null)
		            		d = stringToDate(dateValue,"yy/MM/dd","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yy-MM-dd","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yy.MM.dd",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yy MM dd"," ");
		            } else {
		            	if (d == null)	            	            
		            		d = stringToDate(dateValue,"dd/MM/yyyy","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd-MM-yyyy","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd.MM.yyyy",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"dd MM yyyy"," ");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM/dd/yyyy","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM-dd-yyyy","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM.dd.yyyy",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"MM dd yyyy"," ");
		            	
		            	if (d == null)
		            		d = stringToDate(dateValue,"yyyy/MM/dd","/");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yyyy-MM-dd","-");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yyyy.MM.dd",".");
		            	if (d == null)
		            		d = stringToDate(dateValue,"yyyy MM dd"," ");
		            			            	
		            }
	            }
	            
	          
	            if (d == null) {
	            	alert(errorMesage);
	            } else {
	            	
		            var formatedDate = moment(d).format(format);	            	
		            obj.value = formatedDate;	            
	            }   
	        }
	 } catch(e) {
		 alert(errorMesage);
	 }
};

In fact, that one’s so good, let’s do it again!

function validateDateValue_(dateValue, format, errorMesage) {

	try {
		format = format.toUpperCase();

		if (dateValue.length == 0) {
			return errorMesage;
		}

		if (dateValue.length > 10)
			dateValue = dateValue.substring(0, 10);

		if (dateValue.length < 6) {
			return errorMesage;
		}

		var d = null;

		var sep = getSeparator(format);
		if (sep.length > 0)
			d = stringToDate(dateValue, format, sep);
		else
			d = Date.parse(dateValue.substring(0, 4) + '-' + dateValue.substring(4, 6) + '-' + dateValue.substring(6, 8));

		if (d == null) {
			if (dateValue.length == 6 ) {
            	
            	if (d == null)
	            	d = stringToDate(dateValue,"ddMMyy","");
	            if (d == null)
	            	d = stringToDate(dateValue,"MMddyy","");
	            
            } else if (dateValue.length == 8) {
				d = Date.parse(dateValue.substring(0, 4) + '-' + dateValue.substring(4, 6) + '-' + dateValue.substring(6, 8));
				if (isNaN(d))
					d = null;
				if (d == null)
					d = stringToDate(dateValue, "dd/MM/yy", "/");
				if (d == null)
					d = stringToDate(dateValue, "dd-MM-yy", "-");
				if (d == null)
					d = stringToDate(dateValue, "dd.MM.yy", ".");
				if (d == null)
					d = stringToDate(dateValue, "MM/dd/yy", "/");
				if (d == null)
					d = stringToDate(dateValue, "MM-dd-yy", "-");
				if (d == null)
					d = stringToDate(dateValue, "MM.dd.yy", ".");
				if (d == null)
	            	d = stringToDate(dateValue,"ddMMyyyy","");
	            if (d == null)
	            	d = stringToDate(dateValue,"MMddyyyy","");
			} else {
				if (d == null)
					d = stringToDate(dateValue, "dd/MM/yyyy", "/");
				if (d == null)
					d = stringToDate(dateValue, "dd-MM-yyyy", "-");
				if (d == null)
					d = stringToDate(dateValue, "dd.MM.yyyy", ".");
				if (d == null)
					d = stringToDate(dateValue, "MM/dd/yyyy", "/");
				if (d == null)
					d = stringToDate(dateValue, "MM-dd-yyyy", "-");
				if (d == null)
					d = stringToDate(dateValue, "MM.dd.yyyy", ".");
				if (d == null)
					d = stringToDate(dateValue, "yyyy-MM-dd", "-");
			}
		}

		if (d == null) {
			return errorMesage;
		} else {
			var formatedDate = moment(d).format(format);
			dateValue = formatedDate;
		}
	} catch (e) {
		return errorMesage;
	}
	return "";
};

Yes, that is basically the identical method, but some of the parameter names are different, and one of them has more sensible indentation than the other.
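
What makes the duplication extra galling: both copies already call moment.js for the final formatting, and moment can try an array of candidate formats in a single strict parse. A minimal sketch (not the vendor’s code; format list abbreviated) of what the entire null-check cascade could collapse into:

function parseLooseDate(dateValue) {
	// moment(input, formats, strict): tries each format in order and
	// returns an invalid moment if none matches exactly
	var formats = [
		'DD/MM/YYYY', 'DD-MM-YYYY', 'DD.MM.YYYY', 'DD MM YYYY',
		'MM/DD/YYYY', 'MM-DD-YYYY', 'MM.DD.YYYY', 'MM DD YYYY',
		'YYYY/MM/DD', 'YYYY-MM-DD', 'YYYY.MM.DD', 'YYYY MM DD',
		'DDMMYY', 'MMDDYY' // the separator-less six-digit cases
	];
	var m = moment(dateValue, formats, true); // true = strict matching
	return m.isValid() ? m : null;
}

Note that moment’s format tokens use uppercase DD and YYYY, unlike the Java-style tokens the original passes around.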

But that only handles dates. What about datetimes?

function validateDateTimeValue(obj, format, errorMesage){
	format = format.toUpperCase();
	
	format = format.substring(0, format.length-2) + "mm";
		
	
    if(obj != null){
    	var dateTimeValue = obj.value;
    
    	    	    	
        if (dateTimeValue.length == 0)
         	return;
        
        if (dateTimeValue.length == 8 || dateTimeValue.length == 10 || dateTimeValue.length == 6)
        	dateTimeValue = dateTimeValue + " 00:00";
        
        if (dateTimeValue.length > 16)
        	dateTimeValue = dateTimeValue.substring(0, 16);
        
        
    	if (dateTimeValue.length < 12 || dateTimeValue.length > 16){
        	alert(errorMesage);
        	return;
        }
    	
    	var time = dateTimeValue.substring(dateTimeValue.length-6, dateTimeValue.length);
    	
    	if (time.charAt(0) != ' '){
        	alert(errorMesage);
        	return;
        }
    	
    	var h = parseInt(time.substring(1, 3));
    	if (isNaN(h)){
        	alert(errorMesage);
        	return;
        }
    	
    	var m = parseInt(time.substring(4, 6));
    	
    	if (isNaN(m)){
        	alert(errorMesage);
        	return;
        }
    	
    	var d = null;
    	var dateValue = dateTimeValue.substring(0,dateTimeValue.length-6);
    	
    	var sep = getSeparator(format);
    	
        if (sep.length>0)
        	d = stringToDateTime(dateValue,format.substring(0, format.length-6), sep,h,m);
        else d = Date.parse(dateValue.substring(0,4) + '-' + dateValue.substring(4,6) + '-' + dateValue.substring(6,8) + 'T' + h + ':' + m + ':00');
        
        
        if (d == null){
        	
        	if (dateTimeValue.length == 12 ){
	        	
	        	if (d == null)
	            	d = stringToDateTime(dateValue,"ddMMyy","",h,m);
	            if (d == null)
	            	d = stringToDateTime(dateValue,"MMddyy","",h,m);
	        	
	        } else if (dateTimeValue.length == 14 ){
	        	d = Date.parse(dateValue.substring(0,4) + '-' + dateValue.substring(4,6) + '-' + dateValue.substring(6,8) + 'T' + h + ':' + m + ':00');
	        	if(isNaN(d))
	        		d = null;
	        	if (d == null)	            	            
	        		d = stringToDateTime(dateValue,"dd/MM/yy","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd-MM-yy","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd.MM.yy",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd MM yy"," ",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM/dd/yy","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM-dd-yy","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM.dd.yy",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM dd yy"," ",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yy/MM/dd","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yy-MM-dd","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yy.MM.dd",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yy MM dd"," ",h,m);
	            if (d == null)
	            	d = stringToDateTime(dateValue,"ddMMyyyy","",h,m);
	            if (d == null)
	            	d = stringToDateTime(dateValue,"MMddyyyy","",h,m);
	        } else {
	        
	        	if (d == null)	            	            
	        		d = stringToDateTime(dateValue,"dd/MM/yyyy","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd-MM-yyyy","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd.MM.yyyy",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"dd MM yyyy"," ",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM/dd/yyyy","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM-dd-yyyy","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM.dd.yyyy",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"MM dd yyyy"," ",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yyyy/MM/dd","/",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yyyy-MM-dd","-",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yyyy.MM.dd",".",h,m);
	        	if (d == null)
	        		d = stringToDateTime(dateValue,"yyyy MM dd"," ",h,m);
	        }
        }
        
        
        if (d == null)
        {
        	alert(errorMesage);
        }
        else{        	
            var formatedDate = moment(d).format(format);
            obj.value = formatedDate;	            
        }
    	    	
    	
    }
    
    return true;
	
}

And is it also duplicated? You know it is, following the same underscore naming convention: validateDateTimeValue_. (I won’t share it again.)

Okay, complex regexes that you can’t debug are bad. Custom date handling code is a WTF. Duplicating that code for no clear reason is bizarre. But what’s the stinger?

How about “hardcoded credentials for connecting to the database”?

function Register(){
	registerCurrentUserToFDB("Doe", "name surname", "abc@abc.com", "chgitem", "0fc6c0427cea929a3e21028f68cecf42");
}
function Login(){
	loginUserToFDB("abc@abc.com", "0fc6c0427cea929a3e21028f68cecf42");
}

In this case, the backend is Firebase (that's the FDB above), so this is client-side JavaScript phoning home to a Firebase backend with credentials hardcoded right in the page source.
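
For the record, a minimal sketch of what the login could look like without baked-in credentials, assuming the Firebase v8 web SDK (the form field IDs below are hypothetical):

function login() {
	var email = document.getElementById('email').value;
	var password = document.getElementById('password').value;
	// Authenticate with what the user typed, not constants in the page
	firebase.auth().signInWithEmailAndPassword(email, password)
		.then(function (cred) {
			console.log('Signed in as', cred.user.email);
		})
		.catch(function (err) {
			console.error('Login failed:', err.message);
		});
}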


,

TED5 ways to live (and thrive) while social distancing

The novel coronavirus has dramatically changed how we spend time and share physical and virtual space with each other. On Friday, March 27, conflict mediator and author Priya Parker joined head of TED Chris Anderson and current affairs curator Whitney Pennington Rodgers on TED Connects to discuss what we all can do to stay connected and sustain relationships while apart during the pandemic. Here’s some advice to help you get through this uncertain time:

Bring intention to planning a virtual gathering

As platforms like Zoom, Slack and email become more integrated into our lives, it’s clear that technology will play an important role in helping us keep in touch. Whether you’re organizing a Zoom dinner party or FaceTiming a friend, Parker invites us to consider how we can elevate the conversation beyond just check-ins. In planning a virtual gathering, ask:

  • Who’s joining and why?
  • What are your community’s needs?
  • What’s the reason you’re coming together?

As the pandemic evolves, these needs will likely shift. Stay attuned to the kinds of connections your communities are seeking.

Include fun themes to elevate your digital get-togethers

Parker suggests centering your gatherings around themes or activities to encourage more meaningful and purposeful conversations. Incorporate elements of the physical world to create a shared experience, like asking everyone to wear a funny costume or making the same recipe together. Though screens don’t quite replace the energy of in-person gatherings, we can still strengthen community bonds by reminding ourselves that there are real people on the other end of our devices.

Set healthy boundaries to maintain wellbeing

As we’re figuring out the best way to exist in the digital world, it’s also crucial we put in the effort to meaningfully connect with those we’re quarantining with. The distinctions between time to work, socialize and rest can grow blurrier by the day, so be sure to set boundaries and ground rules with those you live with. In having this conversation with your roommates, family or partner, reflect on these prompts:

  • How do you want to distinguish time spent together versus apart?
  • How do you want to share time together?
  • Since we look at screens most of the day, could it be helpful to set no-screen times or brainstorm new, non-digital ways to hang out?

Allow yourself to reflect on the unknown

It’s important to acknowledge that this is not a normal time, Parker says. The coronavirus pandemic has transformed the world, and as a global society we’ll experience the reverberations of this period as they ripple across every sector of human life. Make sure to create space for those conversations, too.

Take time to wander through the unknown, to talk about how we are being changed — individually and collectively — by this shared experience. It’s perfectly normal to feel worried, vulnerable, even existential, and this may be a great time to lean into those feelings and think about what really matters to you.

Recognize the power and feeling community brings — no matter the size

While the coronavirus pandemic has physically isolated many of us from each other, our ingenuity and resilience ensures that we can still build and forge community together. Across the world, people are gathering in new and amazing ways to set up “care-mongering” support groups, sing with their neighbors, take ceramics classes, knit together and break bread.

Now is the time to discover (or rediscover) the value and power of community. We are all members of many different communities: our neighborhoods, families, countries, faith circles and so on. Though we’re living in unprecedented times of social isolation, we can forge stronger bonds by gathering in ways that reflect our best values and principles. In the United Kingdom, a recent campaign asked people across the country to go outside at a synchronized time and collectively applaud health workers on the frontlines of the crisis; a similar effort was made across India to ring bells in honor of the ill and those caring for them. During this crisis and beyond, we can use thoughtful ritual-making to transform our unease and isolation into community bonding.

“Gathering is contagious,” Parker says. “These small, simple ideas allow people to feel like we can shape some amount — even a small amount — of our collective reality together.”

Looking for more tips, advice and wisdom? Watch the full conversation with Priya below:

Sociological ImagesPartisanship and the Pandemic

Can political leaders put partisanship aside to govern in a crisis? The COVID-19 pandemic has proved to be a crucial test of politicians’ willingness to put state before party. Acting swiftly to slow the spread of a novel virus and cooperating with cross-partisans could mean the difference between life and death for many state residents.

The first confirmed case of the novel coronavirus in the United States was reported in Washington state in January 2020. New cases, including incidents of community spread, continued to be recorded across the country in February. However, federal-level efforts to “flatten the curve” did not begin in force until March. Michigan’s Democratic Governor Gretchen Whitmer was among the first governors to openly criticize the Trump administration’s slow response. Her criticism led to an open partisan feud on Twitter between the two leaders.

In the absence of a national order to limit the virus’ spread within the country, state governors took action. Leaders in states with some of the earliest-recorded cases – such as Washington, Illinois, and California – put stay-at-home or shelter-in-place orders into effect shortly after the US closed its northern and southern borders to non-essential travel. In a matter of weeks, most states’ residents were under similar orders.

Did governors’ decisions to order their states’ residents to hunker down vary by party? In the figure below, I have plotted the date stay-at-home or shelter-in-place orders went into effect (as of April 15, according to the New York Times) by the date of the state’s first reported confirmed case of COVID-19 (according to US News & World Report). States with Democratic governors are labeled in blue and states with Republican governors in red. As of April 15, no statewide stay-home orders had been issued in the Republican-governed states labeled in grey on the plot.

Of the 50 states plus Washington DC and Puerto Rico, a total of 44 governors have issued stay-at-home or shelter-in-place orders. All Democratic-governed states were under similar orders after Governor Janet Mills called for Maine’s residents to stay home beginning April 2. By contrast, just over two-thirds of states led by Republican executives have mandated residents stay home. Eight states – all led by Republicans – had not issued such statewide orders as of April 15, 2020. States without stay-at-home orders have had substantial outbreaks of COVID-19, including in South Dakota, where nearly 450 Smithfield Foods workers were infected in April, causing the plant to close indefinitely.

Republican governors have generally been slower to issue restrictions on residents’ non-essential movement. Democrats and Republicans govern an equal number of states and territories on the above plot (26 each). Fifteen Democratic governors had issued statewide stay-home orders by March 26. The fifteenth Republican governor to mandate state residents stay home did not put this order into effect until April 3. This move came after all states with Democratic governors had announced similar orders and over two weeks after COVID-19 cases had been confirmed in all states.

The median Democratic governor mandated residents stay home 21 days after the state’s first confirmed case. By contrast, the median Republican governor took four additional days (25) to restrict residents’ non-essential movement, not accounting for states without stay-home orders as of April 15.

In short, the timing of governors’ decisions to mandate #stayhomesavelives appears to be partisan. However, there are select cases of governors putting public health before party. Ohio’s Republican Governor Mike DeWine has been heralded as one example. He was the first governor to order all schools to close, an action for which CNN described DeWine as the “anti-Trump on coronavirus.” These deviations from the norm suggest that divisive partisanship is not inevitable when governing a crisis.

Morgan C. Matthews is a PhD candidate in sociology at the University of Wisconsin-Madison. She studies gender, partisanship, and U.S. political institutions.

(View original at https://thesocietypages.org/socimages)