Planet Russell


Cryptogram: First Look Media Shutting Down Access to Snowden NSA Archives

The Daily Beast is reporting that First Look Media -- home of The Intercept and Glenn Greenwald -- is shutting down access to the Snowden archives.

The Intercept was the home for Greenwald's subset of Snowden's NSA documents since 2014, after he parted ways with the Guardian the year before. I don't know the details of how the archive was stored, but it was offline and well secured -- and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.

The article doesn't say what "shutting down access" means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.

This doesn't mean that we're done with the documents. Glenn Greenwald tweeted:

Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I've been looking for the right partner -- an academic institution or research facility -- that has the funds to robustly publish.

I'm sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a "and then they had ten years to improve this" mentality.

Eventually it'll all become public, but not before it is 100% history and 0% current events.

Worse Than Failure: CodeSOD: Swing and You're Out

George G was hired to do some UI work for a company which sold a suite of networking hardware. As networking hardware needs to be highly configurable, George was hired on to do “some minor tweaks” to the UI. “It’s just some sizing issues, fonts, the like. I’m sure you’ll do something to the stylesheets or whatever,” said the boss.

The boss didn’t know what they were talking about. The UI for some of the products was a web based config tool. That was their consumer-grade products. Their professional grade products used the same Java program which was originally released 15 years earlier. There were no stylesheets. Instead, there was an ancient and wobbling pile of Java Swing UI code, maintained by a “master” who had Strong Opinions™ about how that code should look.

For example, dispatching a call to a method is indirection. Indirection is confusing. So inline those calls, especially if there's a conditional involved: inline all the “ifs”. Factory methods and other tools for constructing complex objects are confusing, so always inline your calls to constructors, and always pass as many parameters as you can, except for the times where you don’t do that, because why would we be consistent about anything?

All of the developers on the project had to attend to the master’s wishes during code reviews, where the master gleefully unrefactored code “to make it more clear.”

Also, keep in mind that this UI started back in an era where “800x600” was a viable screen resolution, and in fact, that’s the resolution it was designed against. On a modern monitor, it’s painfully tiny, stuffed with tabs and icons and other UI widgets. Over the years, the UI code has been tweaked to handle edge cases: one customer had the UI zoom turned on, so now there were piles of conditionals to check if the UI needed to be re-laid out. Somebody got a HiDPI display, so again, a bunch of checks and custom code paths, all piled together.

Speaking of layout, Swing was a prime case of Java taking object orientation to the extreme, so in addition to passing in widgets you want displayed, you also can supply a layout object which decides how to fill the frame. There was a whole library of them, but if you wanted a flexible layout that also handled screen scaling well, you had to use the most complicated one: Grid Bag. The Grid Bag, as the name implies, is a grid, but where the grid cells can be arbitrary sizes. You control this by adding constraints to the flow. It’s peak Java overcomplification, so even simple UIs tend to get convoluted, and with the “inline all the things” logic, you end up with code like this:

if( os.getSystemFontSize( NORMAL ) == 14 )
text = new JText("5", new GridBagConstraints(3, 1, 3,3, 1.0, 0.5, GridBagConstraints.PAGE_END, 1.5, Insets(10,5,10,5, 5, 5 ) );
else   
text = new JText("5", new GridBagConstraints(3, 1, 3,3, 0.99, 0.4, GridBagConstraints.PAGE_END, 1.5, Insets(4,2,4,5, 5, 5  ) );

This particular code checks to see if the user has their font set to 14pt. If they do, we’ll set a constraint one way. If it’s any other value, we’ll set the constraint a different way. What is the expected result of that constraint? Why all this just to display the number 5? There are a lot of numbers other than 14, and they’re all going to impact the layout of the screen.

George made it two months, and then quit. This happened just a week after another developer had quit. Another quit a week later. No one in the management chain could understand why they were losing developers so quickly, with only the master remaining behind.


Cory Doctorow: Toronto! I'm at the Metro Reference Library tonight at 7PM with my new book RADICALIZED! Next up: Chicago, San Francisco, Portland/Ft Washington…

We had a hell of an event last night at The Strand in NYC, and I’m about to head to the airport for my flight to Toronto for tonight’s event at the Metro Reference Library, hosted by the Globe & Mail’s Barry Hertz.

Tomorrow it’s Chicago’s C2E2 festival and then to Berkeley for an event with the writer and photographer Richard Kadrey, and then the Revolutionary Reads program at Fort Washington’s Clark College (just outside of Portland, OR); and then the tour takes me to Seattle and Anaheim! I hope you’ll come out and say hi! (Image: Vlado Vince)

Planet Debian: Arturo Borrero González: The martian packet case in our Neutron floating IP setup

Networking

A community member opened a bug the other day related to a weird networking behavior in the Cloud VPS service, offered by the Cloud Services team at Wikimedia Foundation. This VPS hosting service is based on Openstack, and we implement the networking bits by means of Neutron.

Our current setup is based on Openstack Mitaka (old, I know) and the networking architecture we use is extensively described in our docs. What is interesting today is our floating IP setup, which Neutron uses by means of the Netfilter NAT engine.

Neutron creates a couple of NAT rules for each floating IP, to implement both SNAT and DNAT. In our setup, if a VM uses a floating IP, then all its traffic to and from The Internet will use this floating IP. In our case, the floating IP range is made of public IPv4 addresses.
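
For illustration only, the pair of rules is conceptually similar to this hand-written sketch (the real rules live in Neutron-managed chains and carry additional matches; the addresses are the ones from the example below):

# DNAT: traffic arriving for the floating IP is rewritten to the VM fixed IP
iptables -t nat -A PREROUTING  -d 185.15.56.55/32 -j DNAT --to-destination 172.16.0.148
# SNAT: traffic leaving the VM towards the Internet is rewritten to the floating IP
iptables -t nat -A POSTROUTING -s 172.16.0.148/32 -j SNAT --to-source 185.15.56.55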

WMCS neutron setup

The bug/weird behavior consisted of the VM being unable to contact itself using its floating IP. A packet is generated in the VM with the floating IP as the destination address, a packet like this:

172.16.0.148 > 185.15.56.55 ICMP echo request

This packet reaches the neutron virtual router, and I could see it in tcpdump:

root@neutron-router:~# tcpdump -n -i qr-defc9d1d-40 icmp and host 172.16.0.148
11:51:48.652815 IP 172.16.0.148 > 185.15.56.55: ICMP echo request, id 32318, seq 1, length 64

Then the PREROUTING NAT rule applies, translating 185.15.56.55 into 172.16.0.148. The corresponding conntrack NAT engine event:

root@neutron-router:~# conntrack -E -p icmp --src 172.16.0.148
    [NEW] icmp     1 30 src=172.16.0.148 dst=185.15.56.55 type=8 code=0 id=32395 [UNREPLIED] src=172.16.0.148 dst=172.16.0.148 type=0 code=0 id=32395

When this happens, the packet is put back on the wire, and I could see it again in a tcpdump running on the Neutron server box. You can see the two packets, the first without NAT, the second with the NAT applied:

root@neutron-router:~# tcpdump -n -i qr-defc9d1d-40 icmp and host 172.16.0.148
11:51:48.652815 IP 172.16.0.148 > 185.15.56.55: ICMP echo request, id 32318, seq 1, length 64
11:51:48.652842 IP 172.16.0.148 > 172.16.0.148: ICMP echo request, id 32318, seq 1, length 64

The Neutron virtual router routes this packet back to the original VM, and you can see the NATed packet reaching the interface. Note how I selected only incoming packets in tcpdump using -Q in:

root@vm-instance:~# tcpdump -n -i eth0 -Q in icmp
11:51:48.650504 IP 172.16.0.148 > 172.16.0.148: ICMP echo request, id 32318, seq 1, length 64

And here is the thing. That packet can’t be routed by the VM:

root@vm-instance:~# ip route get 172.16.0.148 from 172.16.0.148 iif eth0
RTNETLINK answers: Invalid argument

This is known as a martian packet and you can actually see the kernel complaining if you turn on martian packet logging:

root@vm-instance:~# sysctl net.ipv4.conf.all.log_martians=1
root@vm-instance:~# dmesg -T | tail -2
[Tue Mar 19 12:16:26 2019] IPv4: martian source 172.16.0.148 from 172.16.0.148, on dev eth0
[Tue Mar 19 12:16:26 2019] ll header: 00000000: fa 16 3e d9 29 75 fa 16 3e ae f5 88 08 00        ..>.)u..>.....

The problem is that, for a local IP address, we receive a packet with the same source and destination IPv4 addresses but different source and destination MAC addresses. From the network stack's point of view that is nonsense, unless it is configured otherwise. If one wants to instruct the network stack to allow this, the fix is pretty easy:

root@vm-instance:~# sysctl net.ipv4.conf.all.accept_local=1

Now, ping from the VM to the floating IP works:

root@vm-intance:~# ping 185.15.56.55
PING 185.15.56.55 (185.15.56.55) 56(84) bytes of data.
64 bytes from 172.16.0.148: icmp_seq=1 ttl=64 time=0.202 ms
64 bytes from 172.16.0.148: icmp_seq=2 ttl=64 time=0.228 ms
^C
--- 185.15.56.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.202/0.215/0.228/0.013 ms

And ip route reports it correctly:

root@vm-intance:~# ip route get 172.16.0.148 from 172.16.0.148 iif eth0
local 172.16.0.148 from 172.16.0.148 dev lo 
    cache <local>  iif eth0

You can read more about all the networking-related sysctl settings in the Linux kernel docs. In particular, this one:

accept_local - BOOLEAN
	Accept packets with local source addresses. In combination with
	suitable routing, this can be used to direct packets between two
	local interfaces over the wire and have them accepted properly.
	default FALSE
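
Note that a runtime sysctl setting does not survive a reboot. A minimal sketch for making it persistent (the file name under /etc/sysctl.d/ is just a choice):

root@vm-instance:~# echo 'net.ipv4.conf.all.accept_local = 1' > /etc/sysctl.d/90-accept-local.conf
root@vm-instance:~# sysctl --system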

The Cloud VPS service offered by the Wikimedia Foundation is an open project, open to use by anyone connected with the Wikimedia movement, and we encourage the community to work with us in improving it. It is open to collaboration as well, including technical and engineering contributions, and you are welcome to contribute to this or any of the many other collaborative efforts in this global movement.

Planet Debian: Ian Jackson: Pandemic Rising Tide - a new board design

As I wrote previously (link added here):
[personal profile] ceb gave me the board game Pandemic Rising Tide for Christmas. I like it a lot. However, the board layout, while very pretty and historically accurate, is awkward for play. I decided to produce a replacement board design, with a schematic layout.

This project is now complete at last! Not only do I have PDFs ready for printing on a suitable printer, but I have made a pretty good properly folding actual board.

Why a new board design

The supplied board is truly a work of art. Every wrinkle in the coastline and lots of details of boundaries of various parts of the Netherlands are faithfully reproduced.

To play the game, though, it is necessary to see quickly which "squares" (faces of the boundary graph; the rules call them regions) are connected to which others, and what the fastest walking route is, and so on. Also one places dyke tokens - small brown sticks - along some of the edges; it is often necessary to quickly see whether a face has any dykes on any of its edges, or whether there is a dyke between two adjacent faces.

This is hard to do on the original board. There has been at least one forum thread about it, and one player shared their modifications involving pipe cleaners and glue!

Results - software, and PDFs

Much of the work in this project was producing the image to go on the board - in particular, laying out the graph was quite hard and involved shaving a number of yaks. (I'll be posting properly about my planar graph layout tool too.)

In case you like my layout, I have published a complete set of PDFs suitable for printing out yourself. There's a variety depending on what printer you are going to use. See the README.txt in that directory for details.

Of course the source code is available too. (Building it is not so easy - see the same README for details.)

Results - physical board

I consulted with [personal profile] ceb who had very useful bookbinding expertise and gave copious and useful advice, and also very kindly let me use some of their supplies. I had a local print shop print out a suitable PDF on their excellent A1 colour laserprinter, with very good results. (The photos below don't do justice to the colour rendering.)

The whole board is backed with bookcloth (the cloth which is used for the spines of hardback books), and that backing forms one of the two hinges. The other hinge is a separate piece of bookcloth on the top face. Then on top of that is the actual board image sheet, all put on in one go (so it all aligns correctly) and then cut along the "convex" hinge after the glue was dry.

I did some experiments to get the hang of the techniques and materials, and to try out a couple of approaches. Then I wrote myself a set of detailed instruction notes, recalculated the exact sizes, and did a complete practice run at 1/sqrt(8) scale. That served me well.

The actual construction took most of a Saturday afternoon and evening, and then the completed board had to be pressed for about 48h while it dried, to stop it warping.

There was one part that it wasn't really practical to practice: actually pasting a 624 x 205mm sheet of 120gsm paper, covered in a mixture of PVA and paste, onto a slightly larger arrangement of boards, is really quite tricky to do perfectly - even if you have a bookbinder on hand to help with another pair of hands. So if you look closely at my finished article you can see some blemishes. But, overall, I am pleased.

Pictures

If you just want to admire my board design, you can look at this conveniently sized PDF. I also took some photographs. But, for here, a taster:





Planet Debian: Steinar H. Gunderson: RC-bugginess

The RMs very helpfully unblocked my Nageru upload so that a bunch of Futatabi bugfixes could go to buster. I figured out this was a good time to find a long-standing RC bug to debug and fix in return.

(Granted, I didn't upload yet, so the bug isn't closed. But a patch should go a long way.)

Only 345 to go…

Planet Debian: Lucas Nussbaum: Call for help: graphing Debian trends

It has been raised in various discussions how difficult it is to make large-scale changes in Debian.

I think that one part of the problem is that we are not very good at tracking those large-scale changes, and I’d like to change that. A long time ago, I did some graphs about Debian (first in 2011, then in 2013, then again in 2015). An example from 2015 is given below, showing the market share of packaging helpers.

Those were generated using a custom script. Since then, classification tags were added to lintian, and I'd like to institutionalize that a bit, to make it easier to track more trends in Debian, and maybe motivate people to switch to new packaging standards. This could include stuff like VCS used, salsa migration, debhelper compat levels, patch systems and source formats, but also stuff like systemd unit files vs traditional init scripts, hardening features, etc. The process would look like:

  1. Add classification tags to lintian for relevant stuff (maybe starting with being able to regenerate the graphs from 2015).
  2. Use lintian to scan all packages on snapshot.debian.org, which stores all packages ever uploaded to Debian (well, since 2005), and generate a dataset
  3. Generate nice graphs

Given my limited time available for Debian, I would totally welcome some help. I can probably take care of the second step (I actually did it recently on a subset of packages to check feasibility), but I would need:

  • The help of someone with Perl knowledge, willing to modify lintian to add additional classification tags. There's no need to be a Debian Developer, and lintian has an extensive test suite, which should make it quite fun to hack on. The code could either be integrated in lintian, or live in a lintian fork that would only be used to generate this data.
  • Ideally (but that’s less important at this stage), the help of someone with web skills to generate a nice website.

Let me know if you are interested.

Planet Debian: Bits from Debian: DebConf19 registration is open!

DebConf19 banner open registration

Registration for DebConf19 is now open. The event will take place from July 21st to 28th, 2019 at the Central campus of Universidade Tecnológica Federal do Paraná - UTFPR, in Curitiba, Brazil, and will be preceded by DebCamp, from July 14th to 19th, and an Open Day on the 20th.

DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey, to seasoned Debian Developers and active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support and many others. In other words, all are welcome.

To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organisers have a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the Conference as soon as you know if you will be able to come. The last day to confirm or cancel is June 14th, 2019 23:59:59 UTC. If you don't confirm or you register after this date, you can come to DebConf19, but we cannot guarantee availability of accommodation, food and swag (t-shirt, bag…).

For more information about registration, please visit Registration Information

Bursary for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be allocated:

  • To active Debian contributors.
  • To promote diversity: newcomers to Debian and/or DebConf, especially from under-represented communities.

Giving a talk, organizing an event or helping during DebConf19 is taken into account when deciding upon your bursary, so please mention them in your bursary application. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

For more information about bursaries, please visit Applying for a Bursary to DebConf

Attention: registration for DebConf19 will remain open until the Conference, but the deadline to apply for bursaries using the registration form is April 15th, 2019 23:59:59 UTC. This deadline is necessary so that the organisers have time to analyze the requests, and so that successful applicants have time to prepare for the conference.

To register for the Conference, either with or without a bursary request, please visit: https://debconf19.debconf.org/register

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Infomaniak and Google. DebConf19 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

Planet Debian: Jonathan Carter: GitLab and Debian

As part of my DPL campaign, I thought that I’d break out a few items out in blog posts that don’t quite fit into my platform. This is the first post in that series.

When Debian was hunting for a new VCS-based collaboration suite in 2017, the administrators of the then current platform, called Alioth (which was a FusionForge instance) strongly considered adopting Pagure, a git hosting framework from Fedora. I was a bit saddened that GitLab appeared to be losing the race, since I’ve been a big fan of the project for years already. At least Pagure would be a huge improvement over the status quo and it’s written in Python, which I considered a plus over GitLab, so at least it wasn’t going to be all horrible.

The whole discussion around GitLab vs Pagure turned out to be really fruitful though. GitLab did some introspection around its big non-technical problems, especially concerning their contributor licence agreement, and made some major improvements which made GitLab a lot more suitable for large free software projects, which shortly led to its adoption by both the Debian project and the Gnome project. I think it's a great example of how open communication and engagement can help reduce friction and make things better for everyone. GitLab has since become even more popular and is now the de facto self-hosted git platform across all types of organisations.

Fun fact: I run a few GitLab instances myself, and often get annoyed with all my tab favicons looking the same, so the first thing I do is create a favicon for my GitLab instances. I'm also the creator of the favicon for salsa.debian.org: it's basically the GitLab logo re-arranged and mangled into a crude representation of the Debian swirl:

The move to GitLab had some consequences that I'm not sure were completely intended. For example, across the project, we used to use a whole bunch of different version control systems (git, bzr, mercurial, etc), but since GitLab only supports git, it has made git the gold standard in Debian too. For better or worse, I do think that it makes it easier for new contributors to get involved since they can contribute to different teams without having to learn a different VCS for each one.

I don't think it's a problem that some teams don't use salsa (or even git for that matter), but within salsa we have quite a number of team-specific workflows that I think could be documented a lot better; in doing so, we might also merge some of the cases so that things are more standardised.

When I started working on my DPL platform, I pondered whether I should host my platform in a git repository. I decided to go ahead and do so because it would make me more accountable since any changes I make can be tracked and tagged.

I also decided to run for DPL rather late, and prepared my platform under some pressure, making quite a few mistakes. In another twist of unintended consequences for using git, I woke up this morning with a pleasant surprise of 2 merge requests that fixed those mistakes.

I think GitLab is the best thing that has happened to Debian in a long time, and I think whoever becomes DPL should consider making both git and salsa.debian.org a regular piece of the puzzle for new processes that are put in place. Git is becoming so ubiquitous that, over time, it won't even be something that the average person needs to learn when getting involved in Debian, and it makes sense to embrace it.

Planet Debian: Jan Wagner: HAProxy - a journey into multithreading (and SSL)


I'm running some load balancers which are using HAProxy to distribute HTTP traffic to multiple systems.

While using SSL with HAProxy has been possible for some time now, it wasn't in the early days. So for some customers which needed encryption, we decided to offload it to Apache.
When HAProxy later gained SSL support, keeping this setup still had benefits for larger sites, because HAProxy used a single-process model and encryption is far more resource-consuming.
Using Apache for SSL offloading was still a good choice because it comes with the threading-capable worker and event Multi-Processing Modules. We chose the event MPM because it should deal better with the 'keep alive problem' in HTTP. So far so good.

Last year some large setups started to have trouble accepting new connections out of the blue. Unfortunately I found nothing in the logs and also couldn't reproduce this behaviour. After some time I decided to try another Apache MPM and switched over to the worker model. And guess what ... the connection issues vanished.
Some days later, to my surprise, I learned about the Apache bug in the Debian BTS "Event MPM listener thread may get blocked by SSL shutdowns", which was an exact description of my problem.
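
For reference, switching MPMs on Debian is just a matter of toggling modules, roughly like this (a sketch, assuming Debian's stock Apache packaging):

# a2dismod mpm_event
# a2enmod mpm_worker
# systemctl restart apache2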

While back in safe waters, I thought it would be good to have a look at HAProxy again, and learned that threading support was added in version 1.8 and received further improvements in 1.9.
So we started to look into it on a system with a couple of real CPUs:

# grep processor /proc/cpuinfo | tail -1
processor	: 19

At first we needed to install a newer version of HAProxy, since 1.8.x is available via backports but 1.9.x is only available via haproxy.debian.net. I thought I should start with a simple configuration and keep 2 spare CPUs for other tasks:

global
        # one process
        nbproc 1
        # 18 threads
        nbthread 18
        # mapped to the first 18 CPU cores
        cpu-map auto:1/1-18 0-17

Now let's start:

# haproxy -c -V -f /etc/haproxy/haproxy.cfg
# service haproxy reload
# pstree haproxy
No processes found.
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [NOTICE] 078/130651 (22156) : New worker #1 (22157) forked
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)

Okay .. cool! ;) So I started lowering the number of CPUs used, since without threading I did not experience segfaults. With 17 threads it seemed to be better:

# service haproxy restart
# pstree haproxy
haproxy---16*[{haproxy}]
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)
Mar 20 13:14:33 lb13 haproxy[27001]: [NOTICE] 078/131433 (27001) : New worker #1 (27002) forked

Now I started to move traffic from Apache to HAProxy slowly, watching the logs carefully. As more and more traffic shifted over, the number of SSL handshake failure entries went up. While it was possible these were just some clients not supporting our ciphers and/or TLS versions, I had my doubts, even though our own monitoring was unsuspicious. So I started to have a look at external monitoring, and after some time I caught some interesting errors:

error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac

The last time I had issues I lowered the thread count, so I did this again. And as you might have guessed already, this worked out. With 12 threads I had no issues anymore:

global
        # one process
        nbproc 1
        # 12 threads
        nbthread 12
        # mapped to the first 12 CPU cores (with more then 17 cpus haproxy segfaults, with 16 cpus we have a high rate of ssl errors)
        cpu-map auto:1/1-12 0-11

So we got rid of SSL offloading and the proxy on localhost, with the downside that HAProxy fails 1 of 146 h2spec tests (h2spec is a conformance testing tool for HTTP/2 implementations), whereas Apache did not fail a single test.

Cryptogram: Zipcar Disruption

This isn't a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: "an outage experienced by a third party telecommunications vendor disrupted connections between the company's vehicles and its reservation software."

That didn't just mean people couldn't get cars they reserved. Sometimes it meant they couldn't get the cars they were already driving to work:

Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.

"We were just waiting and waiting for the call back," he said.

Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.

Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.

This is a reliability issue that turns into a safety issue. Systems that touch the direct physical world like this need better fail-safe defaults.

Planet Debian: Antoine Beaupré: Securing registration email

I've been running my own email server basically forever. Recently, I've been thinking about possible attack vectors against my personal email. There's of course a lot of private information in that email address, and if someone manages to compromise my email account, they will see a lot of personal information. That's somewhat worrisome, but there are possibly more serious problems to worry about.

TL;DR: if you can, create a second email address to register on websites, and use stronger protections on that account than on your regular mail.

Hacking accounts through email

Strangely, what keeps me up at night is more the kind of damage an attacker could do to other accounts I hold with that email address. Because basically every online service is backed by an email address, if someone controls my email address, they can do a password reset on every account I have online. In fact, some authentication systems have just given up on passwords altogether and use the email system itself for authentication, essentially using the "password reset" feature as the authentication mechanism.

Some services have protections against this: for example, GitHub requires a 2FA token when doing certain changes, which the attacker hopefully wouldn't have (although phishing attacks have been getting better at bypassing those protections). Other services will warn you about the password change, which might be useful, except the warning is usually sent... to the hacked email address, which doesn't help at all.

The solution: a separate mailbox

I had been using an extension (anarcat+register@example.com) to store registration mail in a separate folder for a while already. This allows me to bypass greylisting on the email address, for one. Greylisting is really annoying when you register on a service or do a password reset... The extension also allows me to sort those annoying emails in a separate folder automatically with a simple Sieve rule.

More recently, I have been forced to use a completely different email alias (register@example.com) on some services that dislike having plus signs (+) in email addresses, even though they are perfectly valid. That got me thinking about the security problem again: if I have a different alias, why not make it a completely separate account and harden that against intrusion? With a separate account, I could enforce things like SSH-only access or 2FA that would be inconvenient for my main email address when I travel, because I sometimes log into webmail, for example. Because I don't frequently need access to registration mail, it seemed like a good tradeoff.

So I created a second account, with a locked password and SSH-only authentication. That way the only way someone can compromise my "registration email" is by hacking my physical machine or the server directly, not by just bruteforcing a password.

Now of course I need to figure out which sites I'm registered on with a "non-registration" email (anarcat@example.com): before I thought of using the register@ alias, I sometimes used my normal address instead. So I'll have to track those down and reset those. But it seems I already blocked a large attack surface with a very simple change and that feels quite satisfying.

Implementation details

Using syncmaildir (SMD) to sync my email, the change was fairly simple. First I need to create a second SMD profile:

if [ $(hostname) = "marcos" ]; then
    exit 1
fi

SERVERNAME=smd-server-register
CLIENTNAME=$(hostname)-register
MAILBOX_LOCAL=Maildir/.register/
MAILBOX_REMOTE=Maildir
TRANSLATOR_LR="smd-translate -m move -d LR register"
TRANSLATOR_RL="smd-translate -m move -d RL register"
EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

Very similar to the normal profile, except mails get stored in the already existing Maildir/.register/ and different SSH profile and translation rules are used. The new SSH profile is basically identical to the previous one:

# wrapper for smd
Host smd-server-register
    Hostname imap.anarc.at
    BatchMode yes
    Compression yes
    User register
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_ed25519_smd

Then we need to ignore the register folder in the normal configuration:

diff --git a/.smd/config.default b/.smd/config.default
index c42e3d0..74a8b54 100644
--- a/.smd/config.default
+++ b/.smd/config.default
@@ -59,7 +59,7 @@ TRANSLATOR_RL="smd-translate -m move -d RL default"
 # EXCLUDE_LOCAL="Mail/spam Mail/trash"
 # EXCLUDE_REMOTE="OtherMail/with%20spaces"
 #EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
-EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/* Maildir/.register/*"
 #EXCLUDE_LOCAL="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="$MAILBOX_REMOTE/.notmuch/hooks/* $MAILBOX_REMOTE/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="Maildir/Koumbit Maildir/Koumbit* Maildir/Koumbit/* Maildir/Koumbit.INBOX.Archives/ Maildir/Koumbit.INBOX.Archives.2012/ Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

And finally we add the new profile to the systemd services:

diff --git a/.config/systemd/user/smd-pull.service b/.config/systemd/user/smd-pull.service
index a841306..498391d 100644
--- a/.config/systemd/user/smd-pull.service
+++ b/.config/systemd/user/smd-pull.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-pull --show-tags
+ExecStart=/usr/bin/smd-pull --show-tags register

 [Install]
 WantedBy=multi-user.target
diff --git a/.config/systemd/user/smd-push.service b/.config/systemd/user/smd-push.service
index 10d53c7..caa588e 100644
--- a/.config/systemd/user/smd-push.service
+++ b/.config/systemd/user/smd-push.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-push --show-tags
+ExecStart=/usr/bin/smd-push --show-tags register

 [Install]
 WantedBy=multi-user.target

That's about it on the client side. On the server, the user is created with a locked password and the mailbox moved over:

adduser --disabled-password register
mv ~anarcat/Maildir/.register/ ~register/Maildir/
chown -R register:register Maildir/

The SSH authentication key is added to .ssh/authorized_keys, and the alias is reversed:

--- a/aliases
+++ b/aliases
@@ -24,7 +24,7 @@ spamtrap: anarcat
 spampd: anarcat
 junk: anarcat
 devnull: /dev/null
-register: anarcat+register
+anarcat+register: register

 # various sandboxes
 anarcat-irc: anarcat

... and the email is also added to /etc/postgrey/whitelist_recipients.

That's it: I now have a hardened email service! Of course there are other ways to harden an email address. On-disk encryption comes to mind, but that only works with password-based authentication from what I understand, which is something I want to avoid in order to prevent bruteforce attacks.

Your advice and comments are of course very welcome, as usual

Planet Debian: Michal Čihař: translation-finder 1.1

The translation-finder module has been released in version 1.1. It is used by Weblate to detect translatable files in the repository, making the setup of translation components in Weblate much easier. This release brings a lot of improvements based on feedback from our users, making the detection more reliable and accurate.

Full list of changes:

  • Improved detection of translation with full language code.
  • Improved detection of language code in directory and file name.
  • Improved detection of language code separated by full stop.
  • Added detection for app store metadata files.
  • Added detection for JSON files.
  • Ignore symlinks during discovery.
  • Improved detection of matching pot files in several corner cases.
  • Improved detection of monolingual Gettext.

Filed under: Debian English SUSE Weblate

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #203

Here’s what happened in the Reproducible Builds effort between Sunday March 10 and Saturday March 16 2019:

Don't forget that Reproducible Builds is part of the May/August 2019 round of Outreachy, which offers paid internships to work on free software. Internships are open to applicants around the world and are paid a stipend for the three-month internship, with an additional travel stipend to attend conferences. So far, we have received more than ten initial requests from candidates, and the closing date for applications is April 2nd. More information is available on the application page.

Packages reviewed and fixed, and bugs filed

strip-nondeterminism

strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb:

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, the following changes were made:

  • Alexander Couzens (OpenWrt support):
    • Correct the arguments for the reproducible_openwrt_package_parser script. []
    • Copy over Package-* files when building. []
    • Fix the Packages.manifest parser. [] []
  • Mattia Rizzolo:

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Cory Doctorow: NYC! I'm coming to The Strand tonight at 7PM with my new book RADICALIZED! Next up: Toronto, Chicago, San Francisco…

Thanks to everyone who came to last night’s launch event at San Diego’s Mysterious Galaxy! The next stop on my tour is an event at 7PM at The Strand in NYC where I’ll be appearing with the award-winning investigative journalist Julia Angwin, who is pinch-hitting for Anand Giridharadas, who has had a family emergency.

Tomorrow night, I’ll be appearing at the Toronto Metro Reference Library at 7PM, with the Globe & Mail’s Barry Hertz; from there, I go to Chicago’s C2E2 festival and then to Berkeley for an event with the writer and photographer Richard Kadrey, and from there, the tour takes me to Portland/Ft Washington, Seattle, and Anaheim! I hope you’ll come out and say hi!

Planet Debian: Jonathan Dowland: Learning new things about my old Amiga A500

This is the sixth part in a series of blog posts. The previous post was glitched Amiga video. The next post is First successful Amiga disk-dumping session.

Sysinfo output for my A500

I saw a tweet from Sophie Haskins who is exploring her own A500 and discovered that it had an upgraded Agnus chip. The original A500 shipped with a set of chips which are referred to as the Original Chip Set (OCS). The second generation of the chips were labelled Enhanced Chip Set (ECS). A500s towards the end of their production lifetime were manufactured with some ECS chips instead. I had no idea which chipset was in my A500, but Sophie's tweet gave me a useful tip: she was using some software called Sysinfo to enumerate what was going on. I found an ADF disk image that included Sysinfo ("LSD tools") and gave it a try. To my surprise, my Amiga has an ECS "AGNUS" chip too!

I originally discovered Sophie due to her Pizzabox Computer project: An effort to acquire, renovate and activate a pantheon of vintage "pizzabox" form-factor workstation computers. I once had one of these, the Sun SPARCStation 10, but it's long since gone. I'm mildly fascinated to learn more about some of these other machines. After proofreading Fabien Sanglard's DOOM book, I was interested to know more about NeXTstations, and Sophie is resurrecting a NeXTstation mono, but there are plenty of other interesting esoteric things on that site, such as Apple A/UX UNIX on a Quadra 610 (the first I'd heard of both Apple's non-macOS UNIX, and their pizzabox form-factor machines).

Planet Debian: Jonathan Dowland: First successful Amiga disk-dumping session

This is the seventh part in a series of blog posts. The previous post was Learning new things about my old Amiga A500.

[X-COPY](http://jope.fi/xcopy/) User Interface

[Totoro](https://en.wikipedia.org/wiki/My_Neighbor_Totoro) Soot Sprites?

"Cyberpunk" party invitation

My childhood home

[HeroQuest](https://en.wikipedia.org/wiki/HeroQuest) board game guide

I've finally dumped some of my Amiga floppies, and started to recover some old files! The approach I'm taking is to use the real Amiga to read the floppies (in the external floppy disk drive) and then copy them onto a virtual floppy disk image on the Gotek Floppy Emulator. I use X-COPY to perform the copy (much as I would have done back in 1992).

FlashFloppy's default mode of operation is to scan over the filesystem on the attached USB and assign a number to every disk image that it discovers (including those in sub-folders). If your Gotek device has the OLED display, then it reports the path to the disk image to you; but I have the simpler model that simply displays the currently selected disk slot number.

For the way I'm using it, its more basic "indexed" mode fits better: you name files in the root of the USB's filesystem using a sequential scheme starting at DSKA0000.ADF (which corresponds to slot 0) and it's then clear which image is active at any given time. I set up the banks with Workbench, X-COPY and a series of blank floppy disk images to receive the real contents, which I was able to generate using FS-UAE (they aren't just full of zeroes).
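
In shell terms, setting up those slots amounts to something like this sketch (blank.adf stands in for a pre-generated blank image; the DSKAnnnn.ADF names follow FlashFloppy's indexed naming scheme, and the slot range is arbitrary):

for i in $(seq 2 19); do
    cp blank.adf "$(printf 'DSKA%04d.ADF' "$i")"
done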

A few weeks ago I had a day off work and spent an hour in the morning dumping floppies. I managed to dump around 20 floppies successfully, with only a couple of unreadable disks (from my collection of 200). I've prioritised home-made disks, in particular ones that are likely to contain user-made content rather than just copies of commercial disks. But in some cases it's hard to know for sure what's on a disk, and sometimes I've made copies of e.g. Deluxe Paint and subsequently added home-made drawings on top.

Back on my laptop, FS-UAE can quite happily read the resulting disk images, and Deluxe Paint IV via FS-UAE can happily open the drawings that I've found (and it was a lot of fun to fire up DPaint for the first time in over 20 years. This was a really nice piece of software. I must have spent days of my youth exploring it).

I tried a handful of user-mode tools for reading the disk images (OFS format) but they all had problems. In the end I just used the Linux kernel's AFFS driver and loop-back mounts. (I could have looked at libguestfs instead).
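
Something along these lines does the job (a sketch, assuming the affs kernel module is available and disk.adf is one of the dumped images):

sudo mount -t affs -o loop,ro disk.adf /mnt
ls /mnt
sudo umount /mnt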

To read Deluxe Paint image files on a modern Linux system one can use ImageMagick (via netpbm back-end) or ffmpeg. ffmpeg can also handle Deluxe Paint animation files, but more care is needed with these: It does not appear to correctly convert frame durations, setting the output animations to a constant 60fps. Given the input image format colour depth, it's tempting to output to animated GIF, rather than a lossy video compression format, but from limited experimentation it seems some nuances of the way that palettes are used in the source files are not handled optimally in the output either. More investigation here is required.
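
For example (illustrative file names only; the stills are IFF ILBM, the animations IFF ANIM):

convert drawing.iff drawing.png      # ImageMagick, via its netpbm delegate
ffmpeg -i drawing.anim drawing.gif   # subject to the frame-duration caveat above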

Enjoy a selection of my childhood drawings…

Cryptogram: An Argument that Cybersecurity Is Basically Okay

Andrew Odlyzko's new essay is worth reading -- "Cybersecurity is not very important":

Abstract: There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This "chewing gum and baling wire" approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.

I am reminded of these two essays. And, as I said in the blog post about those two essays:

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

Planet Debian: Jonathan Dowland: WadC 3.0

[blockmap.wl](https://redmars.org/wadc/examples/#_blockmap_wl) being reloaded (click for animation)

A couple of weeks ago I released version 3.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

3.0 introduces more flexible randomness with rand; two new test maps (blockmap and bsp) that demonstrate approaches to random dungeon generation; some useful data structures in the library; better Hexen support and a bunch of other improvements.

Check the release notes for the full details, and check out the gallery of examples to see the kind of things you can do.

Version 3.0 of WadC is dedicated to Lu (1972-2019). RIP.

Worse Than Failure: CodeSOD: Remove This

Denae inherited some 90s-era GUI code. The original developers have long since gone away, and the source control history has vanished into oblivion. In the 90s, the Standard Template Library for C++ was still very new, so when Denae needs to debug things in this application, it means doing some code archaeology and picking through home-brewed implementations of common data structures.

Denae's most recent bug involved understanding why some UI elements weren't updating properly. The basic logic the application used is that it maintained a List of GUI items which need to be repainted. So Denae dug into the List implementation.

template <class Type> void List<Type>::Remove(Type t)
{
	int i;
	for (i=(num-1); i>=0; i--)
	{
		if(element[i] == t)
		{
			break;
		}
	}
	LoudAssert(i>=0);
	DelIndex(i);
}

template <class Type> void List<Type>::DelIndex(int i)
{
	LoudAssert(i<num);
	num--;
	while(i<num)
	{
		element[i] = element[i+1];
		i++;
	}
}

Let’s start by talking about LoudAssert. Denae didn’t provide the implementation, but had this to say:

LoudAssert is an internally defined assert that is really just an fprintf to stderr. In our system stderr is silenced, always, so these asserts do nothing.

LoudAssert isn't an assert, in any meaningful way. It's a logging method which also doesn't log in production. Which means there's nothing that stops the Remove method from getting a negative index to remove (since it loops backwards) and passing it over to DelIndex. If you try and remove an item which isn't in the list, that's exactly what happens. And note how num, the number of items in the list, gets decremented anyway.

Denae noticed that this must be the source of the misbehaving UI updates when the debugger told her that the list of items contained -8 entries. She adds:

We have no idea how this ever worked, or what might be affected by fixing it, but it’s been running this way for over 20 years



Planet Debian: Neil McGovern: GNOME ED Update – February

Another update is now due from what we’ve been doing at the Foundation, and we’ve been busy!

As you may have seen, we’ve hired three excellent people over the past couple of months. Kristi Progri has joined us as Program Coordinator, Bartłomiej Piorski as a devops sysadmin, and Emmanuele Bassi as our GTK Core developer. I hope to announce another new hire soon, so watch this space…

There’s been quite a lot of discussion around the Google API access, and GNOME Online Accounts. The latest update is that I submitted the application to Google to get GOA verified, and we’ve got a couple of things we’re working through to get this sorted.

Events all round!

Although the new year’s conference season is just kicking off, it’s been a busy one for GNOME already. We were at FOSDEM in Brussels where we had a large booth, selling t-shirts, hoodies and of course, the famous GNOME socks. I held a meeting of the Advisory Board, and we had a great GNOME Beers event – kindly sponsored by Codethink.

We also had a very successful GTK Hackfest – moving us one step closer to GTK 4.0.

Coming up, we’ll have a GNOME booth at:

  • SCALEx17 – Pasadena, California (7th – 10th March)
  • LibrePlanet – Boston Massachusetts (23rd – 24th March)
  • FOSS North – Gothenburg, Sweden (8th – 9th April)
  • Linux Fest North West – Bellingham, Washington (26th – 28th April)

If you’re at any of these, please come along and say hi! We’re also planning out events for the rest of the year. If anyone has any particularly exciting conferences we may not have heard of, please let us know.

Discourse

It hasn't yet been announced, but we're trialling an instance of Discourse for the GTK and Engagement teams. We're hopeful that this may replace mailman, but we're being quite careful to make sure that email integration continues to work. Expect more information about this in the coming month. If you want to go have a look, the instance is available at discourse.gnome.org

Sociological Images: Forging New Paths

The built environment reflects our social world. From urban streets that encourage neighborly relationships, to “hostile design” and policing practices that keep people out of public spaces, the physical structure of a space carries with it a whole set of assumptions about how people should interact in that space.

But social structure isn’t always just imposed on us by architects and city planners. It also invites the opportunity for improvisation and innovation to create new norms. A great example of this is the emergence of “desire paths“—the people-made paths that defy, or improve on, the work of urban design.

Source: Wikimedia Commons

Desire paths were the bulk of my commute for years without even realizing it. When you walk a lot, you start to see how much our neighborhoods aren’t built for this most basic kind of travel. It is fun to spot these paths along the way, because they show little pockets of collective action where people have found a better way to get from point A to B. Authors like to highlight how some universities, for example, even wait for desire paths to emerge and then pave them to fit students’ commuting routes.

That said, it is also important to pay attention to the limits that urban design, like all kinds of social structures, continues to impose in our communities. Research shows that walkability may only be weakly related to the social health of a neighborhood, since community cohesion takes more work than just putting people in the same space. Walkable neighborhoods also attract more drivers as people commute in to walk around to shops and restaurants. My alma mater, Michigan State University, paved a ton of desire paths in student neighborhoods across campus. It was great, but if you needed to get to the other side of campus for class in a pinch, there was still the matter of that big stadium complex in the way. Sometimes social improvement still takes a bit more conscious effort.

Desire paths at MSU
For all the desire paths, the fastest route to a freshman economics class crashes through stadiums, parking lots, and practice fields before falling into the river.
Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet Debian: Keith Packard: metro-snek

MetroSnek — snek on Metro M0 Express

When I first mentioned Snek a few months ago, Phillip Torrone from Adafruit pointed me at their Metro M0 board, which uses an Arduino-compatible layout but replaces the ATMega 328P with a SAMD21G18A. This chip is an ARM Cortex M0 part with 256kB of flash and 32kB of RAM. Such space!

Even though there is already a usable MicroPython port for this board, called CircuitPython, I figured it would be fun to get Snek running as well. The CircuitPython build nearly fills the chip, so the Circuit Python boards all include an off-chip flash part for storing applications. With Snek, there will be plenty of space inside the chip itself for source code, so one could build a cheaper/smaller version without the extra part.

UF2 Boot loader

I decided to leave the existing boot loader in place instead of replacing it with the AltOS version. This makes it easy to swap back to CircuitPython without needing any custom AltOS tools.

The Metro M0 Express boot loader is reached by pressing the reset button twice; it's pretty sweet in exposing a virtual storage device with a magic file, CURRENT.UF2, into which you write the ROM image. You write a UF2 formatted file to this name and the firmware extracts the data on the fly and updates the flash in the device. Very slick.
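
In practice, flashing is a single copy (a sketch; the volume name METROBOOT and the image name snek-metro-m0.uf2 are assumptions, so check what your bootloader actually exposes):

cp snek-metro-m0.uf2 /media/$USER/METROBOOT/CURRENT.UF2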

To make this work with AltOS, I had to adjust the start location of the operating system to 0x2000 and leave a bit of space at the end of ROM and RAM clear for the boot loader to use.

Porting AltOS

I already have an embedded operating system that works on Cortex M0 parts, AltOS, which I've been developing for nearly 10 years for use in rocketry and satellite applications. It's also what powers [ChaosKey](http://altusmetrum.org/ChaosKey/).

Getting AltOS running on another Cortex M0 part is a simple matter of getting clocks running and writing drivers.

What I haven't really settled on is whether to leave this code as a part of AltOS, or to pull the necessary bits into the Snek repository and doing a bare-metal implementation.

I've set up the Snek distribution to make integrating it into another operating system simple; that's how the NuttX port works, for instance. It does make the build process more complicated as you have to build and install Snek, then build AltOS for the target device.

SAMD21 Clocks

Every SoC has a different way of configuring and wiring clocks within the system. Most that I've used have a complex clock-tree that you plug various configuration values into to generate clocks for the processor and peripherals.

The SAMD21 is simpler in offering a set of general-purpose clock controllers that can source a variety of clock signals and divide them by an integer. The processor uses clock controller 0; all of the other peripherals can be configured to use any clock controller you like.

The Metro M0 express and Feather M0 express have only a 32.768kHz crystal; they don't have a nice even-MHz crystal connected to the high-speed oscillator. As a result, to generate a '48MHz' clock for the processor and USB controller, I ended up multiplying the 32.768kHz frequency by 1464 using a PLL to generate a 47.972352MHz signal, which is about 0.06% low. Close enough for USB to work.
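
A quick sanity check of those numbers from a shell:

$ echo $((32768 * 1464))
47972352
$ echo 'scale=4; (48000000 - 47972352) * 100 / 48000000' | bc
.0576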

At first, I typo'd a register value leaving the PLL un-locked. The processor still ran fine, but when I looked at the clock with my oscilloscope, it was very ragged with a mean frequency around 30MHz. It took a few hours to track down the incorrect value, at which point the clock stabilized at about 48MHz.

SAMD21 USART

Next on the agenda was getting a USART to work; nothing terribly complicated there, aside from the clock problem mentioned above which generated a baud rate of around 6000 instead of 9600.

I like getting a USART working because it's usually (always?) easier than USB, plus demonstrates that clocking is working as expected. I can debug serial data with a simple logic analyzer. This time, the logic analyzer is how I discovered the clocking issue -- a bit time of 166µs does not equal 9600 baud.
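
The arithmetic behind that observation, as a tiny C++ check (numbers only, not driver code):

#include <cstdio>

int main()
{
    const double expected_bit_us = 1e6 / 9600.0;          // ~104.2 µs per bit at 9600 baud
    const double measured_bit_us = 166.0;                 // what the logic analyzer showed
    const double implied_baud    = 1e6 / measured_bit_us; // ~6024 baud

    std::printf("expected %.1f us/bit, measured %.1f us/bit -> about %.0f baud\n",
                expected_bit_us, measured_bit_us, implied_baud);
    return 0;
}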

SAMD21 USB

While I like having USB on-chip in the abstract, the concrete adventure of implementing USB for a new chip is always fraught with peril. In this case, the chip documentation was missing a couple of key details that I had to discover experimentally.

I'm still trying to come up with an abstraction for writing USB drivers for small systems; every one is different enough that I keep using copy&paste instead of building a driver core on top of hardware-specific primitives. In this case, the USB driver is 883 lines of code; the second shortest in AltOS with the ATMega32u4 driver being slightly smaller.

TODO

The only hardware that works today is one USART and USB. I also got Snek compiled and running. Left to do:

  • Digital GPIO controls. I've got basic GPIO functionality available in the underlying operating system, but it isn't exposed through Snek yet.

  • Analog outputs. This will involve hooking timers to outputs so that we can PWM them.

  • Analog inputs. That requires getting an ADC driver written and then hooking that into Snek.

  • On-board source storage. I think the ATMega model of storing just one set of source code on the device and reading that at boot time is clean and simple, so I want to do the same here. I think it will be simpler to use the on-chip flash instead of the external flash part. That means reserving a specific chunk of that for source code.

  • Figure out whether this code is part of AltOS, or part of Snek.

Links

Cory DoctorowSan Diego! I’m coming to town tonight with my new book Radicalized! (next: NYC, Toronto, Chicago…)

Thanks to the folks who came to last night’s LA launch for Radicalized, my latest book of science fiction for adults; I’m about to hop a train to San Diego for an event tonight at Mysterious Galaxy at 7:30 PM. From there, the tour takes me to NYC on Wednesday (The Strand, 7PM); then Toronto on Thursday (Metro Reference Library, 7PM, with Barry Hertz), and then to Chicago for events at C2E2. From there, I head to San Francisco, Fort Vancouver, WA (Portland, essentially!), Seattle and Anaheim. Looking forward to seeing you!

CryptogramTriton

Good article on the Triton malware which targets industrial control systems.

Worse Than FailureExceptionally Serial

You may remember Kara, who recently found some "interesting" serialization code. Now, this code happens to be responsible for sending commands to pieces of machine equipment.

Low-level machine interfaces remain one of the domains where serial protocols rule. Serial communications use simple hardware and have minimal overhead, and something like RS232 has been in real-world use since the 60s. Sure, it's slow, sure it's not great at coping with noise, sure you have to jump through some hoops if you want a connection longer than 15m, but its failures are well understood.

Nothing is so well understood that some developer can't make a mess of it.

Public Function SendCommand(ByVal cmd As String) As String
    Dim status As Integer
    ' Write cmd to the serial port using a protocol that is too painful to reproduce here.
    ' status receives an appropriate value along the way as the protocol checks for various error
    ' conditions including timeout
    If status <> 0 Then Throw MakeComPortException(status)
End Function

Private Function MakeComPortException(ByVal status As Integer) As ComPortException
    Dim code As Integer
    Dim message As String = Nothing
    GetErrorCode(status, code, message)
    Return New ComPortException(code, message)
End Function

Private Sub GetErrorCode(ByVal ErrorNum As Integer, ByRef code As Integer, ByRef message As String)
    code = ErrorNum
    Select Case ErrorNum
        Case 129 : message = "Hardware error occured during Send Data" ' Talk Error'
        Case 130 : message = "System asked to talk but did not recieve Previous Talk Command" ' Nothing to say
        Case 131 : SendCommand("ERRMS?")
            Dim EMsg As String = GetResponse()
            Dim EmsgStart As Integer = EMsg.IndexOf(" (")
            Try
                If EMsg.Contains("ERR=") Then code = CInt(EMsg.Substring(4, EmsgStart - 4))
            Catch ex As Exception
            End Try
            message = EMsg.Substring(EmsgStart)
        Case 132 : message = "H/W Error while system trying to accept data" 'Listen Error
        Case 133 : message = "More than 80 characters received before term char"
        Case 134 : message = "Archive media is full"
        Case 135 : message = "Listen state interrupted by ESC key" ' Interrupted from keyboard
        Case 136 : message = "Listen state interrupted by controller sending '*'" ' Interrupted by Controller
        Case 137 : message = "Error Occured in UART"
        Case StatusCodes.PortDeviceNotFoundErrorCode : message = "No device Found"
        Case StatusCodes.PortTimeoutErrorCode : message = "COM port Timeout Error"
        ' This next occurs if cable is unplugged at controller
        Case StatusCodes.PortDisconnectedErrorCode : message = "Serial cable disconnected"
        Case Else : message = "Error #: " & ErrorNum
    End Select
End Sub

So, the SendCommand method takes a string and passes it down the serial port. The protocol details were elided here, but we know that we receive a status number. MakeComPortException takes that number and helpfully looks up the message which goes with it, using GetErrorCode.

GetErrorCode is one gigantic switch statement. And let's pay close attention to the message lookup process for error 131. You'll note that we call SendCommand to ask the remote device to tell us what the error message was. But in some cases, it's going to reply to that request with an error code. Error 131, to be exact.

So if we trace this: we call SendCommand which gets a 131 error, which forces it to throw the results of calling MakeComPortException, which calls GetErrorCode, which calls SendCommand, which throws a new MakeComPortException, which…

There's an interesting side effect of this approach. Despite looking like a series of recursive calls, throw unwinds the stack, so this code will never actually trigger a stack overflow. It's actually more of an exception-assisted infinite loop.

For a bonus, note the PortTimeoutErrorCode entry. On the hardware side, they use a custom serial cable which wires a loopback on the RS232 "Ready to Send" and "Clear to Send" pins, which the software uses to detect that the cable is unplugged. It also has the side effect of ensuring that no off-the-shelf RS232 cables will work with the software. This is either a stupid mistake, or a fiendishly clever way to sell heavily marked-up replacement cables.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianSteinar H. Gunderson: When your profiler fools you

If you've been following my blog, you'll know about Nageru, my live video mixer, and Futatabi, my instant replay program with slow motion. Nageru and Futatabi both work on the principle that the GPU should be responsible for all the pixel pushing—it's just so much better suited than the CPU—but to do that, the GPU first needs to get at the data.

Thus, in Nageru, pushing the data from the video card to the GPU is one of the main CPU drivers. (The CPU also runs the UI, does audio processing, runs an embedded copy of Chromium if needed—we don't have full GPU acceleration there yet—and not the least encodes the finished video with x264 if you don't want to use Quick Sync for that.) It's a simple task; take two pre-generated OpenGL textures (luma and chroma) with an associated PBO, take the frame that the video capture card has DMAed into system RAM, and copy it while splitting luma from chroma. It goes about as fast as memory bandwidth will allow.

However, when you also want to run Futatabi, it runs on a different machine and also needs to get at the video somehow. Nageru solves this by asking Quick Sync (through VA-API) to encode it to JPEG, and then sending it over the network. Assuming your main GPU is an NVIDIA one (which gives oodles of more headroom for complicated things than the embedded Intel GPU), this means you now need an extra copy, and that's when things started to get hairy for me.

To simplify a long and confusing search, what I found was that with five inputs (three 1080p, two 720p), Nageru would be at around two cores (200% CPU in top), and with six (an additional 1080p), it would go to 300%. (This is on a six-core machine.) This made no sense, so I pulled up “perf record”, which said… nothing. The same routines and instructions were hot (mostly the memcpy into the NVIDIA GPU's buffers); there was no indication of anything new taking time. How could it be?

Eventually I looked at top instead of perf, and saw that some of the existing threads went from 10% to 30% CPU. In other words, everything just became uniformly slower. I had a hunch, and tried “perf stat -ddd” instead. One value stood out:

  3 587 503      LLC-load-misses           #   94,72% of all LL-cache hits     (71,91%)

Aha! It was somehow falling off a cliff in L3 cache usage. It's a bit like going out of RAM and starting to swap, just more subtle.

So, what was using so much extra cache? I'm sure someone extremely well-versed in perf or Intel VTune would figure out a way to look at all the memory traffic with PEBS or something, but I'm not a perf expert, so I went for the simpler choice: Remove things. E.g., if you change a memcpy to a memset(dst, 0, len), the GPU will still be encoding a frame, and you'll still be writing things to memory, but you won't be reading anymore. I noticed a really confusing thing: Occasionally, when I did less work, CPU usage would be going up. E.g., have a system at 2.4 cores used, remove some stuff, go up to 3.1 cores! Even worse, occasionally after start, it would be at e.g. 2.0 cores, then ten seconds later, it would be at 3.0. I suspected overheating, but even stopping and waiting ten minutes would not be enough to get it reliably down to 2.0 again.
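
To make the memcpy-to-memset substitution concrete, here is a minimal sketch of the experiment; the helper and buffer names are hypothetical, and Nageru's real upload path is more involved:

#include <cstring>
#include <cstdint>
#include <cstddef>

// Hypothetical stand-in for the real frame-upload path. Normally the captured
// frame is copied into the GPU's memory-mapped buffer; for the experiment the
// reads are dropped while the same number of bytes is still written.
void upload_frame(std::uint8_t *gpu_buf, const std::uint8_t *captured,
                  std::size_t len, bool writes_only_experiment)
{
    if (writes_only_experiment) {
        std::memset(gpu_buf, 0, len);         // writes, but no reads
    } else {
        std::memcpy(gpu_buf, captured, len);  // normal copy: reads and writes
    }
}

If CPU usage drops sharply in the memset variant, the reads (and their cache misses) were the expensive part.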

It eventually turned out (again through perf stat) that power saving was getting the better of me. I hadn't expected it to kick in, but seemingly it did, and top doesn't compensate in any way. Change from the default powersave governor to performance (I have electric heating anyway, and it's still winter up here), and tada... My original case was now down from 3.0 to 2.6 cores.

However, this still wasn't good enough, so I had to do some real (if simple) optimization. Some rearchitecting saved a copy; instead of the capture card receiver thread memcpy-ing into a buffer and the MJPEG encoder thread memcpy-ing it from there to the GPU's memory-mapped buffers, I let the capture card receiver memcpy it directly over instead. (This wasn't done originally due to some complications around locking and such; it took a while to find a reasonable architecture. Of course, when you look at the resulting patch, it doesn't look hard at all.)

This took me down from 3.0 to 2.1 cores for the six-camera case. Success! But still too much; the cliff was still there. It was pretty obvious that it was stalling on something in the six-camera case:

    8 552 429 486      cycle_activity.stalls_mem_any   # 5 inputs
   17 082 914 261      cycle_activity.stalls_mem_any   # 6 inputs

I didn't get a whole lot more L3 misses, though; they seemed to just get much slower (occasionally as much as 1200 cycles on average, which is about four times as much as normal). Unfortunately, you have zero insight into what the Intel GPU is doing to your memory bus, so it's not easy to know if it's messing up your memory or what. Turning off the actual encoding didn't help much, though; it was really the memcpy into the buffers that hurt. And since it was into uncached (write-combined) memory, doing things like non-temporal writes didn't help at all. Neither did combining the two copies into one loop (read once, write twice). Or really anything else I could think of.
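
For readers who haven't met non-temporal stores, the experiment mentioned above looks roughly like this; a simplified sketch, not Nageru's actual code, and it assumes 16-byte-aligned buffers with a length that is a multiple of 16:

#include <immintrin.h>   // _mm_load_si128, _mm_stream_si128, _mm_sfence
#include <cstddef>
#include <cstdint>

// Copy len bytes using non-temporal (streaming) stores, which bypass the
// cache on the write side.
void copy_streaming(std::uint8_t *dst, const std::uint8_t *src, std::size_t len)
{
    for (std::size_t i = 0; i < len; i += 16) {
        const __m128i v = _mm_load_si128(reinterpret_cast<const __m128i *>(src + i));
        _mm_stream_si128(reinterpret_cast<__m128i *>(dst + i), v);
    }
    _mm_sfence();   // make the streaming stores globally visible before returning
}

As noted above, it made no difference here, since the destination was already write-combined and thus uncached to begin with.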

Eventually someone on ##intel-media came up with a solution; instead of copying directly into the memory-mapped buffers (vaDeriveImage), copy into a VAImage in system RAM, and then let the GPU do the copy. For whatever reason, this helped a lot; down from 2.1 to 1.4 cores! And 0.6 of those cores were non-VA-API related things. So more patch and happiness all around.

At this point, I wanted to see how far I could go, so I borrowed more capture inputs and set up my not-so-trusty SDI matrix to create an eight-headed capture monster:

Capture rig

Seemingly after a little more tuning of freelist sizes and such, it could sustain eight 1080p59.94 MJPEG inputs, or 480 frames per second if you wanted to—at around three cores again. Now the profile was starting to look pretty different, too, so there were more optimization opportunities, resulting in this pull request (helping ~15% of a core). Also, setting up the command buffers for the GPU copy seemingly takes ~10% of a core now, but I couldn't find a good way of improving it. Most of the time now is spent in the original memcpy to NVIDIA buffers, and I don't think I can do much better than that without getting the capture card to peer-to-peer DMA directly into the GPU buffers (which is a premium feature you'll need to buy Quadro cards for, it seems). In any case, my original six-camera case now is a walk in the park (leaving CPU for a high-quality x264 encode), which was the goal of the exercise to begin with.

So, lesson learned: Sometimes, you need to look at the absolutes, because the relative times (which is what you usually want) can fool you.

Planet DebianJonathan Carter: Running for DPL

I am running for Debian Project Leader, my official platform is published on the Debian website (currently looks a bit weird, but a fix is pending publication), with a more readable version available on my website as well as a plain-text version.

Shortly after I finished writing the first version of my platform page, I discovered an old talk from Ian Murdock at Microsoft Research where he said something that resonated well with me, and I think also my platform. Paraphrasing him:

You don’t want design by committee, but you want to tap in to the wisdom of the crowd. Lay out a vision for solving the problem, but be flexible enough to understand that you don’t have all the answers yourself, that other people are equally if not more intelligent than you are, and collectively, the crowd is the most intelligent of all.

– paraphrasing of Ian Murdock during a talk at Microsoft Research.

I’m feeling oddly calm about all of this and I like it. It has been a good exercise taking the time to read what everyone has said across all the mediums and trying to piece together all the pertinent parts and form a workable set of ideas.

I’m also glad that we have multiple candidates, I hope that it will inspire some thought, ideas and actions around the topics that occupy our collective head space.

The campaign period remains open until 2019-04-06. During this period, you can ask me or any of the other candidates about how they envision their term as DPL.

There are 5 candidates in total, here’s the full list of candidates (in order of self-nomination with links to platforms as available):


Planet DebianLaura Arjona Reina: A weekend for the Debian website and friends

Last weekend (15-17 March 2019) some members of the Debian web team have met at my place in Madrid to advance work together in what we call a Debian Sprint. A report will be published in the following days, but I want to say thank you to everybody that made possible this meeting happen.

We have shared 2-3 very nice days, we have agreed on many topics, and we started to design a new homepage focused on newcomers (since a Debianite usually just goes to the subpage they need) and showing that Debian, the operating system, is made by a community of people. We are committed to simplifying the content and structure of www.debian.org; we have fixed some bugs already, and talked about many other aspects. We shared some time to get to know each other, and I think all of us became motivated to continue working on the website (because there is still a lot of work to do!) and to make it easy for new contributors to get involved too.

web_team_sprint_2019

For me, a person who rarely finds a way to attend Debian events, it was amazing to meet in person people with whom I have shared work, mails and IRC chats, in some cases for years, and to offer my place for the meeting. I enjoyed preparing all the logistics a lot and I’m very happy that it went very well. Now I walk through my neighbourhood on my daily commute and every place is full of small memories of these days. Thanks, friends!

Cory DoctorowLos Angeles! I’m launching my new book Radicalized with Lexi Alexander tonight (next: San Diego, NYC, Toronto…)

Tonight is the launch for my latest book of science fiction for adults, Radicalized: I’ll be at the Barnes & Noble at The Grove in Los Angeles, in conversation with director/activist/stuntwoman/champion kickboxer Lexi Alexander, starting at 7PM.

From there, the tour takes me to San Diego tomorrow (Mysterious Galaxy, 7:30 PM); then NYC on Wednesday (The Strand, 7PM, with Anand Giridharadas); then Toronto on Thursday (Metro Reference Library, 7PM, with Barry Hertz), and then I’m in Chicago, San Francisco, Fort Vancouver, WA (Portland, essentially!), and Seattle.

I hope to see you!

TEDEducation Everywhere: A night of talks about the future of learning, in partnership with TED-Ed

TED-Ed’s Stephanie Lo (left) and TED’s own Cloe Shasha co-host the salon Education Everywhere, on January 24, 2019, at the TED World Theater in New York City. (Photo: Dian Lofton / TED)

The event: TED Salon: Education Everywhere, curated by Cloe Shasha, TED’s director of speaker development; Stephanie Lo, director of programs for TED-Ed; and Logan Smalley, director of TED-Ed

The partner: Bezos Family Foundation and ENDLESS

When and where: Thursday, January 24, 2019, at the TED World Theater in New York City

Music: Nora Brown fingerpicking the banjo

The big idea: We’re relying on educators to teach more skills than ever before — for a future we can’t quite predict.

Awesome animations: Courtesy of TED-Ed, whose videos are watched by more than two million learners around the world every day

New idea (to us anyway): Poverty is associated with a smaller cortical surface of the brain.

Good to be reminded: Education doesn’t just happen in the classroom. It happens online, in our businesses, our social systems and beyond.

Little Nora Brown, who picked up the ukulele at age six, brings her old-time banjo sound to the TED stage. (Photo: Dian Lofton / TED)


The talks in brief:

Kimberly Noble, a neuroscientist and director of the Neurocognition, Early Experience and Development Lab at Columbia University

  • Big idea: We’ve learned that poverty has a measurable effect on the cortical surface of the brain, an area associated with intelligence. What could we do about that?
  • How: Experience can change children’s brains, and the brain is very sensitive to experience in early childhood. Noble’s lab wants to know if giving impoverished families more money might change brain function in their preschool kids.
  • Quote of the talk: “The brain is not destiny, and if a child’s brain can be changed, then anything is possible.”

Olympia Della Flora, associate superintendent for school development for Stamford Public Schools in Connecticut, and the former principal at Ohio Avenue Elementary School in Columbus, Ohio

  • Big idea: Healthy emotional hygiene means higher academic scores and happier kids.
  • How: With help from local colleges and experts, the teachers at Ohio Avenue Elementary learned new ways to improve kids’ behavior (which in turn helped with learning). Rather than just reacting to kids when they acted out, teachers built healthy coping strategies into the day — simple things like stopping for brain breaks, singing songs and even doing yoga poses — to help kids navigate their emotions in and out of the classroom.
  • Quote of the talk: “Small changes make huge differences, and it’s possible to start right now. You don’t need bigger budgets or grand, strategic plans. You simply need smarter ways to think about using what you have, where you have it.”

Marcos Silva, a TED-Ed Innovative Educator and public school teacher in McAllen, Texas; and Ana Rodriguez, a student who commutes three hours every day to school from Mexico

  • Big idea: Understanding what’s going on with students outside of school is important, too.
  • How: Silva grew up bringing the things he learned at school about American culture and the English language back to his parents, who were immigrants from Mexico. Now a teacher, he’s helping students like Ana Rodriguez to explore their culture, community and identity.
  • Quote of the talk: “Good grades are important, but it’s also important to feel confident and empowered.”

Joel Levin, a technology teacher and the cofounder of MinecraftEdu

  • Big idea: Educators can use games to teach any subject — and actually get kids excited to be in school.
  • How: Levin is a big fan of Minecraft, the game that lets players build digital worlds out of blocks with near-endless variety. In the classroom, Minecraft and similar games can be used to spark creativity, celebrate ingenuity and get kids to debate complex topics like government, poverty and power.
  • Quote of the talk: “One of my daughters even learned to spell because she wanted to communicate within the game. She spelled ‘home.'”

Jarrell E. Daniels offers a new vision for the criminal justice system centered on education and growth. (Photo: Dian Lofton / TED)

Jarrell E. Daniels, criminal justice activist and Columbia University Justice-In-Education Scholar

  • Big idea: Collaborative education can help us create more justice.
  • How: A few weeks before his release from state prison, Daniels took a unique course called Inside Criminal Justice, where he learned in a classroom alongside prosecutors and police officers, people he couldn’t imagine having anything in common with. In class, Daniels connected with and told his story to those in power — and has since found a way to make an impact on the criminal justice system through the power of conversation.
  • Quote of the talk: “It is through education that we will arrive at a truth that is inclusive and unites us all in a pursuit of justice.”

Liz Kleinrock, third-grade teacher and diversity coordinator at a charter school in Los Angeles

  • Big idea: It’s not easy to talk with kids about taboo subjects like race and equity, but having these conversations early prevents bigger problems in the future.
  • How: Like teaching students to read, speaking about tough topics requires breaking down concepts and words until they make sense. It doesn’t mean starting with an incredibly complex idea, like why mass incarceration exists — it means starting with the basics, like what’s fair and what isn’t. It requires practice, doing it every day until it’s easier to do.
  • Quote of the talk: “Teaching kids about equity is not about teaching them what to think. It’s about giving them the tools, strategies and opportunities to practice how to think.”

CryptogramCAs Reissue Over One Million Weak Certificates

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but that one bit change means that the serial numbers only have half the required entropy. This really isn't a security problem; the serial numbers are to protect against attacks that involve weak hash functions, and we don't allow those weak hash functions anymore. Still, it's a good thing that the CAs are reissuing the certificates. The point of a standard is that it's to be followed.

Worse Than FailurePortage and Portability

ST 225 20MB drive and WDC controller

Many moons ago, when PCs came housed within heavy cases of metal and plastic, Matt Q. and his colleague were assigned to evaluate a software package for an upcoming sales venture. Unfortunately, he and the colleague worked in different offices within the same metro area. As this was an age bereft of effective online collaboration tools, Matt had to travel regularly to the other office, carrying his PC with him. Each time, that meant unscrewing and unhooking the customary 473 peripheral cables from the back of the box, schlepping it through the halls and down the stairs, and catching the bus to reach the other office, where he got to do all those things again in reverse order. When poor scheduling forced the pair to work on the weekend, they hauled their work boxes between apartments as well.

As their work proceeded, Matt reached the limits of what his 20 MB hard drive could offer. From his home office, Matt filed a support ticket with IT. The technician assigned to his ticket—Gary—arrived at Matt's cubicle some time later, brandishing a new hard drive and a screwdriver. Gary shooed Matt away for more coffee to better focus on his patient. One minor surgery later, Matt's PC was back up and running with a bigger hard drive.

One day ahead of the project deadline, Matt was nearly done with his share of the work. He just had a few tweaks to make to his reports before copying them to the floppy disks needed by the sales team. Having hooked his PC back up within his cubicle, he switched it on—only to be greeted with a literal bang. The PC was dead and would not start.

After a panicked call to IT, Gary eventually reappeared at his desk with a screwdriver. Upon cracking open the PC case, he immediately cried, "Wait a minute! Have you been carting this PC around?"

Matt frowned. "Er, yes. Is that a problem?"

"I'll say! You weren't supposed to do that!" Gary scolded. "The hard drive's come loose and shorted out the workings!"

Matt darted over to Gary's side so he could see the computer's innards for himself. It didn't take long at all to notice that the new hard drive had been "secured" into place using Scotch tape.

"Hang on! I daresay you weren't supposed to do that!" Matt pointed to the offending tape. "Shall I check with your manager to be on the safe side?"

Gary's face crumpled. "I don't have access to the proper mountings!"

"Then find someone who does!"

Armed with his looming deadline and boss' approval, Matt escalated his support ticket even higher. It didn't take long at all for genuine mounting brackets to replace the tape. He never learned why IT techs were being deprived of necessary hardware; he assumed it was some fool's idea of a brilliant cost-cutting measure. He had to wonder how many desperate improvisations held their IT infrastructure together, and how much longer they would've gone unnoticed if it hadn't been for his PC-toting ways.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.


Krebs on SecurityWhy Phone Numbers Stink As Identity Proof

Phone numbers stink for security and authentication. They stink because most of us have so much invested in these digits that they’ve become de facto identities. At the same time, when you lose control over a phone number — maybe it’s hijacked by fraudsters, you got separated or divorced, or you were way late on your phone bill payments — whoever inherits that number can then be you in a lot of places online.

How exactly did we get to the point where a single, semi-public and occasionally transient data point like a phone number can unlock access to such a large part of our online experience? KrebsOnSecurity spoke about this at length with Allison Nixon, director of security research at New York City-based cyber intelligence firm Flashpoint.

Nixon said much of her perspective on mobile identity is colored by the lens of her work, which has her identifying some of the biggest criminals involved in hijacking phone numbers via SIM swapping attacks. Illegal SIM swaps allow fraudsters to hijack a target’s phone number and use it to steal financial data, passwords, cryptocurrencies and other items of value from victims.

Nixon said countless companies have essentially built their customer authentication around the phone number, and that a great many sites still let users reset their passwords with nothing more than a one-time code texted to a phone number on the account. In this attack, the fraudster doesn’t need to know the victim’s password to hijack the account: He just needs to have access to the target’s mobile phone number.

“As a consumer, I’m forced to use my phone number as an identity document, because sometimes that’s the only way to do business with a site online,” Nixon said. “But from that site’s side, when they see a password reset come in via that phone number, they have no way to know if that’s me. And there’s nothing anyone can do to stop it except to stop using phone numbers as identity documents.”

Beyond SIM-swapping attacks, there are a number of ways that phone numbers can get transferred to new owners, Nixon said. The biggest reason is lack of payment for past phone bills. But maybe someone goes through a nasty divorce or separation, and can no longer access their phone or phone accounts. The account is sent to collections and closed, and the phone number gets released back into the general pool for reassignment after a period of time.

Many major providers still let people reset their passwords with just a text message. Last week I went to regain access to a Yahoo account I hadn’t used in almost five years. Yahoo’s forgot password feature let me enter a phone number, and after entering a code sent to my phone I was able to read my email.

So, if that Yahoo account is tied to a mobile number that you can receive text messages at, then you can assume control over the account. And every other account associated with that Yahoo account. Even if that phone number no longer belongs to the person who originally established the email account.

This is exactly what happened recently to a reader who shared this account:

A while ago I bought a new phone number. I went on Yahoo! mail and typed in the phone number in the login. It asked me if I wanted to receive an SMS to gain access. I said yes, and it sent me a verification key or access code via SMS. I typed the code I received. I was surprised that I didn’t access my own email, but the email I accessed was actually the email of the previous owner of my new number.

Yahoo! didn’t even ask me to type the email address, or the first and last name. It simply sent me the SMS, I typed the code I received, and without asking me to type an email or first and last name, it gave me access to the email of my number’s PREVIOUS OWNER. Didn’t ask for credentials or email address. This seriously needs to be revised. At minimum Yahoo! should ask me to type the email address or the first and last name before sending me an SMS which contains an access code.

Brian Krebs (BK): You have your own experiences like this. Or sort of. You tell.

Allison Nixon (AN): Any threat intelligence company will have some kind of business function that requires purchasing burner phones fairly frequently, which involves getting new phone numbers. When you get new numbers, they are recycled from previous owners because there probably aren’t any new ones anymore. I get a lot of various text messages for password resets. One I kept getting was texts from this guy’s bank. Every time he got a deposit, I would get a text saying how much was deposited and some basic information about the account.

I approached the bank because I was concerned that maybe this random person would be endangered by the security research we were going to be doing with this new number. I asked them to take him off the number, but they said there wasn’t anything they could do about it.

One time I accidentally hijacked a random person’s account. I was trying to get my own account back at an online service provider, and I put a burner phone number into the site, went through the SMS password reset process, got the link and it said ‘Welcome Back’ to some username I didn’t know. Then I clicked okay and was suddenly reading the private messages of the account.

I realized I’d hijacked the account of the previous owner of the phone. It was unintentional, but also very clear that there was no technical reason I couldn’t hijack even more accounts associated with this number. This is a problem affecting a ton of service providers. This could have happened at many, many other web sites.

BK: We weren’t always so tied to our phone numbers, right? What happened?

AN: The whole concept of a phone number goes back over a hundred years. The operator would punch in a number you know was associated with your friend and you could call that person and talk to them. Back then, a phone wasn’t tied to any one person’s identity, and possession of that phone number never proved that person’s identity.

But these days, phone numbers are tied to peoples’ identities, even though we’re recycling them and this recycling is a fundamental part of how the phone system works. Despite the fact that phone number recycling has always existed, we still have all these Internet companies who’ve decided they’re going to accept the phone number as an identity document and that’s terrible.

BK: How does the phone number compare to more traditional, physical identity documents?

AN: Take the traditional concept of identity documents — where you have to physically show up and present ID at some type of business or office, and then from there they would look up your account and you can conduct a transaction. Online, it’s totally different and you can’t physically show your ID and can’t show your face.

In the Internet ecosystem, there are different companies and services that sell things online who have settled on various factors that are considered a good enough proxy for an identity document. You supply a username, password, and sometimes you provide your email address or phone number. Often times when you set up your account you have some kind of agreed-upon way of proofing that over time. Based on that pre-established protocol, the user can log in and do transactions.

It’s not a good system and the way the whole thing works just enables fraud. When you’re bottlenecked into physically showing up in a place, there’s only so much fraud you can do. A lot of attacks against phone companies are not attacking the inherent value of a phone number, but its use as an identity document.

BK: You said phone number recycling is a fundamental part of how the phone system works. Talk more about that, how common that is.

AN: You could be divorced, or thrown into sudden poverty after losing a job. But that number can be given away, and if it goes to someone else you don’t get it back. There are all kinds of life situations where a phone number is not a good identifier.

Maybe part of the reason the whole phone number recycling issue doesn’t get much attention is people who can’t pay their bills probably don’t have a lot of money to steal anyways, but it’s pretty terrible that this situation can be abused to kick people when they’re down. I don’t think a lot of money can be stolen in this way, but I do think the fact that this happens really can undermine the entire system.

BK: It seems to me that it would be a good thing if more online merchants made it easier to log in to their sites without using passwords, but instead with an app that just asks hey was that you just now trying to log in? Yes? Okay. Boom, you’re logged in. Seems like this kind of “push” login can leverage the user’s smart phone while not relying on the number — or passwords, for that matter.

If phone numbers are bad, what should we look to as more reliable and resilient identifiers?

AN: That’s something I’ve been thinking a lot about lately. It seems like all of the other options are either bad or really controversial. On the one hand, I want my bank to know who I am, and I want to expose my email and phone number to them so they can verify it’s me and know how to get in touch with me if needed. But if I’m setting up an email account, I don’t want to have to give them all of my information. I’m not attached to any one alternative idea, I just don’t like what we’re doing now.

For more on what you can do to reduce your dependence on mobile phone numbers, check out the “What Can You Do?” section of Hanging Up on Mobile in the Name of Security.

Update, March 18, 1:25 p.m. ET: On March 14, Google published instructions describing how to disable SMS or voice in 2-step verification on G Suite accounts.

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.8: Small updates

A new version 0.4.8 of RQuantLib reached CRAN and Debian. This release was triggered by a CRAN request for an update to the configure.ac script, which was easy enough (and which, as it happens, did not result in changes in the configure script produced). I also belatedly updated the internals of RQuantLib to follow suit with an upstream change in QuantLib. We now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.
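
The mechanism behind that seamless switch is essentially a namespace alias that resolves to either the Boost or the C++11 type. A simplified illustration of the idea in C++ (this is not the actual QuantLib header, and the preprocessor switch shown here is hypothetical; QuantLib uses its own configure-time macro):

// One namespace, two possible backends; client code such as RQuantLib only
// ever spells ext::shared_ptr<T> and compiles unchanged either way.
#if defined(USE_STD_SHARED_PTR)
  #include <memory>
  namespace ext {
      using std::shared_ptr;
      using std::make_shared;
  }
#else
  #include <boost/shared_ptr.hpp>
  #include <boost/make_shared.hpp>
  namespace ext {
      using boost::shared_ptr;
      using boost::make_shared;
  }
#endif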

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

In other news, we finally have a macOS binary package on CRAN. After several rather frustrating months of inaction on the pull request put together to enable this, it finally happened last week. Yay. So CRAN currently has an 0.4.7 macOS binary and should get one based on this release shortly. With Windows restored with the 0.4.7 release, we are in the best shape we have been in years. Yay and three cheers for Open Source and open collaboration models!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.8 (2019-03-17)

  • Changes in RQuantLib code:

    • Source code supports Boost shared_ptr and C++11 shared_ptr via the QuantLib::ext namespace, like upstream.
  • Changes in RQuantLib build system:

    • The configure.ac file no longer upsets R CMD check; the change does not actually change configure.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: Rcpp 1.0.1: Updates

Following up on the 10th anniversary and the 1.0.0 release, we are excited to share the news of the first update release, 1.0.1, of Rcpp. It arrived at CRAN overnight; Windows binaries have already been built, and I will follow up shortly with the Debian binary.

We had four years of regular bi-monthly releases leading up to 1.0.0, and we have now taken four months since that big 1.0.0 release. Maybe three (or even just two) releases a year will establish itself as a natural cadence. Time will tell.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1598 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 152 in BioConductor release 3.8. Per the (partial) logs of CRAN downloads, we currently average 921,000 downloads a month.

This release features a number of different pull requests, detailed below.

Changes in Rcpp version 1.0.1 (2019-03-17)

  • Changes in Rcpp API:

    • Subsetting is no longer limited by an integer range (William Nolan in #920 fixing #919).

    • Error messages from subsetting are now more informative (Qiang and Dirk).

    • Shelter increases count only on non-null objects (Dirk in #940 as suggested by Stepan Sindelar in #935).

    • AttributeProxy::set() and a few related setters get Shield<> to ensure rchk is happy (Romain in #947 fixing #946).

  • Changes in Rcpp Attributes:

    • A new plugin was added for C++20 (Dirk in #927)

    • Fixed an issue where 'stale' symbols could become registered in RcppExports.cpp, leading to linker errors and other related issues (Kevin in #939 fixing #733 and #934).

    • The wrapper macro gets an UNPROTECT to ensure rchk is happy (Romain in #949 fixing #948).

  • Changes in Rcpp Documentation:

    • Three small corrections were added in the 'Rcpp Quickref' vignette (Zhuoer Dong in #933 fixing #932).

    • The Rcpp-modules vignette now has documentation for .factory (Ralf Stubner in #938 fixing #937).

  • Changes in Rcpp Deployment:

    • Travis CI again reports to CodeCov.io (Dirk and Ralf Stubner in #942 fixing #941).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianAndy Simpkins: Race for science: A Prostate Cancer Research Centre charity challenge

This post may feel like an advert – to an extent it is.  I won’t plug events very often; however, this is a charity event in aid of prostate cancer research, and I genuinely think it will be great fun for anybody able to take part, hence the unashamed plug.  The Race for Science will happen on 30th March in and around Cambridge. There is still time to enter, have fun and raise some money for a deserving charity at the same time.

A few weeks ago friends of ours, Jo (Randombird) and Josh, were celebrating their birthdays, so we split into a couple of groups and had a go at a couple of escape room challenges at Lock House Games.  Four of us, Josh, Isy, Jane and myself, had a go at the Egyptian Tomb.  Whilst this was the 2nd time Isy had tried an escape room, the rest of us were N00bs.  I won’t describe the puzzles inside because that would spoil the enjoyment of anyone who would later go on to play the game.  We did make it out of the room inside our allotted time with only a couple of hints.  It was great fun, and is suitable for all ages: our group ranged from 12 to 46, and this would work well for all-adult teams as well as more mature children.

We escaped the tomb…

Anyway having had a good time, and whilst we waited for other group to finish their challenge, I spotted a request for people to Beta test some puzzles for the Prostate Cancer Research Centre‘s 2019 Race For Science to happen a couple of days later.

I volunteered my time and re-arranged my Monday so that I could spend the afternoon trialling the escape room elements of the challenge.  Some great people were involved and the puzzles were very good indeed.  I visited 3 different locations in Cambridge, each with very different puzzles to solve.  One challenge stood head and shoulders above the others; not that the others were poor – they are great, at least as good as a commercial escape room.  The reason this challenge was so outstanding wasn’t the challenge itself, it was because it had been set specifically for the venue that will host it, using the venue as part of the puzzle (The Race For Science is a scavenger hunt and contains several challenges at different locations within the city).

Beta testing one of the puzzles

Last Thursday I went back into town in the afternoon to beta test yet another location – once again this was outstanding, and the challenge differed from those that I had trialled the previous week.  This was another fantastic puzzle to solve and took full advantage of its location, both for its ‘back story’ and for the puzzle itself.  After this we moved on to testing the scavenger hunt part of the event.  This is played on foot on the streets of the city; following clues will take you in and around some of the city’s museums and landmarks – and will unlock access to the escape room challenges I had been testing earlier.  My only concern is that it is played using a browser on a mobile device (i.e. a phone).  I had to move around a bit in some locations to ensure that I had adequate signal.  You may want to make sure that you have a fully charged battery!

The event is open to teams of up to 6 people and will take the form of an “immersive scavenger hunt adventure”.  Unfortunately I cannot take part as I have already played the game, but there is still time for you to register.  Anyway, if you are able to get to Cambridge at the end of the month, please enter the Race for Science.

Planet Linux AustraliaMichael Still: I, Robot


Not the book of the movie, but the collection of short stories by Isaac Asimov. I’ve read this book several times before and enjoyed it, although this time I found it to be more dated than I remembered, both in its characterisations of technology and in its handling of gender. Still enjoyable, but not the best book I’ve read recently.

I, Robot by Isaac Asimov (Fiction, Spectra, 2004, 224 pages): The development of robot technology to a state of perfection by future civilizations is explored in nine science fiction stories.


Planet DebianDirk Eddelbuettel: littler 0.3.7: Small tweaks

max-heap image

The eighth release of littler as a CRAN package is now available, following in the thirteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It has also always loaded the methods package, which Rscript only started doing rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!).

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release brings a small update (thanks to Gergely) to the scripts install2.r and installGithub.r to allow more flexible setting of repositories, and fixes a minor nag from CRAN concerning autoconf programming style.

The NEWS file entry is below.

Changes in littler version 0.3.6 (2019-01-26)

  • Changes in examples

    • The scripts installGithub.r and install2.r get a new option -r | --repos (Gergely Daroczi in #67)
  • Changes in build system

    • The AC_DEFINE macro use rewritten to please R CMD check.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianArnaud Rebillout: Building your Pelican website with schroot

Lately I moved my blog to Pelican. I really like how simple and flexible it is. So in this post I'd like to highlight one particular aspect of my Pelican's workflow: how to setup a Debian-based environment to build your Pelican's website, and how to leverage Pelican's Makefile to transparently use this build environment. Overall, this post has more to do with the Debian tooling, and little with Pelican.

Introduction

First thing first, why would you setup a build environment for your project?

Imagine that you run Debian stable on your machine, then you want to build your website with a fancy theme, that requires the latest bleeding edge features from Pelican. But hey, in Debian stable you don't have these shiny new things, and the version of Pelican you need is only available in Debian unstable. How do you handle that? Will you start messing around with apt configuration and pinning, and try to install an unstable package in your stable system? Wrong, please stop.

Another scenario, the opposite: you run Debian unstable on your system. You have all the new shiny things, but sometimes an update of your system might break things. What if you update, and then can't build your website because of this or that? Will you wait a few days until another update comes and fixes everything? How many days before you can build your blog again? Or will you dive in the issues, and debug, which is nice and fun, but can also keep you busy all night, and is not exactly what you wanted to do in the first place, right?

So, for both of these issues, there's one simple answer: setup a build environment for your project. The most simple way is to use a chroot, which is roughly another filesystem hierarchy that you create and install somewhere, and in which you will run your build process. A more elaborate build environment is a container, which brings more isolation from the host system, and many more features, but for something as simple as building your website on your own machine, it's kind of overkill.

So that's what I want to detail here, I'll show you the way to setup and use a chroot. There are many tools for the job, and for example Pelican's official documentation recommends virtualenv, which is kind of the standard Python solution for that. However, I'm not too much of a Python developer, and I'm more familiar with the Debian tools, so I'll show you the Debian way instead.

Version-wise, it's 2019, we're talking about Pelican 4.x, if ever it matters.

Create the chroot

To create a basic, minimal Debian system, the usual command is debootstrap. Then in order to actually use this new system, we'll use schroot. So be sure to have these two packages installed on your machine.

sudo apt install debootstrap schroot

It seems that the standard location for chroots is /srv/chroot, so let's create our chroot there. It also seems that the traditional naming scheme for these chroots is something like SUITE-ARCH-APPLICATION, at least that's what other tools like sbuild do. While you're free to do whatever you want, in this tutorial we'll try to stick to the conventions.

Let's go and create a buster chroot:

SYSROOT=/srv/chroot/buster-amd64-pelican
sudo mkdir -p ${SYSROOT:?}
sudo debootstrap --variant=minbase buster ${SYSROOT:?}

And there we are, we just installed a minimal Debian system in $SYSROOT, how easy and neat is that! Just run a quick ls there, and see by yourself:

ls ${SYSROOT:?}

Now let's setup schroot to be able to use it. schroot will require a bit of a configuration file that tells it how to use this chroot. This is where things might get a bit complicated and cryptic for the newcomer.

So for now, stick with me, and create the schroot config file as follow:

cat << EOF | sudo tee /etc/schroot/chroot.d/buster-amd64-pelican.conf
[buster-amd64-pelican]
users=$LOGNAME
root-users=$LOGNAME
source-users=$LOGNAME
source-root-users=$LOGNAME
type=directory
union-type=overlay
directory=/srv/chroot/buster-amd64-pelican
EOF

Here, we tell schroot who can use this chroot ($LOGNAME, it's you), as normal user and root user. We also say where is the chroot directory located, and that we want an overlay, which means that the chroot will actually be read-only, and during operation a writable overlay will be stacked up on top of it, so that modifications are possible, but are not saved when you exit the chroot.

In our case, it makes sense because we have no intention to modify the build environment. The basic idea with a build environment is that it's identical for every build, we don't want anything to change, we hate surprises. So we make it read-only, but we also need a writable overlay on top of it, in case some process might want to write things in /var, for example. We don't care about these changes, so we're fine discarding this data after each build, when we leave the chroot.

And now, for the last step, let's install Pelican in our chroot:

schroot -c source:buster-amd64-pelican -u root -- \
  bash -c "apt update && apt install --yes make pelican && apt clean"

In this command, we log into the source chroot as root, and we install the two packages make and pelican. We also clean up after ourselves, to save a bit of space on the disk.

At this point, our chroot is ready to be used. If you're new to all of this, then read the next section, I'll try to explain a bit more how it works.

A quick introduction to schroot

In this part, let me try to explain a bit more how schroot works. If you're already acquainted, you can skip this part.

So now that the chroot is ready, let's experiment a bit. For example, you might want to start by listing the chroots available:

$ schroot -l
chroot:buster-amd64-pelican
source:buster-amd64-pelican

Interestingly, there are two of them... So, this is due to the overlay thing that I mentioned just above. Using the regular chroot (chroot:) gives you the read-only version, for daily use, while the source chroot (source:) allows you to make persistent modifications to the filesystem, for install and maintenance basically. In effect, the source chroot has no overlay mounted on top of it, and is writable.

So you can experiment some more. For example, to have a shell into your regular chroot, run:

$ schroot -c chroot:buster-amd64-pelican

Notice that the namespace (eg. chroot: or source:) is optional, if you omit it, schroot will be smart and choose the right namespace. So the command above is equivalent to:

$ schroot -c buster-amd64-pelican

Let's try to see the overlay thing in action. For example, once inside the chroot, you could create a file in some writable place of the filesystem.

(chroot)$ touch /var/tmp/this-is-an-empty-file
(chroot)$ ls /var/tmp
this-is-an-empty-file

Then log out with <Ctrl-D>, and log in again. Have a look in /var/tmp: the file is gone. The overlay in action.

Now, there's a bit more to that. If you look into the current directory, you will see that you're not within any isolated environment, you can still see all your files, for example:

(chroot)$ pwd
/home/arno/my-pelican-blog
(chroot)$ ls
content  Makefile  pelicanconf.py  ...

Not only are all your files available in the chroot, you can also create new files, delete existing ones, and so on. It doesn't even matter if you're inside or outside the chroot, and the reason is simple: by default, schroot will mount the /home directory inside the chroot, so that you can access all your files transparently. For more details, just type mount inside the chroot, and see what's listed.

So, this default of schroot is actually what makes it super convenient to use, because you don't have to bother about bind-mounting every directory you care about inside the chroot, which is actually quite annoying. Having /home directly available saves time, because what you want to isolate are the tools you need for the job (so basically /usr), but what you need is the data you work with (which is supposedly in /home). And schroot gives you just that, out of the box, without having to fiddle too much with the configuration.

If you're not familiar with chroots, containers, VMs, or more generally bind mounts, maybe it's still very confusing. But you'd better get used to it, as virtual environments are very standard in software development nowadays.

But anyway, let's get back to the topic. How to make use of this chroot to build our Pelican website?

Chroot usage with Pelican

Pelican provides two helpers to build and manage your project: one is a Makefile, and the other is a Python script called fabfile.py. As I said before, I'm not really a seasoned Pythonista, but it happens that I'm quite a fan of make, hence I will focus on the Makefile for this part.

So, here's how your daily blogging workflow might look like, now that everything is in place.

Open a first terminal, and edit your blog posts with your favorite editor:

$ nano content/bla-bla-bla.md

Then open a second terminal, enter the chroot, build your blog and serve it:

$ schroot -c buster-amd64-pelican
(chroot)$ make html
(chroot)$ make serve

And finally, open your web browser at http://localhost:8000 and enjoy yourself.

This is easy and neat, but guess what, we can even do better. Open the Makefile and have a look at the very first lines:

PY?=python3
PELICAN?=pelican

It turns out that the Pelican developers know how to write Makefiles, and they were kind enough to allow their users to easily override the default commands (the ?= operator only assigns a value if the variable isn't already set, so these defaults can also be overridden from the environment). In our case, it means that we can just replace these two lines with the following:

PY?=schroot -c buster-amd64-pelican -- python3
PELICAN?=schroot -c buster-amd64-pelican -- pelican

And after these changes, we can now completely forget about the chroot, and simply type make html and make serve. The chroot invocation is now handled automatically in the Makefile. How neat!

Maintenance

So you might want to update your chroot from time to time, and you do that with apt, like for any Debian system. Remember the distinction between regular chroot and source chroot due to the overlay? If you want to actually modify your chroot, what you want is the source chroot. And here's the one-liner:

schroot -c source:$PROJECT -u root -- \
  bash -c "apt update && apt --yes dist-upgrade && apt clean"

If one day you stop using it, just delete the chroot directory, and the schroot configuration file:

sudo rm /etc/schroot/chroot.d/buster-amd64-pelican.conf
sudo rm -fr /srv/chroot/buster-amd64-pelican

And that's about it.

Last words

The general idea of keeping your build environments separated from your host environment is a very important one if you're a software developer, and especially if you're doing consulting and working on several projects at the same time. Installing all the build tools and dependencies directly on your system can work at the beginning, but it won't get you very far.

schroot is only one of the many tools that exist to address this, and I think it's Debian-specific. Maybe you have never heard of it, as chroots in general are far from the container hype, even though they share some use cases. schroot has been around for a while, it works great, and it's simple and flexible. What else could you ask for? Just give it a try!

It's also well integrated with other Debian tools, for example you might use it through sbuild to build Debian packages (another daily task that is better done in a dedicated build environment), so I think it's a tool worth knowing if you're doing some Debian work.

That's about it. In the end this was mostly a post about schroot; I hope you liked it.

,

CryptogramFriday Squid Blogging: A Squid-Related Vacation Tour in Hawaii

You can hunt for the Hawaiian bobtail squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cory DoctorowRadicalized is one of The Verge’s picks for March!

Well, this is awesome: Andrew Liptak picked my next book, Radicalized, as one of The Verge's picks for March! The tour starts Monday!

Cory Doctorow is best known for novels, but his new book is a little different: it’s made up of four novellas set in the near future — “Unauthorized Bread,” about a woman who goes to great lengths to jailbreak the appliances in her subsidized housing; “Model Minority,” in which a superhero tries to tackle police corruption; “Radicalized,” about a man who sparks a violent uprising against his insurance company; and “Masque of the Red Death,” which ties in with Doctorow’s novel Walkaway. Publishers Weekly says that each story’s “characters are well wrought and complex, and the worldbuilding is careful.”

You can read or listen to an excerpt from “Unauthorized Bread” (the film / TV rights of which have been picked up by Topic Studios).


Planet DebianRomain Perier: Hello planet !

Introducing myself


My name is Romain, and I have recently been granted the status of Debian Maintainer. I have been part of the debian-kernel team (still a padawan) for a few months and, as a DM, I will co-maintain the raspi3-firmware package with Gunnar Wolf.

Current contributions

As a kernel and Linux engineer, I focus on embedded systems and kernel development. This is a summary of what I have done in the previous months.

Kernel team

As a contributor, I work on various things and try to help where it is most needed. I wrote a Python script for generating the Debian changelog in firmware-nonfree, and I have bumped the package for new releases. I bump the Linux kernel for new upstream releases, help to close and resolve bugs, backport new features when it makes sense to do so, and enable new hardware. Recently I added a new flavour for the RPi 1 and RPi Zero in armel! (spoiler)

Raspi3-firmware

I have recently added a new mode in the package's configuration file that lets you decide what you would like the firmware to boot. You can either boot a Linux kernel directly, passing the address of the initramfs to use, a baremetal application, or a second-level bootloader like u-boot or barebox (personally I prefer u-boot). From u-boot you can then use extlinux and get a nice menu generated by u-boot-menu. I have also added support for using the devicetree blob of the RPi 1 and the RPi Zero W when the firmware boots the kernel directly. I am also helping to reduce lintian warnings, package new upstream releases, and make improvements in general.

U-Boot

I have recently sent an MR enabling support for the RPi Zero W in u-boot for armel, and it was accepted (thanks to Vagrant). As I use U-Boot every day on my boards, I will probably send other MRs ;)

Raspberry Pi Zero

As described above, I added a flavour enabling support for the RPi 1 and RPi Zero in armel for Linux 4.19.x. As with the Raspberry Pi 3, there are no official images for this, but you can use debos or vmdb2 to build a buster image for your Pi Zero. I have personally tried it at home and was able to run an LXDE session with llvmpipe (I am still investigating whether vc4 in Gallium works for this SoC or not; while it works perfectly fine on the Pi 3, it falls back to llvmpipe on the Pi Zero).

Raspberry Pi 3

As Gunnar posted on Planet recently, you can find an unofficial image for the Pi 3 if you want to try it. On buster you will be able to run a 4.19.x LTS kernel with excellent DRM/KMS support and Gallium support in Mesa. I was able to run an LXDE session with VC4 Gallium here!

Future work


I will try my best to get excellent support for all Raspberry Pis in Debian (with unofficial images at the beginning), including kernel support, kernel bug fixes and improvements, debos and/or vmdb2 recipes for generating buster images easily, and even graphics stack hacks :) . I will continue my work in the kernel team, because there are tons of things to do, and of course, as co-maintainer, maintain raspi3-firmware (which will probably be renamed to something more generic, *spoiler*).

CryptogramI Was Cited in a Court Decision

An article I co-wrote -- my first law journal article -- was cited by the Massachusetts Supreme Judicial Court -- the state supreme court -- in a case on compelled decryption.

Here's the first, in footnote 1:

We understand the word "password" to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or "passcode." Each term refers to the personalized combination of letters or digits that, when manually entered by the user, "unlocks" a cell phone. For simplicity, we use "password" throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here's the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password "unlocks" a cell phone, the password itself is not the "encryption key" that decrypts the cell phone's contents. See Kerr & Schneier, supra at 995. Rather, "entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user." Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

I'm teaching a live online class called "Spotlight on Cloud: The Future of Internet Security with Bruce Schneier" on O'Reilly's learning platform, Thursday, April 4, at 10:00 AM PT/1:00 PM ET.

The list is maintained on this page.

Planet DebianJohn Goerzen: The Rightward, Establishment Bias of Lazy Journalism

Note: I also posted this post on medium.

I remember clearly the moment I’d had enough of NPR for the day. It was early morning January 25 of this year, still pretty dark outside. An NPR anchor was interviewing an NPR reporter — they seem to do that a lot these days — and asked the following simple but important question:

“So if we know that Roger Stone was in communications with WikiLeaks and we know U.S. intelligence agencies have said WikiLeaks was operating at the behest of Russia, does that mean that Roger Stone has been now connected directly to Russia’s efforts to interfere in the U.S. election?”

The factual answer, based on both data and logic, would have been “yes”. NPR, in fact, had spent much airtime covering this; for instance, a June 2018 story goes into detail about Stone’s interactions with WikiLeaks, and less than a week before Stone’s arrest, NPR referred to “internal emails stolen by Russian hackers and posted to Wikileaks.” In November of 2018, The Atlantic wrote, “Russia used WikiLeaks as a conduit — witting or unwitting — and WikiLeaks, in turn, appears to have been in touch with Trump allies.”

Why, then, did the NPR reporter begin her answer with “well,” proceed to hedge, repeat denials from Stone and WikiLeaks, and then wind up saying “authorities seem to have some evidence” without directly answering the question? And what does this mean for bias in the media?


Let us begin with a simple principle: facts do not have a political bias. Telling me that “the sky is blue” no more reflects a Democratic bias than saying “3+3=6” reflects a Republican bias. In an ideal world, politics would shape themselves around facts; ideas most in agreement with the data would win. There are not two equally-legitimate sides to questions of fact. There is no credible argument against “the earth is round”, “climate change is real,” or “Donald Trump is an unindicted co-conspirator in crimes for which jail sentences have been given.” These are factual, not political, statements. If you feel, as I do, a bit of a quickening pulse and escalating tension as you read through these examples, then you too have felt the forces that wish you to be uncomfortable with unarguable reality.

That we perceive some factual questions as political is a sign of a deep dysfunction in our society. It’s a sign that our policies are not always guided by fact, but that a sustained effort exists to cause our facts to be guided by policy.

Facts do not have a political bias. There are not two equally-legitimate sides to questions of fact. “Climate change is real” is a factual, not a political, statement. Our policies are not always guided by fact; a sustained effort exists to cause our facts to be guided by policy.

Why did I say right-wing bias, then? Because at this particular moment in time, it is the political right that is more engaged in the effort to shape facts to policy. Whether it is denying the obvious lies of the President, the clear consensus on climate change, or the contours of various investigations, it is clear that they desire to confuse and mislead in order to shape facts to their whim.


It’s not always so consequential when the media gets it wrong. When CNN breathlessly highlights its developing story — that an airplane “will struggle to maintain altitude once the fuel tanks are empty” —it gives us room to critique the utility of 24/7 media, but not necessarily a political angle.

But ask yourself: who benefits when the media is afraid to report a simple fact about an investigation with political connotations? The obvious answer, in the NPR example I gave, is that Republicans benefit. They want the President to appear innocent, so every hedge on known facts about illegal activities of those in Trump’s orbit is a gift to the right. Every time a reporter gives equal time to climate change deniers is a gift to the right and a blow to informed discussion in a democracy.

Not only is there a rightward bias, but there is also an establishment bias that goes hand-in-hand. Consider this CNN report about Facebook's "pivot to privacy", in which CEO Zuckerberg is credited with "changing his tune somewhat". To the extent that the article highlights "problems" with this, it takes Zuckerberg at face value and starts to wonder if it will be harder to clamp down on fake news in the news feed if there's more privacy. That is a total misunderstanding of what was being proposed; a more careful reading of the situation was done by numerous outlets, resulting in headlines such as this one in The Intercept: "Mark Zuckerberg Is Trying to Play You — Again." They correctly point out that the only change actually mentioned pertained to instant messages, not to the news feed that CNN was talking about, and even that had a vague promise to happen "over the next few years." Who benefited from CNN's failure to read a press release closely? The established powers — Facebook.


Pay attention to the media and you'll notice that journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots. Much has been written about the "media narrative," often critical, with good reason. Back in November of 2018, an excellent article on "The Unbearable Rightness of Seth Abramson" covered one particular case in delightful detail.

Journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots.

Seth Abramson himself wrote, “Trump-Russia is too complex to report. We need a new kind of journalism.” He argues the culprit is not laziness, but rather that “archive of prior relevant reporting that any reporter could review before they publish their own research is now so large and far-flung that more and more articles are frustratingly incomplete or even accidentally erroneous than was the case when there were fewer media outlets, a smaller and more readily navigable archive of past reporting for reporters to sift through, and a less internationalized media landscape.” Whether laziness or not, the effect is the same: a failure to properly contextualize facts leading to misrepresented or outright wrong outcomes that, at present, have a distinct bias towards right-wing and establishment interests.


Yes, the many scandals in Trumpland are extraordinarily complex, and in this age of shrinking newsroom budgets, it’s no wonder that reporters have trouble keeping up. Highly-paid executives like Zuckerberg and politicians in Congress have years of practice with obfuscation, and it takes skill to find the truth (if there even is any) behind a corporate press release or political talking point. One would hope, though, that reporters would be less quick to opine if they lack those skills or the necessary time to dig in.

There’s not just laziness; there’s also, no doubt, a confusion about what it means to be a balanced journalist. It is clear that there are two sides to a debate over, say, whether to give a state’s lottery money to the elementary schools or the universities. When there is the appearance of a political debate over facts, shouldn’t that also receive equal time for each side? I argue no. In fact, politicians making claims that contradict establish fact should be exposed by journalists, not covered by them.

And some of it is, no doubt, fear. Fear that if they come out and say “yes, this implicates Stone with Russian hacking” that the Fox News crowd will attack them as biased. Of course this will happen, but that attack will be wrong. The right has done an excellent job of convincing both reporters and the public that there’s a big left-leaning bias that needs to be corrected, by yelling about it every time a fact is mentioned that they don’t like. The unfortunate result is that the fact-leaning bias in the media is being whittled away.

Politicians making claims that contradict established fact should be exposed by journalists, not covered by them. The fact-leaning bias in the media is being whittled away.

Regardless of the cause, media organizations and their reporters need to be cognizant of the biases actors of all stripes wish them to display, and refuse to play along. They need to be cognizant of the demands they put on their own reporters, and give them space to understand the context of a story before explaining it. They need to stand up to those that try to diminish facts, to those that would like them to be uninformed.

A world in which reporters know the context of their stories and boldly state facts as facts, come what may, is a world in which reporters strengthen the earth’s democracies. And, by extension, its people.

CryptogramCritical Flaw in Swiss Internet Voting System

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted -- and that this system in particular should never be deployed, even if the found flaw is fixed -- but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post's embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world's top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn't work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who's ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post's position is that since the bug only allows elections to be stolen by Swiss Post employees, it's not a big deal, because Swiss Post employees wouldn't steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on "zero knowledge" proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn't be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, "Well, what is the big deal? If you don't trust the people administering an election, you can't trust the election's outcome, right?" Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

Read the whole thing. It's excellent.

More info.

Planet DebianRitesh Raj Sarraf: Linux Desktop Usage 2019

If I look back now, it must be more than 20 years since I got fascinated with the GNU/Linux ecosystem and started using it.

Back then, it was more the curiosity of a young teenager and the excitement of learning something. There's one thing that I have always admired and respected about Free Software's values: access for everyone to learn. This is something I never forget, and I still try to do my bit.

It was perfect timing and I was lucky to be part of it. Free Software was (and still is) a great platform to learn upon, if you have the willingness and desire for it.

Over the years, a whole lot has changed, evolved and improved. Going from the days of writing the XF86Config configuration file by hand to get the X server running, to a new world where almost everything is now dynamic, is a great milestone that we have achieved.

All through these years, I have always used the GNU/Linux platform as my primary computing platform. The CLI, shell and tools have all been a great source of learning. Most of the stuff was (and to an extent, still is) standardized, and the focus was usually on a single project.

There was less competition on that front; rather, there was more collaboration. For example, standard tools like sed, awk and grep were single tools; you didn't have two variants of each. So enhancements to these tools were timely and consistent, and learning them was an incremental task.

On the other hand, on the desktop side of things, projects started out doing things their own way, and stayed that way for a very long time. But eventually, quite a lot of those things have standardized, thankfully.

For the larger part of my desktop usage, I have mostly been a KDE user. I have used other environments like IceWM and Enlightenment briefly, but always felt the need to fall back to KDE, as it provided a full and uniform solution. For quite some time, I was more of a user preferring to use only the K* tools; if it wasn't written with kdelibs, I'd try to avoid it. But in the last 5 years, I took a detour and tried to unlearn and re-learn the other major desktop environment, GNOME.

GNOME is an equally beautiful and elegant desktop environment with a minimalistic user interface (though that minimalism often ends up plaguing its applications' feature sets too, making them "minimalistic feature set" applications). I realized that quite a lot of time and money is invested into the GNOME project, especially by the leading Linux distribution vendors.

But the fact is that GNU/Linux is still not a major player on the Desktop market. Some believe that the Desktop Market itself has faded and been replaced by the Mobile market. I think Desktop Computing still has a critical place in the near foreseeable future and the Mobile Platform is more of an extension shell to it. For example, for quickies, the Mobile platform is perfect. But for a substantial amount of work to be done, we still fallback to using our workstations. Mobile platform is good for a quick chat or email, but if you need to write a review report or a blog post or prepare a presentation or update an excel sheet, you’d still prefer to use your workstation.

So, after using the GNOME platform for a couple of years, I realized that there's a lot of work and thought put into this platform too, just like the KDE platform. But to really be able to dream about the "year of the dominance of the GNU/Linux desktop platform", all these projects need to work together and synergise their efforts.

Pain points:

  • Multiple tools, multiple efforts wasted. Could be synergised.
  • Data accessibility
  • Integration and uniformity

Multiple tools

Kmail used to be an awesome email client. Evolution today is an awesome email client. Thunderbird was an awesome email client which, from what I last remember, Mozilla lacked the funds to continue maintaining. And then there's the never-ending stream of new/old projects that come and go. Thankfully, email is pretty standardized in its data format; otherwise, it would be a nightmare to switch between these clients. But still, GNU/Linux platforms have the potential to provide a strong and viable offering if they could synergise their work together. Today, a lot of resources are just wasted and nobody wins. Definitely not the GNU/Linux platform. The ones who win are GMail, Hotmail etc.

If you even look at the browser side of things, Google realized the potential of the Web platform for its business. So they do have a Web client for GNU/Linux. But you’ll never see an equivalent for Email/PIM. Not because it is obsolete. But more because it would hurt their business instead.

Data accessibility

My biggest gripe is data accessibility. Thankfully, for most of the stuff that we rely upon (email, documents etc.), things are standardized. But there still are annoyances. For example, when the KDE 4.x debacle occurred, kwallet could not export its password database to the newer one. When I moved to GNOME, I had another very very very hard time extracting passwords from kwallet and feeding them to SeaHorse. Then, when I recently switched back to KDE, I had a similar struggle exporting my data back out of SeaHorse (no, not back to KWallet). Over the years, I have realized that critical data should be kept in its simplest format, and the front-ends can do all the bling they want on top. I realized this even more with email. Maildir is a good clean format to store my email in, irrespective of how I access it. Whether it is dovecot, Evolution, Akonadi, Kmail etc., I still have my bare data intact.

I had burnt myself on the password front quite a bit, so on this migration back to KDE, I wanted an email-like solution. And there is pass, a password store, which fits the bill just like in the email use case. It would make a lot more sense for all desktop password managers to instead just be a frontend interface to pass and let it keep the crucial data in a bare minimal format, accessible at all times, irrespective of the overhauling that the desktop projects tend to do every couple of years or so.

Data is critical. Without retaining its compatibility (both backwards and forwards), you cannot win any battle.

I honestly feel the Linux Desktop Architects from the different projects should sit together and agree on a set of interfaces/tools (yes yes there is fd.o) and stick to it. Too much time and energy is wasted otherwise.

Integration and Uniformity

This is something I have always desired, and I was quite impressed (and delighted) to see some progress on the KDE desktop in the UI department. On GNOME, I developed a liking for the Evolution email client. In fact, it is currently my client for email, NNTP and other PIM. And I still get to use it nicely in a KDE environment. Thank you. Evolution KDE/GNOME Integration

Sociological ImagesGender, Confidence, and Who Gets to Be an Expert

On January 31, The New York Times responded to a letter from Kimberly Probolus, an American Studies PhD candidate, with a commitment to publish gender parity in their letters to the editor (on a weekly basis) in 2019. This policy comes in the wake of many efforts to change the overwhelming overrepresentation of men in the position of “expert” in the media, from the Op-Ed project to womenalsoknowstuff.com (now with a sociology spinoff!) to #citeblackwomen.

The classic sociology article “Doing Gender,” explains that we repeatedly accomplish gender through consistent, patterned interactions. According to the popular press and imagination — such as Rebecca Solnit’s essay, Men Explain Things to Me — one of these patterns includes men stepping into the role of expert. Within the social sciences, there is research on how gender as a performance can explain gender disparities in knowledge-producing spaces.

Women are less likely to volunteer expertise in a variety of spaces, and researchers often explain this finding as a result of self-esteem or confidence. Julia Bear and Benjamin Collier find that, in 2008 for example, only 13% of contributors to Wikipedia were women. Two reasons cited for this gender disparity were a lack of confidence in their expertise and a discomfort with editing (which involves conflict). Likewise, studies of classroom participation have consistently found that men are more likely than women to talk in class — an unsurprising finding considering that classroom participation studies show that students with higher confidence are more likely to participate. Within academia, research shows that men are much more likely to cite themselves as experts within their own work.

This behavior may continue because both men and women are sanctioned for behavior that falls outside of gender performances. In research on salary negotiation, researchers found that women can face a backlash when they ask for raises because self-promotion goes against female gender norms. Men, on the other hand, may be sanctioned for being too self-effacing.

Source: Fortune Live Media, Flickr CC

Knowledge exchange on the Internet may make the sanctions for women in expert roles more plentiful. As demonstrated by the experiences of female journalists, video game enthusiasts, and women in general online, being active on the Internet carries intense risk of exposure to trolling, harassment, abuse, and misogyny. The social science research on online misogyny is also recent and plentiful.

Photo Credit: Sharon Mollerus, Flickr CC

Social media can also be a place to amplify the expertise of women or to respond to particularly egregious examples of mansplaining. And institutions like higher education and the media can continue to intervene to disrupt the social expectation that an expert is always a man. Check out the “Overlooked” obituary project for previously underappreciated scientists and thinkers, including the great sociologist Ida B. Wells.

For more on gendered confidence in specific areas, such as STEM, see more research on Gendering Intelligence.

Originally Posted at There’s Research On That

Jean Marie Maier is a graduate student in sociology at the University of Minnesota. She completed the Cultural Studies of Sport in Education MA program at the University of California, Berkeley, and looks forward to continuing research on the intersections of education, gender, and sport. Jean Marie has also worked as a Fulbright English Teaching Assistant in Gumi, South Korea and as a research intern at the American Association of University Women. She holds a BA in Political Science from Davidson College.

(View original at https://thesocietypages.org/socimages)

Planet DebianEnrico Zini: gitpython: list all files in a git commit

A little gitpython recipe to list the paths of all files in a commit:

#!/usr/bin/python3

import git
from pathlib import Path
import sys


def list_paths(root_tree, path=Path(".")):
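    """Recursively yield the path of every file (blob) in the given tree."""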
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)


repo = git.Repo(".", search_parent_directories=True)
commit = repo.commit(sys.argv[1])
for path in list_paths(commit.tree):
    print(path)

It can be a good base, for example, for writing a script that, given two git branches, shows which django migrations are in one and not in the other, without doing any git checkout of the code.
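For what it's worth, here's a rough sketch of what such a script could look like, reusing the list_paths() recipe from above. The heuristic (any .py file under a migrations/ directory, ignoring __init__.py) and the command-line interface are just assumptions for the sake of illustration:

#!/usr/bin/python3

import sys
from pathlib import Path

import git


def list_paths(root_tree, path=Path(".")):
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)


def migrations(repo, refname):
    # Collect migration modules: any .py file under a "migrations" directory,
    # ignoring the package __init__.py.
    commit = repo.commit(refname)
    return {
        str(p) for p in list_paths(commit.tree)
        if "migrations" in p.parts
        and p.suffix == ".py"
        and p.name != "__init__.py"
    }


repo = git.Repo(".", search_parent_directories=True)
branch_a, branch_b = sys.argv[1], sys.argv[2]
in_a, in_b = migrations(repo, branch_a), migrations(repo, branch_b)

for p in sorted(in_a - in_b):
    print(f"only in {branch_a}: {p}")
for p in sorted(in_b - in_a):
    print(f"only in {branch_b}: {p}")

You would run it as, say, ./migrations-diff.py master feature-branch from anywhere inside the repository (the script name and branch names here are hypothetical).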

Worse Than FailureError'd: Do Not Read The Daily WTF

Neil S. wrote, "MSN UK's reverse psychology marketing angle is pretty edgy."

 

"Batman & Lorem was a surprise flop at the box office," writes Art O.

 

Klaus asks, "Do I want to know, why Feedly associates Coffee with restrooms?"

 

"Those idiots, I clearly asked for null_-532419553!" wrote Raymond D.

 

Nigel writes, "I just hope their drugs aren't as genuine as their Windows licenses."

 

"At this point, I don't even remember how long I'd been waiting to get to this point," writes Jake B.

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianDirk Eddelbuettel: #20: Dependencies. Now with badges!

Welcome to post number twenty in the randomly redundant R rant series of posts, or R4 for short. It has been a little quiet since the previous post last June as we’ve been busy with other things but a few posts (or ideas at least) are queued.

Dependencies. We wrote about this a good year ago in post #17, which was (in part) tickled by the experience of installing one package … and getting a boatload of others pulled in. The topic and question of dependencies has seen a few posts over the year, and I won't be able to do them all justice. Josh and I have added a few links to the tinyverse.org page. The (currently) last one, by Russ Cox, titled Our Software Dependency Problem, is particularly trenchant.

And just this week the topic came up in two different, and unrelated, posts. First, in What I don't like in your repo, Oleg Kovalov lists a brief but decent number of items by which a repository can be evaluated. And one is about [b]loated dependencies, where he nails it with a quick "When I see dozens of deps in the lock file, the first question which comes to my mind is: so, am I ready to fix any failures inside any of them?" This is pretty close to what we have been saying around the tinyverse.

Second, in Beware the data science pin factory, Eric Colson brings an equation. Quoting from footnote 2: […] the number of relationships (r) grows as a function of the number of members (n) per this equation: r = (n^2 - n) / 2. Granted, this was about human coordination and ideal team size. But let's just run with it: for n=10, we get r=45, which is not so bad. For n=20, it is r=190. And for n=30 we are at r=435. You get the idea. "Big-Oh-N-squared".
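A quick sanity check of those numbers (a throwaway snippet of mine, not from the original post):

# Pairwise relationships r = (n^2 - n) / 2 for the team sizes mentioned above.
for n in (10, 20, 30):
    print(n, (n * n - n) // 2)   # prints 45, 190 and 435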

More dependencies means more edges between more nodes. Which eventually means more breakage.

Which gets us to announcement embedded in this post. A few months ago, in what still seems like a genuinely extra-clever weekend hack in an initial 100 or so lines, Edwin de Jonge put together a remarkable repo on GitLab. It combines Docker / Rocker via hourly cron jobs with deployment at netlify … giving us badges which visualize the direct as well as recursive dependencies of a package. All in about 100 lines, fully automated, autonomously running and deployed via CDN. Amazing work, for which we really need to praise him! So a big thanks to Edwin.

With these CRAN Dependency Badges being available, I have been adding them to my repos at GitHub over the last few months. As two quick examples you can see

  • Rcpp
  • RcppArmadillo

to get the idea. RcppArmadillo (or RcppEigen or many other packages) will always have one: Rcpp. But many widely-used packages such as data.table also get by with a count of zero. It is worth showing this – and the badge does just that! And I even sent a PR to the badger package: if you're into this, you can have a badge made for yours via badger::badge_dependencies(pkgname).

Otherwise, more details at Edwin’s repo and of course his actual tinyverse.netlify.com site hosting the badges. It’s easy as all other badges: reference the CRAN package, get a badge.

So if you buy into the idea that lightweight is the right weight then join us and show it via the dependency badges!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianHideki Yamane: pbuilder hack with new debootstrap option

I suddenly noticed that maybe I could also use the --cache-dir option, which I added to debootstrap some time ago, for pbuilder. So I hacked it.

> original
real    3m34.811s
user    1m6.676s
sys     0m33.051s

> use aptcache for debootstrap
real    2m52.397s
user    0m59.660s
sys     0m28.631s

It cuts about 40s off creating base.tgz. Nice, isn't it? :) I hope the pbuilder team will accept this merge request and push it to buster, since it's worthwhile for the stable release, IMHO.

,

Planet DebianBen Hutchings: Debian LTS work, February 2019

I was assigned 19.5 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from January. I worked only 4 hours and so will carry over 16.5 hours.

I backported various security fixes to Linux 3.16, but did not upload a new release yet.

CryptogramDARPA Is Developing an Open-Source Voting System

This sounds like a good development:

...a new $10 million contract the Defense Department's Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don't have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won't be asking people to blindly trust that their voting systems are secure -- as voting machine vendors currently do. Instead they'll be publishing source code for the software online and bring prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They'll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

CryptogramOn Surveillance in the Workplace

Data & Society just published a report entitled "Workplace Monitoring & Surveillance":

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.

  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer- provided health care programs, workplace wellness, and digital tracking work shifts tools. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers' ability to opt out of tracking.

  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.

  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated "nudges" or adjusting performance benchmarks based on a worker's real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned "the adoption curve for oppressive technology, which goes, 'refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'" I don't agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

CryptogramVideos and Links from the Public-Interest Technology Track at the RSA Conference

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here).

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)

  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)

  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)

  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)

  • Changing Academia: Latanya Sweeney, Harvard; Dierdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)

  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress's Travis Moore, Ford Foundation's Jenny Toomey, Citizen Lab's John Scott-Railton, Dierdre Mulligan from UC Berkeley, ACLU's Jon Callas, Matt Mitchell of TacticalTech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here's Ford Foundation's blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here's my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I'd like to do it again next year.

Planet DebianCraig Small: WordPress 5.1.1

The Debian packages for WordPress version 5.1.1 are being updated as I write this. This is a security fix for WordPress that stops comments from being used to trigger a cross-site scripting bug. It's an important one to update.

The backports should happen soon so even if you are using Debian stable you’ll be covered.

Worse Than FailureCodeSOD: Ancient Grudges

Ignoring current events, England and the rest of Europe have had a number of historical conflicts which have left grudges littered across the landscape.

Andro V lives in Norway, but is remotely collaborating with an English developer. That English developer wants their peers to know that Sweyn I and Thorkell the Tall’s invasions of southern England will never be forgotten.

/// <summary>
/// A counter that is to be read in reverse and computed in a Viking way. It usually is followed by a buffer which must be parsed in reverse order.
/// </summary>
public static int Read3F1(this BinaryReader reader, bool reverse = false)
{
    const float VIKINGS_INVADES_ENGLAND = 1009.0f;
    int counter = (reverse ? reader.ReadInt32Rev() :          
      reader.ReadInt32());
    // Debug.Assert(counter % 0x3f1 == 0, "Wrong data for 3f1 counter: 0x" + counter.ToString("X8"));
    counter = (int)Math.Round((counter / VIKINGS_INVADES_ENGLAND) - 1.0f); // The VIKINGs magic computation
    return counter;
}

The year 1009 involved a lot of Viking invasions of southern England. But, like all old grudges, it arises from an even earlier grudge: in 1002, the English king Aethelred the Unready massacred all the Danes living in England. Of course, that was in response to a few years of raids prior, and, well… it's ancient grudges and blood feuds all the way down.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianShirish Agarwal: The road to hell is paved with good intentions

First of all, I would like to share a video which I should have mentioned in the 'Celebrating National Science Day at GMRT' blog post. It's a documentary called 'The Most Unknown'. It's a great documentary, as it gives you a glimpse of how much there is yet to discover. The reason I share it is that I have seen a lot of money being removed from government research and put god knows where. Just a fair warning: this will be somewhat of a long conversation.

Almost all of IIT’s are in bad shape, in fact IIT Mumbai which I know and have often had the privilege to associate myself with has been going through tough times. This is when institutes such as IIT Mumbai, NCRA, GMRT, FTII and all such institutions have made loads of contributions in creating awareness and has given the public the ability to question rather than just ‘believe’ . For any innovation to happen, you have to question, investigate, prove, share the findings and the way you have done things so it could be reproduced.

The same goes for the social sciences, as shared in the documentary and from my brief learnings and takeaways from TISS: they too are in somewhat dire straits. I was having a conversation a few days back with a friend who is in higher education, about how IISER Pune, where the recent WordCamp happened, had to commercialize and open its doors to events in order to sustain itself. While I, and perhaps all WordCampers, will forever be grateful that they shared such a great place with us, along with a studious vibe which also influenced how the WordCamp was held, I did feel sad that we intruded into study areas which should be meant for IISER students only.

Before I get too carried away, I should point out that people should also look at some of Ian Cheney's older documentaries (he is the one who just did The Most Unknown); I have found his previous work compelling as well. The City Dark is a beautiful masterpiece and shares a lot of insights about light pollution, which India could put to good use both to improve our lighting and to reduce light pollution in the atmosphere.

Meeting with Bhakts and ‘Good intentions’

The reason I shared the above was also keeping in mind the conversations I have whenever I meet bhakts. The term bhakt comes from bhakti in Sanskrit, which at one time meant spirituality and purity, although now in politics it means one who chooses to believe in the leader and the party absolutely. Whenever a bhakt starts losing a logical argument, one of the arguments that is often meted out is that, whatever you say, you cannot doubt Mr. Narendra Modi's intentions, which is why I took the often-used proverb that is the heading of this blog post to make my point. The problem with the whole 'good intentions' part is that it's pretty much a strawman argument. The problem with intentions is that everybody can either state or mask their intentions. Even ISIS says that they want to bring back the golden phase of Islam. We have seen their actions; should we believe what they say? Or even Hitler, who said 'One people, one empire, one leader' and claimed that the Aryans were superior to the Jewish people, while history has gone on to show the exact opposite. Israel today is the eighth-biggest arms supplier in the world, our military is the second-biggest buyer of arms from them, and they are far more prosperous than us and many other countries. Their work on drip irrigation, water retention and agricultural techniques is something we could learn much from, and the same goes for manning borders and such. While I could give many such examples, the easiest example to share in the context of good intentions gone wrong is demonetisation in India, which deserves its own paragraph.

Demonetisation

Demonetisation was heralded by Mr. Modi with great fanfare. It was supposed to take out black money, but we learned later that black money didn't get wiped out; it has only become more in-your-face. The idea had been debunked by the former R.B.I. Governor Raghuram Rajan even before Mr. Narendra Modi announced demonetisation, and it has now been debunked by the R.B.I. itself. Sharing below an excerpt from the Freakonomics Radio show featuring Mr. Rajan's interview. It makes for interesting reading, or listening, as the case may be.

DUBNER: Now, shortly after your departure as governor of the R.B.I., Prime Minister Modi executed a sudden, controversial plan to abolish 500- and 1,000-rupee banknotes, hoping to crack down on the shadow economy and tax evasion. I understand you had not been in favor of that idea, correct?

RAJAN: Absolutely. It didn’t make sense. I was asked for my opinion, and I said, “Look, it is taking away money that people use in transactions. It’s going to create enormous disruption unless we replace it overnight with freshly printed money.” And it’s very important that we have all that in place, difficult to maintain secrecy, and then the fundamental sort of objective of this, which was to get people to bring out the money that they hoarded in their basements and pay taxes on them — I said, “That’s probably not going to work out, because they’re going to find ways to infuse the money back into the system without paying those taxes.”

DUBNER: It’s been roughly two years now. What have been the effects of this demonetization?

RAJAN: Well, I think more than the numbers suggest, because India was growing at that time. And we had numbers which were in the 7.5 percent growth range at that point, at a time, in 2016, when the world was actually growing quite slowly. When growth picked up in 2017, instead of going along with the world, which we typically do and we exceed world growth significantly, we went down. That suggests it had a tremendous effect on growth, but that, the numbers don’t capture it all, because what actually got killed was the informal sector — the people who were doing work with notes rather than with checks, who didn’t have formal bank accounts. And when you look at the job numbers that some private-sector people estimate 10, 12 million jobs were lost in that episode. And, of course, we haven’t recovered them yet. It was one of those places where more economic thinking would have helped.

DUBNER: Was it a coincidence that Prime Minister Modi went ahead with the plan only after you’d left?

RAJAN: Well, I can’t speak on that. I can only say that I made my objections very, very clear.

Freakonomics Podcast Stephen J. Dubner interviewing Mr. Raghuram Rajan, RBI Governor 5th September 2013 – 5th September 2017. Aired on 6th February 2019 .

I would urge people to listen to Freakonomics Radio, as there are lots of pearls of wisdom in there. There is also the 'Good ideas are not enough' episode, which is very relevant to the topic at hand, but I won't digress about Freakonomics Radio any further for now.

The interesting questions to ask, given the details known from the R.B.I., are:


a. Why did Mr. Narendra Modi feel the need to get the R.B.I.'s permission only after 38 days?


b. If Mr. Modi was confident of the end result, then shouldn't he, instead of asking for permission, have had the PMO take all the responsibility?


In any case, as was seen from the R.B.I.'s counting, only 0.3% of the money did not come back, even though many people's valid claims were thrown out, and the expense of the whole exercise was much more than the shortfall: the R.B.I. didn't get back INR 10k crore, while it spent INR 13k crore on the new currency. Does anybody see any saving here?

The bhakts' counter-argument is that the bankers were bad: if everybody had done their work, it would all have worked out as Mr. Modi wanted. The statement itself implies that they didn't know the reality. Even if we take at face value the claim that all the bankers were cheaters (which I don't agree with at all), didn't they know it during all those years they were in opposition? Where was the party's economic intelligence; didn't it tell them anything in so many years in opposition? This is what an opposition should be doing: knowing the state of the economy and how things work, to say the least.

There is also this https://www.scribd.com/document/401570379/Minutes-of-RBI-s-board-meeting-on-demonetisation

These are minutes obtained by Venkatesh Nayak under the RTI tool.

To rub salt into the wounds, the IPP is now at a low of 1.7 percent as well 😦 . As always, I'm sure the BJP will say these are not the final numbers.

I am curious to know what RSS people would think or make of this video –

https://www.youtube.com/watch?v=DYKKrnG26YA

The other line people use when they are unable to win an argument is 'I don't know in detail, why don't you come to the shakha and meet our pracharak'. Pracharak, while a Hindi word that used to mean a wise man disseminating knowledge, in RSS-speak means a political spinner. In style and mannerisms they are very close to born-again Jesuits or missionaries, the forceful ones who don't want to have a meaningful conversation. The most interesting video on this topic can be seen on Twitter

https://twitter.com/akashbanerjee/status/1105309181240885248/

Important Legislations which need Public Scrutiny

The other interesting development, or rather regression, which I and probably many tech-savvy Indians have noted, is the total lack of comment from Mozilla on any of the new Internet regulations: the draft intermediary rules, the Aadhaar Amendment Bill, 2019 (which was passed recently), the draft e-commerce bill, and the Data Protection Bill, all of which are important pieces of legislation which need and needed careful study. While the Government of India isn't going to do anything apart from asking for comments, people should have come forward and built better systems. One of the things that any social group could do is run either a Stet or a co-ment instance, so it could capture people's comments and also mail them from there to MeitY.

The BJP Site hack

Last but not least was the BJP site hack; it is now the ninth day and the site is still under maintenance. We know it was a hack because a meme went viral on the web. Elliot, in his inimitable style, also shared how they should have backed up their site. In an unrelated event, I was attending a devops event where how web apps and websites should be built was discussed. It's not rocket science; if any of the people involved had looked up 'high availability' they would have found loads of web pages and links explaining how to be secure and still serve content. Apparently, the attackers did not just take the BJP site data but probably also donor data (people who have donated wealth to the BJP). If it's a real hack, there is a high possibility that crackers or foreign agents could hold the BJP to ransom and have dominion over India if the BJP comes back to power. While I do hope such a scenario doesn't play out, you never know. I will probably write about the devops event some other time, as there was much happening there and it deserves its own blog post.

On the other hand, it could be a ploy to tell the EC (Election Commission) 'we don't have the data, our website got hacked' when the EC asks where they got the funding for the elections.

In either way it doesn’t seem a good time for either BJP or even India as a whole if it has such weak I.T. Government 😦

Frankly speaking, when I first heard of it, I thought that hopefully they wouldn't have put their donor details on the same server and that they would surely be taking backups. I chided myself for thinking such stupid thoughts. A guy like me could be sloppy with his backups due to time and budget constraints, but a political party like the BJP, which has unbridled wealth, wouldn't make such rookie mistakes and would have the best technical talent available. Many BJP well-wishers were thinking they would be able to punish the culprit, and I had to tell them he could use any number of tools to hide his real identity: a VPN, Tor, or even plain old IP spoofing. The possibilities are just endless. And this is just me speaking, an infosec rookie or baby.

The other interesting part of this hack is that, to date, the BJP has neither acknowledged the hack nor shared what went wrong. I think we have become too comfortable with, and too used to, hacks at Reddit, Gmail and Twitter, where the developers feel that users should know the extent of the hack, what was lost and not lost, what they are doing to recover, and when services can be expected to resume, followed later by a full disclosure report on what they could determine. Of course, there could be disinformation in such disclosures as well, but that would have been a better way to respond than how the BJP IT Cell has responded. Not something I would expect from a supposedly well-oiled structure.

Update – 14/03/19 – It seems Facebook, Instagram and WhatsApp are down. Also see how Facebook responded on Twitter. The Indian Express also ran an article on the recent BJP hack. We will hopefully know what happened in the next few days.

Krebs on SecurityAd Network Sizmek Probes Account Breach

Online advertising firm Sizmek Inc. [NASDAQ: SZMK] says it is investigating a security incident in which a hacker was reselling access to a user account with the ability to modify ads and analytics for a number of big-name advertisers.

In a recent posting to a Russian-language cybercrime forum, an individual who’s been known to sell access to hacked online accounts kicked off an auction for “the admin panel of a big American ad platform.”

“You can add new users to the ad system, edit existing ones and ad offers,” the seller wrote. The starting bid was $800.

The seller included several screen shots of the ad company’s user panel. A few minutes on LinkedIn showed that many of these people are current or former employees of Sizmek.

The seller also shared a screenshot of the ad network’s Alexa site rankings:

A screenshot of the Alexa ranking for the “big American ad network,” access to which was sold on a cybercrime forum.

I checked Sizmek’s Alexa page and at the time it almost mirrored the statistics shown in the screenshot above. Sizmek’s own marketing boilerplate says the company operates its ad platform in more than 70 countries, connecting more than 20,000 advertisers and 3,600 agencies to audiences around the world. The company is listed by market analysis firm Datanyze.com as the world’s third-largest ad server network.

After reaching out to a number of folks at Sizmek, I heard back from George Pappachen, the company’s general counsel.

Pappachen said the account being resold on the dark web is a regular user account (not an all-powerful administrator account, despite the seller’s claim) for its Sizmek Advertising Suite (SAS). Pappachen described Sizmek’s SAS product line as “a sizable and important one” for the company and a relatively new platform that has hundreds of users.

He acknowledged that the purloined account had the ability to add or modify the advertising creatives that get run on customer ad campaigns. And Sizmek is used in ad campaigns for some of the biggest brands out there. Some of the companies shown in the screenshot of the panel shared by the dark web seller include PR firm Fleishman-Hillard, media giants Fox Broadcasting, Gannett, and Hearst Digital, as well as Kohler, and Pandora.

A screenshot shared by the dark web seller. Portions of this panel — access to a Sizmek user account — were likely translated by the Chrome Web browser, which has a built-in page translate function. As seen here, that function tends to translate items in the frame of the panel, but it leaves untouched the data inside those frames.

Crooks who exploited this access could hijack existing ad campaigns running on some of the world’s top online properties, by inserting malicious scripts into the HTML code of ads that run on popular sites. Or they could hijack referral commissions destined for others and otherwise siphon ad profits from the system.

“Or someone who is looking to sabotage our systems in a bigger way or allow malicious code to enter our systems,” Pappachen offered.

Pappachen said Sizmek forced a password reset on all internal employees (“a few hundred”), and that the company is scrubbing its SAS user database for departed employees, partners and vendors whose accounts may have been hijacked.

“We’re now doing some level of screening to see if there’s been any kind of intrusion we can detect,” Pappachen said. “It seemed like [the screenshots were accounts from] past employees. I think there were even a couple of vendors that had access to the system previously.”

The Sizmek incident carries a few lessons. For starters, it seems like an awful lot of people at Sizmek had access to sensitive controls and data a good deal longer than they should have. User inventory and management is a sometimes painful but very necessary ongoing security process at any mature organization.

Best practices in this space call for actively monitoring all accounts — users and admins — for signs of misuse or unauthorized access. And when employees or vendors sever business ties, terminate their access immediately.
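
As a rough illustration of that inventory step, here is a minimal Python sketch of the kind of check an organization could automate: diff the set of accounts that can still log in against an authoritative roster of current employees, partners and vendors, and flag the leftovers for removal. All identifiers are made up, and this is a sketch of the idea only, not a description of Sizmek's systems.

# Minimal sketch of the "user inventory" idea above: compare the accounts that
# can still log in against an authoritative roster of current people and flag
# anything that should have been deprovisioned. All identifiers are made up.
def stale_accounts(active_accounts, current_people):
    """Return accounts with access that no longer map to a current person."""
    return set(active_accounts) - set(current_people)

active = {"alice@corp.example", "bob@corp.example", "old-vendor@partner.example"}
roster = {"alice@corp.example", "bob@corp.example"}

assert stale_accounts(active, roster) == {"old-vendor@partner.example"}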

Pappachen asked KrebsOnSecurity what else could have prevented this. I suggested some form of mobile-based multi-factor authentication option would prevent stolen credentials from turning into instant access. He said the company does use app/mobile based authentication for several of its new products and some internal programs, but allowed that “the legacy ones probably did not have this feature.”

PASSWORD SPRAYING

It’s not clear how this miscreant got access to Sizmek’s systems. But it is clear that attackers have moved rapidly of late toward targeting employees in key roles at companies they’d like to infiltrate, and they’re automating the guessing of passwords for employee accounts. One popular version of this attack involves what’s known as “password spraying,” which attempts to access a large number of accounts (usernames/email addresses) with a few commonly used passwords.

There are technologies like CAPTCHAs — requiring the user to solve an image challenge or retype squiggly letters — which try to weed out automated bot programs from humans. Then again, password spraying attacks often are conducted “low and slow” to help evade these types of bot challenges.
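
To make the pattern concrete, here is a small, hypothetical Python sketch of the detection side: it flags source IPs that rack up failed logins against many different usernames inside a time window, which is the signature of a spray rather than a brute force against a single account. The event format and thresholds are invented for the example, and, as noted above, a "low and slow" spray spread across many source addresses would slip past a simple per-IP counter like this.

# Hypothetical detector for password spraying: many *different* usernames
# failing from the same source IP within a short window. The event format
# (timestamp, username, source_ip) and the thresholds are assumptions made
# up for this sketch.
from collections import defaultdict
from datetime import timedelta

def flag_spraying(events, window=timedelta(hours=1), min_distinct_users=20):
    events = sorted(events, key=lambda e: e[0])
    by_ip = defaultdict(list)
    suspects = set()
    for ts, user, ip in events:
        bucket = by_ip[ip]
        bucket.append((ts, user))
        while bucket and ts - bucket[0][0] > window:   # slide the window forward
            bucket.pop(0)
        if len({u for _, u in bucket}) >= min_distinct_users:
            suspects.add(ip)
    return suspects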

Password spraying was suspected in a compromise reported last week at Citrix, which said it heard from the FBI on March 6 that attackers had successfully compromised multiple Citrix employee accounts. A little-known security company, Resecurity, claimed it had evidence that Iranian hackers were responsible, had been in Citrix’s network for years, and had offloaded terabytes of data.

Resecurity drew criticism from many in the security community for not sharing enough evidence of the attacks. But earlier this week the company updated its blog post to include several Internet addresses and proxies it says the attackers used in the Citrix campaign.

Resecurity also presented evidence that it notified Citrix of the breach as early as Dec. 28, 2018. Citrix initially denied that claim, but has since acknowledged that it did receive a notification from Resecurity on Dec. 28. Citrix has declined to comment further beyond saying it is still investigating the matter.

BRUTE-FORCE LIGHT

If anything, password spraying is a fairly crude, if sometimes marginally effective attack tool. But what we’ve started to see more of over the past year has been what one might call “brute-force light” attacks on accounts. A source who has visibility into a botnet of Internet of Things devices that is being mostly used for credential stuffing attacks said he’s seeing the attackers use distributed, hacked systems like routers, security cameras and digital video recorders to anonymize their repeated queries.

This source noticed that the automated system used by the IoT botmasters typically will try several dozen variations on a password that each target had previously used at another site — adding a “1” or an exclamation point at the end of a password, or capitalizing the first letter of whole words in previous passwords, and so on.

The idea behind this method is to snare not only users who are wholesale re-using the same password across multiple sites, but also to catch users who may just be re-using slight variations of the same password.

This form of credential stuffing is brilliant from the attacker’s perspective because it probably nets him quite a few more correct guesses than normal password spraying techniques.

It’s also smart because it borrows from human nature. Let’s say your average password re-user is in the habit of recycling the password “monkeybutt.” But then he gets to a site that wants him to use capitalization in his password to create an account. So what does this user pick? Yes, “Monkeybutt.” Or “Monkeybutt1”. You get the picture.
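
Turned around and used defensively, the same mangling rules can feed a password-change check: reject any new password that is just a trivial variation of one already known to be compromised. The Python sketch below is hypothetical and deliberately small; its rule list mirrors the tricks described above (capitalize the first letter, tack on a "1" or an "!") and is nowhere near exhaustive.

# Hypothetical defensive use of the "brute-force light" rules described above:
# treat a new password as unacceptable if it is only a trivial variation of an
# old or breached one. The rule list is a small, illustrative subset.
def trivial_variants(base):
    variants = {base, base.capitalize(), base.upper()}
    for suffix in ("1", "!", "1!", "123"):
        variants |= {v + suffix for v in list(variants)}
    return variants

def is_trivial_reuse(new_password, old_password):
    return new_password in trivial_variants(old_password)

assert is_trivial_reuse("Monkeybutt", "monkeybutt")
assert is_trivial_reuse("Monkeybutt1", "monkeybutt")
assert not is_trivial_reuse("correct horse battery staple", "monkeybutt")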

There’s an old saying in security: “Everyone gets penetration tested, whether or not they pay someone for the pleasure.” It’s kind of like that with companies and their users and passwords. How would your organization hold up to a password spraying or brute-force light attack? If you don’t know, you should probably find out, and then act on the results accordingly. I guarantee you the bad guys are going to find out even if you don’t.

TEDTED announces Carla Zanoni as first Director of Audience Development

TED, the nonprofit organization devoted to Ideas Worth Spreading, has tapped Carla Zanoni as its first-ever Director of Audience Development, effective April 1, 2019. Formerly the Editor of Audience and Analytics at the Wall Street Journal, Zanoni will lead TED’s audience acquisition and growth strategies across its global, multi-channel footprint, with an emphasis on expanding analytics, social media and digital community development. Zanoni will report to Colin Helms, TED’s Head of Media.

“With an audience reach of over 120 million people worldwide, TED has built an incredible community centered around watching, listening, sharing and discussing powerful ideas,” said Helms. “We’re evolving from simply being known for ‘TED talks’ to a multifaceted ideas platform that includes a half-dozen hit podcasts, thousands of community-organized TEDx events, and a growing library of over 100,000 talks. This is in addition to animated TED-Ed videos and original short-form shows. With the exponential growth of our content library, it’s become vital that we deepen our audience relationships and empower their discovery of ideas worth spreading. We’re thrilled to have Carla join TED and help us imagine the future of our globally connected community.”

“TED knows audience inside out, and they know how to grow community,” said Zanoni. “I am inspired to lead the charge of this next era of their audience engagement — and to create new ways for us to come together, which is vital in today’s divided landscape. I’m thrilled to join the visionary and thoughtful team at TED.”

Zanoni brings more than a decade of experience in audience development. Prior to joining TED, she was the first global Audience & Analytics Editor to be named on the masthead of the Wall Street Journal, where she worked to transform the newsroom to be data-informed in its daily work and strategic decisions. During her tenure, she created and led the audience engagement, development, data analytics and emerging media team focused on diversifying and growing the Journal’s readership. She also launched the Wall Street Journal on multiple storytelling platforms including Snapchat Discover, the Facebook Messenger bot and Amazon Echo.

Zanoni previously led national digital and social strategy at DNAinfo.com. She wrote for numerous regional and national publications and helped launch the first newspaper dedicated to New York City politics (now called City and State). Zanoni is a graduate of Columbia University’s School of General Studies and School of Journalism. She is working on her first book.

Planet DebianJo Shields: Too many cores

Arming yourself

ARM is important for us. It’s important for IoT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which begged the question “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
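
For reference, those knobs surface through procfs in the usual way, so toggling them can be scripted. The Python sketch below assumes the standard sysctl-to-/proc/sys path mapping for the names given above, and assumes 0 means off and 1 means emulate; check your kernel's documentation before relying on either assumption, and note that writing these files requires root.

# Sketch: toggle the deprecated-instruction emulation sysctls named above via
# /proc/sys/abi/*. The path mapping follows the usual sysctl convention; the
# value meanings (0 = off, 1 = emulate) are an assumption here, so check the
# kernel docs for your version. Writing these files requires root.
from pathlib import Path

KNOBS = ("cp15_barrier", "setend", "swp")

def read_emulation():
    return {k: Path("/proc/sys/abi", k).read_text().strip()
            for k in KNOBS if Path("/proc/sys/abi", k).exists()}

def set_emulation(enabled):
    for k in KNOBS:
        p = Path("/proc/sys/abi", k)
        if p.exists():                 # not every kernel exposes every knob
            p.write_text("1\n" if enabled else "0\n")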

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this simply wasn’t a needed step. The VM’s host OS didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #202

Here’s what happened in the Reproducible Builds effort between Sunday March 3 and Saturday March 9 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

Chris Lamb uploaded version 113 to Debian unstable fixing a long list of issues. It included contributions already covered in previous weeks as well as new ones by Chris, including:

  • Provide explicit help when the libarchive system package is missing or “incomplete”. (#50)
  • Explicitly mention when the guestfs module is missing at runtime and we are falling back to a binary diff. (#45)

Vagrant Cascadian made the corresponding update to GNU Guix. []

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made the following improvements:

  • Analyse node maintenance job runs to determine whether to mark nodes offline. []
  • Detect hanging health check runs, not just failed ones. []
  • Allow members of the jenkins UNIX group to sudo(8) to the jenkins user [] and simplify adding users to said group [].
  • Improve the “SHA1 checker” script to deal with packages with more than one version [] and to re-download buildinfo.debian.net’s files if they are older than two weeks. []
  • Node maintenance. [][][][]
  • In the version checker, correctly deal with a rare situation when several, say, diffoscope versions are available in one Debian suite at the same time. []

In addition, Alexander “lynxis” Couzens, made a number of changes to our OpenWrt support, including:

  • Add OpenWrt support to our database. []
  • Adding a reproducible_openwrt_package_parser.py script. []
  • Strip unreproducible certificates from images. []

Outreachy

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy. Outreachy provides internships to work on free software. Internships are open to applicants around the world; interns work remotely and are not required to move. Interns are paid a stipend of $5,500 for the three-month internship and have an additional $500 travel stipend to attend conferences/events.

So far, we received more than ten initial requests from candidates. The closing date for applicants is April 2nd. More information is available on the application page.


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

CryptogramJudging Facebook's Privacy Shift

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that "frankly we don't currently have a strong reputation for building privacy protective services."

There is ample reason to question Zuckerberg's pronouncement: The company has made -- and broken -- many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook's surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details -- and Zuckerberg's post provides none. But we'll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app accesses to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn't go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.

Better -- and more usable -- privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook's advertisers, which are the company's real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it's hard to understand the results of different options. But much of this is deliberate; Facebook doesn't want its users to make their data private from other users.

The company could give people better control over how -- and whether -- their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don't mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. "Facebook stalking" is often thought of as "stalking light," or "harmless." But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook's real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone -- not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook's business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing "sharing" to discussing "transfers," "access to raw information," and "access to derived information" would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, implemented by thousands of low-paid workers quickly implementing unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs be more transparent about its processes. It should allow regulators and the public to audit the company's practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users' lives. On the other hand, WhatsApp -- purchased by Facebook in 2014 -- provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp's security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn't just collect data about you when you're on the platform. Because its "like" button is on so many other pages, the company can collect data about you when you're not on Facebook. It even collects what it calls "shadow profiles" -- data about you even if you're not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook's current team has been focused on growth and revenue. Its one chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It's unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook's posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians' offices. It has secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook's core business interests. Advertising is its business model, and targeted ads sell better and more profitably -- and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don't expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook's leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there's plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft's then newfound commitment to security.

Worse Than FailureHow It's Made

People like hot dogs until they see how it's made. Most people don't ask, because they don't want to know; they'd rather keep eating hot dogs. In software, sometimes we have to ask. It's not just about solving problems: what scares some programmers is the knowledge that their car's software might be little more than the equivalent of driving duct-taped toothpicks down the highway at 70MPH. Our entire field is bad at what we do.

Brett worked as a system analyst for a medical research institution, MedStitute. MedStitute used proprietary software for data storage and analysis, called MedTech. Doctors and researchers like MedTech's results, but Brett and his co-worker Tyree know how it's made.

The software has no backend access, and all software development happens in a "click-to-program" GUI. The GUI looks like it was built by someone who learned to code by copy/pasting from 1990s-era websites, watching ten minutes of Jurassic Park, and searching StackOverflow until something compiled. The "language" shows the same careful design philosophy. Every if must have an else. Some modules use booleans; some return an empty string to represent false values. The documentation is unclear about which modules do which. Essentially, every if statement becomes three statements.

Brett needed to launch a new study. A study depends on some basic set of statistics and groups patients based on a randomized variable. Brett looked through the list of variables he could randomize on, and the one he wanted was missing. Brett assumed he made a mistake, and went back a few screens to check the name, copying it down for reference. He went back to the list of randomizable variables. It wasn't there. He looked closer at the list. He noticed that the list of randomized variables only included data from multiple-choice fields. The field he wanted to randomize on was based on a calculated field.

Brett knew that Tyree had worked on another project that randomized on a calculated field, so he messaged Tyree on Slack. "How did you code this random variable? In Medtech it won't let you?"

"I'm on a conference call, let me call you afterward," Tyree wrote.

A few minutes later, Tyree called Brett.

"What you have to do is start with two fields. Let's call it $variable_choice, that's a multiple choice question, and $variable_calced that's your calculated field. When you want to create a variable that randomly selects based on your calculated field, you start by telling Medtech that this random variable is based on $variable_choice. Then you delete $variable_choice, and then rename $variable_calced to be $variable_choice."

"Wait, they allow you to do that, but don't allow you to randomize calculated fields any other way? And they don't check?"

"Hopefully, they don't decide to start checking before this project is over," Tyree said.

"This study is supposed to go on for ten years. This project succeeding comes down to them never treating this workaround as a bug?"

"It was the only solution I could find. Let me know if you need anything else?"

Brett wasn't completely satisfied with the hack and went back to the documentation. He found a "better" solution: he could make a read-only multiple-choice field with only one choice, the value of the calculated field, as the default answer. Unfortunately, it was possible that the user would alter the list unintentionally by answering the multiple-choice question before the calculated field was evaluated.

Ultimately, the only choice left to Brett was to take his lunch break, go to the cafeteria, and order two hot dogs.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityPatch Tuesday, March 2019 Edition

Microsoft on Tuesday pushed out software updates to fix more than five dozen security vulnerabilities in its Windows operating systems, Internet Explorer, Edge, Office and Sharepoint. If you (ab)use Microsoft products, it’s time once again to start thinking about getting your patches on. Malware or bad guys can remotely exploit roughly one-quarter of the flaws fixed in today’s patch batch without any help from users.

One interesting patch from Microsoft this week comes in response to a zero-day vulnerability (CVE-2019-0797) reported by researchers at Kaspersky Lab, who discovered the bug could be (and is being) exploited to install malicious software.

Microsoft also addressed a zero day flaw (CVE-2019-0808) in Windows 7 and Windows Server 2008 that’s been abused in conjunction with a previously unknown weakness (CVE-2019-5786) in Google’s Chrome browser. A security alert from Google last week said attackers were chaining the Windows and Chrome vulnerabilities to drop malicious code onto vulnerable systems.

If you use Chrome, take a moment to make sure you have this update and that there isn’t an arrow to the right of your Chrome address bar signifying the availability of a new update. If there is, close out and restart the browser; it should restore whatever windows you have open on restart.

This is the third month in a row Microsoft has released patches to fix high-severity, critical flaws in the Windows component responsible for assigning Internet addresses to host computers (a.k.a. “Windows DHCP client”).

These are severe “receive a bad packet of data and get owned” type vulnerabilities. But Allan Liska, senior solutions architect at security firm Recorded Future, says DHCP vulnerabilities are often difficult to take advantage of, and the access needed to do so generally means there are easier ways to deploy malware.

The bulk of the remaining critical bugs fixed this month reside in Internet Explorer, Edge and Office. All told, not the craziest Patch Tuesday. Even Adobe’s given us a month off (or at least a week) patching critical Flash Player bugs: The Flash player update shipped this week includes non-security updates.

Staying up-to-date on Windows patches is good. Updating only after you’ve backed up your important data and files is even better. A good backup means you’re not pulling your hair out if the odd buggy patch causes problems booting the system.

Windows 10 likes to install patches all in one go and reboot your computer on its own schedule. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Further reading:

Qualys

SANS Internet Storm Center

Ask Woody

ZDNet

,

Planet DebianKees Cook: security things in Linux v5.0

Previously: v4.20.

Linux kernel v5.0 was released last week! Looking through the changes, here are some security-related things I found interesting:

read-only linear mapping, arm64
While x86 has had a read-only linear mapping (or “Low Kernel Mapping” as shown in /sys/kernel/debug/page_tables/kernel under CONFIG_X86_PTDUMP=y) for a while, Ard Biesheuvel has added them to arm64 now. This means that ranges in the linear mapping that contain executable code (e.g. modules, JIT, etc), are not directly writable any more by attackers. On arm64, this is visible as “Linear mapping” in /sys/kernel/debug/kernel_page_tables under CONFIG_ARM64_PTDUMP=y, where you can now see the page-level granularity:

---[ Linear mapping ]---
...
0xffffb07cfc402000-0xffffb07cfc403000    4K PTE   ro NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc403000-0xffffb07cfc4d0000  820K PTE   RW NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc4d0000-0xffffb07cfc4d1000    4K PTE   ro NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc4d1000-0xffffb07cfc79d000 2864K PTE   RW NX SHD AF NG    UXN MEM/NORMAL

per-task stack canary, arm
ARM has supported stack buffer overflow protection for a long time (currently via the compiler’s -fstack-protector-strong option). However, on ARM, the compiler uses a global variable for comparing the canary value, __stack_chk_guard. This meant that everywhere in the kernel needed to use the same canary value. If an attacker could expose a canary value in one task, it could be spoofed during a buffer overflow in another task. On x86, the canary is in Thread Local Storage (TLS, defined as %gs:20 on 32-bit and %gs:40 on 64-bit), which means it’s possible to have a different canary for every task since the %gs segment points to per-task structures. To solve this for ARM, Ard Biesheuvel built a GCC plugin to replace the global canary checking code with a per-task relative reference to a new canary in struct thread_info. As he describes in his blog post, the plugin results in replacing:

8010fad8:       e30c4488        movw    r4, #50312      ; 0xc488
8010fadc:       e34840d0        movt    r4, #32976      ; 0x80d0
...
8010fb1c:       e51b2030        ldr     r2, [fp, #-48]  ; 0xffffffd0
8010fb20:       e5943000        ldr     r3, [r4]
8010fb24:       e1520003        cmp     r2, r3
8010fb28:       1a000020        bne     8010fbb0
...
8010fbb0:       eb006738        bl      80129898 <__stack_chk_fail>

with:

8010fc18:       e1a0300d        mov     r3, sp
8010fc1c:       e3c34d7f        bic     r4, r3, #8128   ; 0x1fc0
...
8010fc60:       e51b2030        ldr     r2, [fp, #-48]  ; 0xffffffd0
8010fc64:       e5943018        ldr     r3, [r4, #24]
8010fc68:       e1520003        cmp     r2, r3
8010fc6c:       1a000020        bne     8010fcf4
...
8010fcf4:       eb006757        bl      80129a58 <__stack_chk_fail>

r2 holds the canary saved on the stack and r3 the known-good canary to check against. In the former, r3 is loaded through r4 at a fixed address (0x80d0c488, which “readelf -s vmlinux” confirms is the global __stack_chk_guard). In the latter, it’s coming from offset 0x24 in struct thread_info (which “pahole -C thread_info vmlinux” confirms is the “stack_canary” field).

per-task stack canary, arm64
The lack of per-task canary existed on arm64 too. Ard Biesheuvel solved this differently by coordinating with GCC developer Ramana Radhakrishnan to add support for a register-based offset option (specifically “-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=...“). With this feature, the canary can be found relative to sp_el0, since that register holds the pointer to the struct task_struct, which contains the canary. I’m hoping there will be a workable Clang solution soon too (for this and 32-bit ARM). (And it’s also worth noting that, unfortunately, this support isn’t yet in a released version of GCC. It’s expected for 9.0, likely this coming May.)

top-byte-ignore, arm64
Andrey Konovalov has been laying the groundwork with his Top Byte Ignore (TBI) series which will also help support ARMv8.3’s Pointer Authentication (PAC) and ARMv8.5’s Memory Tagging (MTE). While TBI technically conflicts with PAC, both rely on using “non-VA-space” (Virtual Address) bits in memory addresses, and getting the kernel ready to deal with ignoring non-VA bits. PAC stores signatures for checking things like return addresses on the stack or stored function pointers on heap, both to stop overwrites of control flow information. MTE stores a “tag” (or, depending on your dialect, a “color” or “version”) to mark separate memory allocation regions to stop use-after-free and linear overflows. For either of these to work, the CPU has to be put into some form of the TBI addressing mode (though for MTE, it’ll be a “check the tag” mode), otherwise the addresses would resolve into totally the wrong place in memory. Even without PAC and MTE, this byte can be used to store bits that can be checked by software (which is what the rest of Andrey’s series does: adding this logic to speed up KASan).
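
As a toy illustration of what "non-VA-space bits" means in practice, the Python sketch below stashes a tag in the top byte of a 64-bit pointer value and masks it off again before use, which is roughly the bookkeeping TBI lets the hardware do for free. The 8-bit tag / 56-bit address split is chosen purely for illustration and is not a claim about any particular kernel configuration.

# Toy model of the top-byte idea: keep a tag in bits 56-63 of a 64-bit pointer
# value and strip it before the address is used. The exact split is assumed
# for illustration only.
TAG_SHIFT = 56
ADDR_MASK = (1 << TAG_SHIFT) - 1        # low 56 bits carry the address

def tag_pointer(addr, tag):
    assert 0 <= tag < 256
    return (addr & ADDR_MASK) | (tag << TAG_SHIFT)

def untag_pointer(tagged):
    return tagged & ADDR_MASK           # the "ignore" in Top Byte Ignore

def pointer_tag(tagged):
    return tagged >> TAG_SHIFT

p = tag_pointer(0x00007F00DEADBEEF, 0x2A)
assert untag_pointer(p) == 0x00007F00DEADBEEF
assert pointer_tag(p) == 0x2A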

ongoing: implicit fall-through removal
An area of active work in the kernel is the removal of all implicit fall-through in switch statements. While the C language has a statement to indicate the end of a switch case (“break“), it doesn’t have a statement to indicate that execution should fall through to the next case statement (just the lack of a “break” is used to indicate it should fall through — but this is not always the case), and such “implicit fall-through” may lead to bugs. Gustavo Silva has been the driving force behind fixing these since at least v4.14, with well over 300 patches on the topic alone (and over 20 missing break statements found and fixed as a result of the work). The goal is to be able to add -Wimplicit-fallthrough to the build so that the kernel will stay entirely free of this class of bug going forward. From roughly 2300 warnings, the kernel is now down to about 200. It’s also worth noting that with Stephen Rothwell’s help, this bug has been kept out of linux-next by him sending warning emails to any tree maintainers where a new instance is introduced (for example, here’s a bug introduced on Feb 20th and fixed on Feb 21st).

ongoing: refcount_t conversions
There also continues to be work converting reference counters from atomic_t to refcount_t so they can gain overflow protections. There have been 18 more conversions since v4.15 from Elena Reshetova, Trond Myklebust, Kirill Tkhai, Eric Biggers, and Björn Töpel. While there are more complex cases, the minimum goal is to reduce the Coccinelle warnings from scripts/coccinelle/api/atomic_as_refcounter.cocci to zero. As of v5.0, there are 131 warnings, with the bulk of the remaining areas in fs/ (49), drivers/ (41), and kernel/ (21).
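
To show what the overflow protection buys in this context, here is a conceptual userspace analogue in Python: a counter that saturates instead of wrapping when it hits its ceiling, so an over-incremented reference count degrades into a memory leak rather than a premature free. It models the idea only; it is not the kernel's refcount_t implementation.

# Conceptual analogue of refcount_t's overflow protection: saturate instead of
# wrapping, so an over-incremented object is leaked rather than freed early.
# This models the idea only; it is not the kernel implementation.
REFCOUNT_MAX = 2**31 - 1

class RefCount:
    def __init__(self, value=1):
        self.value = value
        self.saturated = False

    def inc(self):
        if self.saturated or self.value >= REFCOUNT_MAX:
            self.saturated = True          # stick here forever, never wrap
            return
        self.value += 1

    def dec_and_test(self):
        """Return True when the last reference is dropped and freeing is safe."""
        if self.saturated:
            return False                   # saturated objects are never freed
        self.value -= 1
        return self.value == 0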

userspace PAC, arm64
Mark Rutland and Kristina Martsenko enabled kernel support for ARMv8.3 PAC in userspace. As mentioned earlier about PAC, this will give userspace the ability to block a wide variety of function pointer overwrites by “signing” function pointers before storing them to memory. The kernel manages the keys (i.e. selects random keys and sets them up), but it’s up to userspace to detect and use the new CPU instructions. The “paca” and “pacg” flags will be visible in /proc/cpuinfo for CPUs that support it.

platform keyring
Nayna Jain introduced the trusted platform keyring, which cannot be updated by userspace. This can be used to verify platform or boot-time things like firmware, initramfs, or kexec kernel signatures, etc.

Edit: added userspace PAC and platform keyring, suggested by Alexander Popov
Edit: tried to clarify TBI vs PAC vs MTE

That’s it for now; please let me know if I missed anything. The v5.1 merge window is open, so off we go! :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Cory DoctorowI’m going out on tour with my new science fiction book RADICALIZED and I hope to see you!

Radicalized is my next science fiction book, out on March 18 from Tor Books: it contains four novellas about the hope and misery of our moment, from refugees resisting life in an automated IoT hell to health care executives being targeted by suicide bombers who have been traumatized by watching their loved ones die after being denied care. Tor Books is sending me on tour with the book in the US and Canada and I hope you can make it to one of my stops!

I’ll be doing more travel with the book to places like Berlin, Halifax and Toronto later in the year — check my upcoming appearances page for more.

Monday, March 18
Barnes and Noble The Grove with Lexi Alexander
Los Angeles, CA
7pm

Tuesday, March 19
Mysterious Galaxy
San Diego, CA
7:30pm

Wednesday, March 20
The Strand with Anand Giridharadas
NYC
7pm

Thursday, March 21
Toronto Reference Library
Toronto, Canada
7pm

Saturday, March 23
Panels/Signings at C2E2
Chicago, IL

Monday, March 25
Berkeley Arts and Letters at Hillside Club with Richard Kadrey
Berkeley, CA
7:30pm

Tuesday, March 26
Fort Vancouver Regional Library held at Clark College Campus
Fort Vancouver, WA
7pm

Thursday, March 28
Seattle Public Library
Seattle, WA
7pm

See Cory Doctorow on Tour for Radicalized! [Tor.com]

Planet DebianDaniel Lange: Wiping harddisks in 2019

Wiping hard disks is part of my company's policy when returning servers. No exceptions.

Good providers will wipe what they have received back from a customer, but we don't trust that as the hosting / cloud business is under constant budget-pressure and cutting corners (wipefs) is a likely consequence.

With modern SSDs there is "security erase" (man hdparm or see the - as always well maintained - Arch wiki) which is useful if the device is encrypt-by-default. These devices basically "forget" the encryption key but it also means trusting the devices' implementation security. Which doesn't seem warranted. Still after wiping and trimming, a secure erase can't be a bad idea :-).

Still, there are four things to be aware of when wiping modern hard disks:

  1. Don't forget to add bs=4096 (blocksize) to dd as it will still default to 512 bytes and that makes writing even zeros less than half the maximum possible speed. SSDs may benefit from larger block sizes matched to their flash page structure. These are usually 128kB, 256kB, 512kB, 1MB, 2MB and 4MB these days.1
  2. All disks can usually be written to in parallel. screen is your friend.
  3. The write speed varies greatly by disk region, so use 2 hours per TB per wipe pass as a conservative estimate. This is better than extrapolating what you see initially in the fastest region of a spinning disk.
  4. The disks have become huge (we run 12TB disks in production now) but the write speed is still somewhere between 100 MB/s and 300 MB/s. So wiping servers on the last day before returning them is not possible anymore with disks larger than 4 TB each (and three passes). Or with 12 TB and one pass (where e.g. fully encrypted content allows you to just do a final zero-wipe).

hard disk size    one pass    three passes
1 TB              2 h         6 h
2 TB              4 h         12 h
3 TB              6 h         18 h
4 TB              8 h         24 h (one day)
5 TB              10 h        30 h
6 TB              12 h        36 h
8 TB              16 h        48 h (two days)
10 TB             20 h        60 h
12 TB             24 h        72 h (three days)
14 TB             28 h        84 h
16 TB             32 h        96 h (four days)
18 TB             36 h        108 h
20 TB             40 h        120 h (five days)
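
The table follows directly from the conservative rule of thumb above (roughly 2 hours per TB per pass, i.e. about 140 MB/s sustained); a few lines of Python reproduce it, which is handy if you want to plug in your own measured throughput instead.

# Reproduce the wipe-time table from the rule of thumb above:
# about 2 hours per TB per pass.
HOURS_PER_TB_PER_PASS = 2

def wipe_hours(size_tb, passes):
    return size_tb * HOURS_PER_TB_PER_PASS * passes

for size in (1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20):
    print(f"{size:2d} TB  {wipe_hours(size, 1):3d} h  {wipe_hours(size, 3):3d} h"
          f"  (~{wipe_hours(size, 3) / 24:.1f} days for three passes)")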

Hard disk wipe animation


  1. As Douglas pointed out correctly in the comment below, these are IT Kilobytes and Megabytes, so 2^10 Bytes and 2^20 Bytes. So Kibibytes and Mebibytes for those firmly in SI territory. 

TEDImagine If: A session of talks in partnership with the U.S. Air Force

Curator Bryn Freedman invites the audience to imagine a world we all want to live in, as she kicks off the TED Salon: Imagine If, presented in partnership with the U.S. Air Force. (Photo: Ryan Lash / TED)

The event: TED Salon: Imagine If, curated by Bryn Freedman and Amanda Miller, TED Institute

The partner: U.S. Air Force

When and where: Thursday, February 21, 2019, at the TED World Theater in New York City

Music: Rapper Alia Sharrief, performing her songs “My Girls Rock” and “Girl Like Me”

The big idea: Imagination is a superpower — it allows us to push beyond perceived limits, to think beyond the ordinary and to discover a new world of possibilities.

New idea (to us anyway): We may be able to vaccinate against PTSD and other mental illnesses.

Good to be reminded: Leaders shouldn’t simply follow the pack. They need to embrace sustainability, equality, accountability — not just the whims of the market.


Brigadier General (Select) Brenda P. Cartier shares how we can balance our personalities and create more just societies. (Photo: Ryan Lash / TED)

Brenda Cartier, Director of Operations at Headquarters Air Force Special Operations Command (AFSOC) and the first female Air Commando selected for the rank of general

  • Big idea: “Precision-guided masculinity” allows us to maintain a killer instinct without discarding the empathetic, “feminine” traits that mitigate the “collateral damage” of toxic masculinity.
  • How? Viewing gender as a spectrum of femininities and masculinities allows us to select traits as they fit each situation, without tying our identities to them. As we learn to balance our personalities, we become well-rounded human beings and create more just societies.
  • Quote of the talk: “This new narrative breaks us out of a one-size-fits-all approach to gender, where we link male bodies and masculinity and female bodies and femininity. ‘Precision-guided masculinity’ begs us to ask the question: ‘Who is it that is employing those masculine traits to protect and defend?'”

Could we put a stop to mental illnesses like depression and PTSD before they develop? Rebecca Brachman explores the potential of a new class of drugs called “resilience enhancers.” (Photo: Dian Lofton / TED)

Rebecca Brachman, neuroscientist, TED Fellow and pioneer in the emerging field of preventative psychopharmacology

  • Big idea: Brachman and her team have discovered a new class of drugs called “resilience enhancers” that could change the way we treat mental illness like depression and PTSD. These drugs wouldn’t just treat symptoms of the diseases, she says — they could prevent them from developing in the first place.
  • How? Brachman’s research applies the fundamental principle of vaccination to mental illness, building up a person’s ability to recover and grow after stress. For example, imagine a Red Cross volunteer going into an earthquake zone. In addition to the typhoid vaccine, she could take a resilience-enhancing drug before she leaves to protect her against PTSD. The same applies to soldiers, firefighters, ER doctors, cancer patients, refugees — anyone exposed to trauma or major life stress. The drugs have worked in preliminary tests with mice. Next up, humans.
  • Quote of the talk: “This is a paradigm shift in psychiatry. It’s a whole new field: preventative psychopharmacology.”

Michele Wucker, finance and policy strategist, founder and CEO of Gray Rhino & Company

  • Big idea: Catastrophic events sometimes catch us by surprise, but too often we invite crises to barrel right into our lives despite countless, blaring warning signs. What keeps us from facing the reality of a situation head-on?
  • How? Semantics, semantics, semantics — and a healthy dose of honesty. Wucker urges us to replace the myth of the “black swan” — that rare, unforeseeable, unavoidable event — with the reality of the “gray rhino,” the common obvious catastrophes, like the bursting of a financial bubble or the end of a tempestuous relationship, that are predictable and preventable. She breaks down the factors that determine whether we run from problems or tackle them, and lays out some warning signs that you may be ignoring one of those charging rhinos right now.
  • Quote of the talk: “Think about the obvious challenges in your own life and how you deal with them. Do you stick your head in the ground like an ostrich and ignore the problems entirely? Do you freak out like Chicken Little over all the tiny things, but miss the big giant wolf coming at you? Or do you manage things when they’re small to keep them from going out of control?”

Curator Bryn Freedman interviews executive (and former candidate for president of Iceland) Halla Tómasdóttir about how we can transform corporate leaders and businesses for a better world. (Photo: Ryan Lash / TED)

Halla Tómasdóttir, CEO of Richard Branson’s B Team and former Icelandic presidential candidate, interviewed by curator Bryn Freedman

  • Big idea: Corporate leaders — and the businesses they run — are in a crisis of conformity that favors not rocking the boat and ignores big issues like climate change and inequality. We need new leadership to get us out of this crisis.
  • How? It’s not enough for corporate leaders to simply follow the pack and narrowly define the missions of their organizations. If CEOs want to avoid the pitchforks of the masses, they must also ensure that their businesses are global citizens that embrace sustainability, equality, accountability — not just the markets.
  • Quote of the talk: “At the end of the day, we need to ask ourselves who are we holding ourselves accountable for — and if that isn’t the next generation, I don’t know who.”

Sarah T. Stewart, planetary scientist at the University of California, Davis, and 2018 MacArthur “Genius” fellow

  • Big idea: How did the Moon form? Despite its proximity, we don’t actually know! Adding to the mystery: the Earth and Moon are composed of the same stuff, a rarity we’ve found nowhere else in the universe. In trying to solve the mystery, Sarah T. Stewart discovered an entirely new not-quite-planet.
  • How? Stewart and her team smash planets together in computer simulations to learn more about how they were created. While trying to uncover the Moon’s origin, they discovered that the early Earth may have been involved in a massive collision with a Mars-sized planet, which then created a “synestia:” a super-heated doughnut of molten material previously unknown to science, out of which the Moon was born.
  • Quote of the talk: “I discovered a new type of astronomical object. It’s not a planet; it’s made from planets.”

Why do teens seem to make so many bad decisions? Kashfia Rahman searches for an answer in psychological effects of risk-taking. (Photo: Ryan Lash / TED)

Kashfia Rahman, Intel International Science and Engineering Fair winner and Harvard freshman

  • Big idea: Teenagers aren’t necessarily chasing thrills when they make bad decisions. Rather, repeated exposure to risk actually numbs how they make choices.
  • How? After wondering why her peers were constantly making silly and irresponsible decisions, Kashfia Rahman decided to conduct an experiment testing how her fellow high school students responded to risk. She found that habituation to risk — or “getting used to it” — impacts how teenagers make choices beyond their cognitive control. With this insight, she believes we can create policies that more holistically tackle high-risk behavior among teenagers.
  • Quote of the talk: “Unforeseen opportunities often come from risk-taking — not the hazardous negative risk-taking I studied, but the good ones, the positive risks,” she says. “The more risks I took, the more I felt capable.”

Rondam RamblingsThere and Back Again

You may have noticed that the Ramblings have been quiet for a while.  It has been six weeks since my last post, which I'm pretty sure is the longest hiatus I've had since I first started writing this blog over fifteen years ago.  There have been two reasons for this.  First, I've been on the road.  Nancy and I took a two-month-long trip starting in Singapore, cruising across the Indian ocean to

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Paulo Henrique de Lima Santana (phls)
  • Unit 193 (unit193)
  • Marcio de Souza Oliveira (marciosouza)
  • Ross Vandegrift (rvandegrift)

The following contributors were added as Debian Maintainers in the last two months:

  • Romain Perier
  • Felix Yan

Congratulations!

Worse Than FailureCodeSOD: The Value of Your Code

“We know that governmental data-systems aren’t generally considered ‘cutting edge’ or ‘well architected’, but we’ve brought together some top flight talent, and we’re building a RESTful architecture that guarantees your code gets nice, easy-to-consume JSON objects.”

That’s what Oralee B. was told when she was hired to work on it. The system owned pretty much the entirety of the municipal data management market, tracking birth certificates, marriage certificates, death certificates and pet licenses for nearly every city in the country. JSON is so simple, how can you screw it up?

The same way most people seem to screw it up: reinventing key/value pairs in your key/value data format.

{
    "code": "Registrations",
    "Configurations": [
        {
            "code": "Registration1",
            "Configurations": [
                {
                    "code": "IdRegistration",
                    "value": "94"
                },
                {
                    "code": "CaptionActive",
                    "value": "EF_PRE_CR"
                },
                {
                    "code": "DateEnd",
                    "value": "2019-01-01"
                },
                {
                    "code": "DateStart",
                    "value": "2019-01-01"
                },
                {
                    "code": "Units",
                    "Configurations": []
                }
            ]
        },
        {
            "code": "Registration2",
            "Configurations": [
                {
                    "code": "IdRegistration",
                    "value": "92"
                },
                {
                    "code": "CaptionActive",
                    "value": "EF_PRE"
                },
                {
                    "code": "DateEnd",
                    "value": "2019-01-01"
                },
                {
                    "code": "DateStart",
                    "value": "2019-01-01"
                },
                {
                    "code": "Units",
                    "Configurations": [
                        {
                            "Code": "Unit1",
                            "Value": null,
                            "Configurations": [
                                {
                                    "code": "IdUnit",
                                    "Value": "1",
                                    "Configurations": []
                                },
                                {
                                    "code": "CaptionUnit",
                                    "value": "Hour",
                                    "Configurations": []
                                }
                            ]
                        },
                        {
                            "Code": "Unit2",
                            "Configurations": [
                                {
                                    "code": "IdUnit",
                                    "Value": "2",
                                    "Configurations": []
                                },
                                {
                                    "code": "CaptionUnit",
                                    "value": "Entrance",
                                    "Configurations": []
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

It’s not just the reinvention of key/value pairs as code/value pairs. They’re also reinvented as Code/value and Code/Value pairs. The inconsistent capitalization tells its own story about how this JSON is generated and consumed by other clients, and at least on the generation side, it clearly rhymes with “bling congratulation”.
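
To give a sense of the defensive code this format forces on every consumer, here is a minimal sketch in Python (the file name is hypothetical) that flattens the structure back into ordinary key/value pairs while tolerating the code/Code and value/Value spellings:

import json

def norm(d):
    """Lower-case keys so 'Code'/'code' and 'Value'/'value' collapse into one spelling."""
    return {k.lower(): v for k, v in d.items()}

def flatten(node):
    """Turn one 'Configurations' level of code/value entries into a plain dict."""
    out = {}
    for entry in norm(node).get("configurations", []):
        e = norm(entry)
        if e.get("configurations"):              # non-empty nested group, e.g. "Units"
            out[e["code"]] = flatten(entry)
        else:
            out[e["code"]] = e.get("value")
    return out

with open("registrations.json") as fh:           # hypothetical file name
    registrations = flatten(json.load(fh))
# registrations["Registration1"]["IdRegistration"] == "94"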

When Oralee asked, “Why on Earth do you do it this way?”

“Because,” her boss explained, “this is the way we do it.”

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianJohn Goerzen: Goodbye to a 15-year-old Debian server

It was October of 2003 that the server I’ve called “glockenspiel” was born. It was the early days of Linux-based VM hosting, using a VPS provider called memset, running under, of all things, User Mode Linux. Over the years, it has been migrated around, sometimes running on the metal and sometimes in a VM. The operating system has been upgraded in-place using standard Debian upgrades over the years, and is now happily current on stretch (albeit with a 32-bit userland). But it has never been reinstalled. When I’d migrate hosting providers, I’d use tar or rsync to stream glockenspiel across the Internet to its new home.

A lot of people reinstall an OS when a new version comes out. I’ve been doing Debian upgrades with apt for ages, and this one is a case in point. It lingers.

Root’s .profile was last modified in November 2004, and its .bashrc was last modified in December 2004. My own home directory still has a .pinerc, .gopherrc, and .arch-params file. I last edited my .vimrc in 2003 and my .emacs dates back to 2002 (having been copied over from a pre-glockenspiel FreeBSD server).

drwxr-xr-x  3 jgoerzen jgoerzen      4096 Dec  3  2003 irclogs
-rw-r--r--  1 jgoerzen jgoerzen       373 Dec  3  2003 .vimrc
-rw-r--r--  1 jgoerzen jgoerzen       651 Nov 27  2003 .reportbugrc
drwx------  3 jgoerzen jgoerzen      4096 Sep  2  2003 .arch-params
-rw-r--r--  1 jgoerzen jgoerzen      1115 Aug 23  2003 .gopherrc
drwxr-xr-x  3 jgoerzen jgoerzen      4096 Jul 18  2003 .subversion
-rw-r--r--  1 jgoerzen jgoerzen     15317 Jun 21  2003 .pinerc

Poking around /etc on glockenspiel is like a trip back in time. Various apache sites still have configuration files around, but have long since been disabled. Over the years, glockenspiel has hosted source code repositories using Subversion, arch, tla, darcs, mercurial and git. It’s hosted websites using Drupal, WordPress, Serendipity, and so forth. It’s hosted gopher sites, websites or mailing lists for various Free Software projects (such as Freeciv), and any number of local charitable organizations. Remnants of an FTP configuration still exist, when people used web design software to build websites for those organizations on their PCs and then upload them to glockenspiel.

-rw-r--r--   1 root  root                      268 Dec 25  2005 libnet.cfg
-rw-r-----   1 root  root                     1305 Nov 11  2004 mrtg.cfg
-rw-r--r--   1 root  root                      552 Jul 31  2004 pam.conf

All this has been replaced by a set of Docker containers running my docker-debian-base software. They’re all in git, I can rebuild one of the containers in a few seconds or a few minutes by typing “make”, and there is no cruft from 2002. There are a lot of benefits to this.

And yet, there is a part of me that feels it’s all so… cold. Servers having “personalities” was always a distinctly dubious thing, but these days as we work through more and more layers of virtualization and indirection and become more distant from the hardware, we lose an appreciation for what we have and the many shoulders of giants upon which we stand.

And, so with that, the final farewell to this server that’s been running since 2003:

glockenspiel:/etc# shutdown -P now
Shared connection to glockenspiel.complete.org closed.

Planet DebianDaniel Lange: Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot, especially with VMs that have no input devices or I/O jitter to source the pseudo random number generator from.
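
If you want to probe this from userspace, a minimal sketch (assuming Python 3.6+ on Linux, where os.getrandom() wraps the syscall) asks for one byte with GRND_NONBLOCK; while the pool is still seeding, the call fails with EAGAIN instead of hanging:

import errno
import os

def crng_ready():
    """True once the kernel CRNG is initialized; False while getrandom() would block."""
    try:
        os.getrandom(1, os.GRND_NONBLOCK)   # wraps the getrandom() syscall
        return True
    except OSError as err:
        if err.errno == errno.EAGAIN:       # pool still seeding
            return False
        raise

if __name__ == "__main__":
    print("crng ready" if crng_ready() else "crng still seeding -- getrandom() would block")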

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issues #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.
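
To illustrate the difference between merely writing a seed into the pool and actually crediting it, here is a sketch (not what systemd ships; it assumes the usual Linux ioctl encoding on x86-64/arm64 and needs root):

import fcntl
import struct

RNDADDENTROPY = 0x40085203   # _IOW('R', 0x03, int[2]); usual encoding on x86-64/arm64

def mix_without_credit(seed):
    """Writing to /dev/urandom stirs the seed in but credits no entropy,
    so getrandom() keeps blocking."""
    with open("/dev/urandom", "wb") as pool:
        pool.write(seed)

def mix_and_credit(seed, bits):
    """RNDADDENTROPY both mixes the seed in and credits `bits` of entropy,
    which is what lets the crng report itself initialized."""
    request = struct.pack("ii", bits, len(seed)) + seed   # struct rand_pool_info
    with open("/dev/urandom", "wb") as pool:
        fcntl.ioctl(pool, RNDADDENTROPY, request)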

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays of up to tens of minutes on systems with very few external sources of randomness.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.

VirtIO

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019 :-).

early-rng-init-tools

Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .

First, he deserves kudos for naming a tool for what it does; this makes it much more easily discoverable than the trend of naming things after girlfriends, pets or anime characters. The implementation hooks into early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal, and there has been quite extensive scrutiny, but none of it discovered serious issues. Early-rng-init-tools look like a good option for platforms that are not RDRAND-capable (~CONFIG_RANDOM_TRUST_CPU).

Updates

14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterated what had been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases, albeit ones running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may end up with pseudo-random numbers generated from insufficient entropy and won't even know it.
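
One way to implement that delay is a tiny wrapper; a minimal sketch, assuming Python 3.6+ on the host and that the guest's start command (for example the kvm invocation above) is passed as arguments:

#!/usr/bin/env python3
"""Start a guest only once the host CRNG reports itself initialized (a sketch)."""
import os
import subprocess
import sys

os.getrandom(1)                           # blocks until "random: crng init done"
subprocess.run(sys.argv[1:], check=True)  # then launch the guest command as given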

12.03.2019

Stefan Fritsch revived the thread on debian-devel again and got a few more interesting titbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.


  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

Planet DebianLucas Nussbaum: On Debian frustrations

Michael Stapelberg writes about his frustrations with Debian, resulting in him reducing his involvement in the project. That’s sad: over the years, Michael has made a lot of great contributions to Debian, addressing hard problems in interesting, disruptive ways.

He makes a lot of good points about Debian, with which I’m generally in agreement. An interesting exercise would be to rank those issues: what are, today, the biggest issues to solve in Debian? I’m nowadays not following Debian closely enough to be able to do that exercise, but I would love to read others’ thoughts (bonus points if it’s in a DPL platform, given that it seems that we have a pretty quiet DPL election this year!)

Most of Michael’s points are about the need for modernization of Debian’s infrastructure and workflows, and I agree that it’s sad that we have made little progress in that area over the last decade. And I think that it’s important to realize that providing alternatives to developers has a cost, and that when a large proportion of developers or packages have switched to doing something (using git, using dh, not using 1.0-based patch systems such as dpatch, …), there are huge advantages in standardizing and pushing this on everybody.

There are a few reasons why this is harder than it sounds, though.

First, there’s Debian’s culture of stability and technical excellence. “Above all, do not harm” could also apply to the mindset of many Debian Developers. On one hand, that’s great, because this focus on not breaking things probably contributes a lot to our ability to produce something that works as well as Debian. But on the other hand, it means that we often seek solutions that limit short-term damage or disruption, but are far from optimal on the long term.
An example is our packaging software stack. I wrote most of the introduction to Debian packaging found in the packaging-tutorial package (which is translated into six languages now), but am still amazed by all the unjustified complexity. We tend to fix problems by adding additional layers of software on top of existing layers, rather than by fixing/refactoring the existing layers. For example, the standard way to package software today is using dh. However, dh stands on dh_* commands (even if it does not call them directly, contrary to what CDBS did), and all the documentation on dh is still structured around those commands: if you want to install an additional file in a package, probably the simplest way to do that is to add it to debian/packagename.install, but this is documented in the manpage for dh_install, which you are not actually going to call because dh abstracts that away for you! I realize that this could be better explained in packaging-tutorial… (patches welcome)

There’s also the fact that Debian is very large, very diverse, and hard to test. It’s very easy to break things silently in Debian, because many  of our packages are niche packages, or don’t have proper test suites (because not everything can be easily tested automatically). I don’t see how the workflows for large-scale changes that Michael describes could work in Debian without first getting much better at detecting regressions.

Still, there’s a lot of innovation going on inside packaging teams, with the development of language-specific packaging helpers (listed on the AutomaticPackagingTools wiki page). However, this silo-ed organization tends to fragment the expertise of the project about what works and what doesn’t: because packaging teams don’t talk much together, they often solve the same problems in slightly different ways. We probably need more ways to discuss interesting stuff going on in teams, and consolidating what can be shared between teams. The fact that many people have stopped following debian-devel@ nowadays is probably not helping…

The addition of salsa.debian.org is probably the best thing that happened to Debian recently. How much this ends up being used for improving our workflows remains to be seen:

  • We could use Gitlab merge requests to track patches, rather than attachments in the BTS. Some tooling to provide an overview of open MRs in various dashboards is probably needed (and unfortunately GitLab’s API is very slow when dealing with large numbers of projects); a rough sketch of such a tool follows after this list.
  • We could probably have a way to move the package upload to a gitlab-ci job (for example, by committing the signed changes file in a specific branch, similar to what pristine-tar does, but there might be a better way)
  • I would love to see a team experiment with a monorepo approach (instead of the “one git repo per package + mr to track them all” approach). For teams with lots of small packages there are probably a lot of things to win with such an organization.
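
A rough sketch of what such an overview tooling could look like, assuming the group-level merge-requests endpoint of GitLab's v4 API on salsa (the group path and token below are placeholders):

import requests

API = "https://salsa.debian.org/api/v4"
GROUP = "some-packaging-team"   # placeholder group path
TOKEN = "REDACTED"              # placeholder read_api token

def open_merge_requests(group):
    """Yield open MRs across all projects of a group, following pagination."""
    page = 1
    while True:
        resp = requests.get(
            f"{API}/groups/{group}/merge_requests",
            params={"state": "opened", "per_page": 100, "page": page},
            headers={"PRIVATE-TOKEN": TOKEN},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        yield from batch
        page += 1

for mr in open_merge_requests(GROUP):
    print(mr["web_url"], mr["title"])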

 

Planet DebianShirish Agarwal: Processing Insanity

This blog post starts from where it ended a few days ago. I am fortunate and to an extent even blessed that I have usually had honest advice, but sometimes even advice falls short when you see some harsh realities. There were three people who replied; you can read mark’s and frode’s replies, which they shared on the blog post.

I even shared it with a newly-found acquaintance in the hope that there might be some ways, some techniques or something which would make more sense, as this is something I have never heard of within the social circles I have been part of, so I was feeling more than a bit ill-prepared. When I shared it with Paramitra (a gentleman whom I engaged with as part of another probable socio-techno intervention meetup and am hoping to meet soon), he also shared some sound advice which helped me mentally prepare –

So, if you’re serious about what you can do with/for this friend of yours and his family, I do have several suggestions. 

1. To the best of my knowledge, and I have some exposure, no one goes ‘insane’ just like that. There has to be a diagnosis. Please find out from his family if he’s been taken to a psychiatrist. If not, that’s the first thing you can convince his family to do. Be with them, help them with that task.

2. If he’s been diagnosed, find out what that is. Most psychiatric disorders can be brought to a manageable level with proper medications and care. But any suggestions I can offer on that depends on the diagnosis.

3. However, definitely inform his family that tying him up, keeping him locked etc will only worsen the situation. He needs medical and family care – not incarceration, unless the doctor prescribes institutionalized treatment.

Hope this helps. Please be a friend to him and his family at this hour of crisis. As a nation, our understanding of mental health per se is poor, to say the least.

Paramita Banerjee

So, armed with what I thought was sufficient knowledge, I went to my friend’s home. The person whom I met could not be the same person whom I knew as a friend in college. During college, he had a reputation as a toughie and he looked and acted the part. So, in many ways it was the unlikeliest of friendships. I shared with him tips on Accountancy, Economics etc. and he had my back. He was also very quick with repartee, so we used to have quite a fun time exchanging quips. For the remainder of this account I will call my friend ‘Amar’ and his sister ‘Kritika’ as those are names I like.

The person whom I met was a mere shadow of the person I knew. Amar had no memory of who I was. He had trouble comprehending written words and mostly mumbled. Amar did say something interesting and fresh once in a while, but it was mostly like talking to a statue. He stank and was mostly asleep even when he was awake. Amar couldn’t look straight at me, and he had this notion that if I touched him or he touched me, he would infect me. He had long nails as well. Kritika told me that he does have baths once every few days, but takes 3-4 hours to take a bath and sleeps in there as well. The same happens when he goes for shitting as well. The saving grace is they have their own toilet and bathroom within the house. I have no comprehension of how they might be adjusting, all in that small space.

I learned from Kritika what I hadn’t known about him and the family over the last ten odd years. His mum had died in the same room where he was, just a few weeks back, and he had no comprehension that she had died. He is the middle of three children. The elder daughter is now a widow and has three daughters who are living with them; then there are Amar, his father and the youngest sister, who is trying desperately to keep it all together, but I don’t know what she will be able to do. 7 mouths to feed and 6 people who all have their own needs and wants apart from basic existence. They are from a low-income group. The elder sister does have a lot of body pains, although I was not able to ask what from. I do know nursing is a demanding profession and, from my hospital stay, that at times they have to work around the clock 24×7 doing things no normal person can do.

Two of the nieces are nearing their teenage years, and I was told of sexually suggestive remarks made to them by one of the neighbors. The father is a drunk, the brother-in-law who died was a drunk, and the brother, Amar, had consumed lots of cannabis seeds. Apparently, during the final year exams, when we were given different centers, he went to Bombay/Mumbai to try his hand at movies, then went to Delhi and was into selling some sort of chemicals from one company to another.

Maybe it was ‘bad company’, as her mother had told me on the phone, or maybe it was the work he was doing, which he was not happy with, that led him to cannabis addiction. I have no way of knowing anything of his past. I did ask Kritika if she could dig out any visiting cards or something; I have enough friends in Delhi, so it’s possible I can find out how things came to be this bad.

There was a small incident which also left me a bit shaken. The place where they live is called Pavana Nagar. It is at the back of the Pimpri-Chinchwad industrial township, so most of the water that the town/village people consume carries a lot of chemical effluents; the local councillor (called nagar sevak) knows this but either can’t or won’t do anything about it. There are a lot of diseases due to the pollutants in the water. The grains they buy, Kritika suspects or even knows, are also grown with the same water, but she is helpless to do anything about it.

The incident is a small one, but I wanted to share a fuller picture before sharing it. I had left my bag, a sort of sling bag, where I was sitting in the room. Kritika took me to another building to show me the surrounding areas (as I was new here and had evinced interest in knowing the area), and when we came back my bag was not to be found. After searching for a while I got the bag back, but there was no money in it (I usually keep INR 100-200 in case money gets stolen from me), and the goodies (sweet and sour both) that I keep in case I feel hungry and there’s nothing around were gone too. Both were missing. The father pretended he had put the bag away by mistake. I didn’t say anything because it would have been a loss of face for the younger sister, although it’s possible that she knows or had some suspicions. With the younger kids around it would have been awkward to say anything, and I didn’t really wanna make a scene. It wasn’t much, just something I didn’t expect.

Also, later I came to know that whenever the father drinks, he creates a lot of drama and says whatever comes to his mind. It is usually dirty, nasty and hurtful, from what I could gather.

Due to my extended stay in hospital because of epilepsy, I had come to know of a couple of medical schemes meant for weaker sections of society, and I shared what I knew of them. While I hope to talk with Kritika more, I don’t see a way out of the current mess they are in. The sense I got from her is that she is fighting too many battles, and I don’t know how she can win them all. I also told her about NMRI. I have no clue where to go from here. I also don’t wanna generalize, but there might be many Amars and Kritikas in our midst or around us whose stories we don’t know. If they could just have some decent water and no mosquitoes, it would probably enhance their lives quite a bit and give them a bit more agency over themselves. There is one thing Kritika shared which was also interesting: she had experience working back-office for an IT company, but now, looking after the family, she just couldn’t do the same thing.

Note and disclaimer – The names ‘Amar’ and ‘Kritika’ are just some names I chose. The names have been given to –

a. Give privacy to the people involved.
b. To give substance to the people and the experience so they are not nameless.


Planet DebianSteve McIntyre: Debian BSP in Cambridge, 08 - 10 March 2019

Lots of snacks, lots of discussion, lots of bugs fixed! Yet another BSP at my place.

BSP

,

TEDThese young women might just save the planet

One of anthropologist Margaret Mead’s most famous quotes instructs us: “Never doubt that a small group of thoughtful, committed citizens can change the world: indeed, it’s the only thing that ever has.” We might amend Mead’s observation to honor a group of thoughtful, committed teenagers across the world who are standing up for their lives (and their future lives) in extraordinarily powerful and moving ways.

Valentine’s Day marked the one-year anniversary of the Marjory Stoneman Douglas High School massacre in Parkland, Florida. In the aftermath of that tragic shooting that claimed 17 lives, surviving students rejected their representatives’ “thoughts and prayers” and organized a nationwide school walkout on March 14, 2018. Ten days later, the March for Our Lives drew over a million people from around the country to Washington to rally for safe schools and gun control. And the Parkland students have continued throughout the year to travel this country and the world, advocating for stricter gun regulations.

In Sweden, a teenage girl named Greta Thunberg observed the actions of the Parkland students and took an action of her own: deciding to skip school every Friday in order to lobby the Swedish government into action on climate change.

Since Greta began her solo protest, it’s estimated that more than 70,000 students around Europe and the world have joined her protest each week, in over 270 towns and cities. Pictures of Greta and other young activists have made their way around on social media (Greta has nearly 300,000 followers on Instagram), inspiring other teens to join her in protest.

Strikes have been organized all over Europe, the United States, India and Australia over the past five months. The movement is notable in that it is being led by teenage girls. Katrien Van der Heyden of Brussels, whose 17-year-old daughter, Anuna de Wever, organizes marches there, observed to BuzzFeed: “’It’s the very first time in Belgium that a [mass movement was] started by two women and not about feminist rights.’ When the protests drew tens of thousands, Van der Heyden said, she was stunned to see as many boys as girls in the crowds, “and yet no one ever challenged the leadership of the female organizers.”

Seventeen-year-old Jamie Margolin, the founder and executive director of Zero Hour, a group organizing the US protests for the International Day of Action planned for March 15, told BuzzFeed that climate activism has given young women a chance to be heard.

“’There aren’t very many spaces that I can be in charge of, and what I’m going to say is going to be heard,’ Margolin said. Her group is led largely by young women of color, which she said should come as no surprise, because people who are already vulnerable are going to be disproportionately hit by climate change.”

Recently, TED posted Greta’s TEDxStockholm talk which she gave in November. Her talk has been up three weeks and has already been viewed over 1.2 million times. In it, she explained why she decided to skip school and protest, saying:

“I school striked for the climate. Some people say that I should be in school instead. Some people say that I should study to become a climate scientist so that I can ‘solve the climate crisis.’ But the climate crisis has already been solved. We already have all the facts and solutions. All we have to do is to wake up and change.”

At the end of her talk, Greta says she’s not going to end on a positive, hopeful note, like most TED talks.

“Yes, we do need hope, of course we do. But the one thing we need more than hope is action. Once we start to act, hope is everywhere. So instead of looking for hope, look for action. Then, and only then, hope will come.”
                                                                                                                — Greta Thunberg

As I observe Greta and Jamie and all the other girls taking up leadership, and the young boys who are marching and protesting with them, no longer waiting for some adult with a plan or for corporations or governments to take action but creating their own actions, I feel more hope than I have felt in a long time that we — all of us at every age — will also take up actions to address the climate crisis before it’s too late.

My daughter-in-law, Laura Turner Seydel, signs off every email with the Native American proverb: We do not inherit the earth from our ancestors, we borrow it from our children. The world’s children are reminding us that we have a big debt to repay and an earth to repair and restore. Time’s up on our loan of the earth.

Inspired,

— Pat

CryptogramRussia Is Testing Online Voting

This is a bad idea:

A second innovation will allow "electronic absentee voting" within voters' home precincts. In other words, Russia is set to introduce its first online voting system. The system will be tested in a Moscow neighborhood that will elect a single member to the capital's city council in September. The details of how the experiment will work are not yet known; the State Duma's proposal on Internet voting does not include logistical specifics. The Central Election Commission's reference materials on the matter simply reference "absentee voting, blockchain technology." When Dmitry Vyatkin, one of the bill's co-sponsors, attempted to describe how exactly blockchains would be involved in the system, his explanation was entirely disconnected from the actual functions of that technology. A discussion of this new type of voting is planned for an upcoming public forum in Moscow.

Surely the Russians know that online voting is insecure. Could they not care, or do they think the surveillance is worth the risk?

Planet DebianJohn Goerzen: A (Partial) Defense of Debian

I was sad to read on his blog that Michael Stapelberg is winding down his Debian involvement. In his post, he outlined some critiques of Debian. In this post, I want to acknowledge that he is on point with some of them, but also push back on others. Some of this is also a response to some of the comments on Hacker News.

I’d first like to discuss some of the assumptions I believe his post rests on: namely that “we’ve always done it this way” isn’t a good reason to keep doing something. I completely agree. However, I would also say that “this thing is newer, so it’s better and we should use it” is also poor reasoning. Newer is not always better. Sometimes it is, sometimes it’s not, but deeper thought is generally required.

Also, when thinking about why things are a certain way or why people prefer certain approaches, we must often ask “why does that make sense to them?” So let’s dive in.

Debian’s Perspective: Stability

Stability, of course, can mean software that tends not to crash. That’s important, but there’s another aspect of it that is also important: software continuing to act the same over time. For instance, if you wrote a C program in 1985, will that program still compile and run today? Granted, that’s a bit of an extreme example, but the point is: to what extent can you count on software you need continuing to operate without forced change?

People that have been sysadmins for a long period of time will instantly recognize the value of this kind of stability. Change is expensive and difficult, and often causes outages and incidents as bugs are discovered when software is adapted to a new environment. Being able to keep up-to-date with security patches while also expecting little or no breaking changes is a huge win. Maintaining backwards compatibility for old software is also important.

Even from a developer’s perspective, lack of this kind of stability is why I have handed over maintainership of most of my Haskell software to others. Some of my Haskell projects were basically “done”, and every so often I’d get bug reports that it no longer compiles due to some change in the base library. Occasionally I’d have patches with those bug reports, but they would inevitably break compatibility with older versions (even though the language has plenty of good support for something akin to a better version of #ifdefs to easily deal with this.) The culture of stability was not there.

This is not to say that this kind of stability is always good or always bad. In the Haskell case, there is value to be had in fixing designs that are realized to be poor and removing cruft. Some would say that strcpy() should be removed from libc for this reason. People that want the latest versions of gimp or whatever are probably not going to be running Debian stable. People that want to install a machine and not really be burdened by it for a couple of years are.

Debian has, for pretty much its entire life, had a large proportion of veteran sysadmins and programmers as part of the organization. Many of us have learned the value of this kind of stability from the school of hard knocks – over and over again. We recognize the value of something that just works, that is so stable that things like unattended-upgrades are safe and reliable. With many other distros, something like this isn’t even possible; when your answer to a security bug is to “just upgrade to the latest version”, just trusting a cron job to do it isn’t going to work because of the higher risk.

Recognizing Personal Preference

Writing about Debian’s bug-tracking tool, Michael says “It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).” This is representative of a personal preference. A web form might be more convenient for Michael — I have no reason to doubt this — but is it more convenient for everyone? I’d say no.

In his linked post, Michael also writes: “Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.”

I think that’s fair for someone that wants a web-based workflow. Allow me to present the opposite: for me, I tend to push off contributions that only come through Github, and the reason is that, for me, they’re less convenient. It’s also harder for me to contribute to Github projects than Debian ones. Let’s look at this – say I want to send in a small patch for something. If it’s Github, it’s going to look like this:

  1. Go to the website for the thing, click fork
  2. Now clone that fork or add it to my .git/config, hack, and commit
  3. Push the commit, go back to the website, and submit a PR
  4. Github’s email integration is so poor that I basically have to go back to the website for most parts of the conversation. I can do little from the comfort of mu4e.
  5. Remember to clean up my fork after the patch is accepted or rejected.

Compare that to how I’d contribute with Debian:

  1. Hack (and commit if I feel like it)
  2. Type “reportbug foo”, attach my patch
  3. Followup conversation happens directly in email where it’s convenient to reply

How about as the developer? Github constantly forces me to their website. I can’t very well work on bug reports, etc. without a strong Internet connection. And it’s designed to push people into using their tools and their interface, which is inferior in a lot of ways to a local interface – but then the process to pull down someone else’s set of patches involves a lot of typing and clicking, much more than would be involved with a simple git format-patch. In short, I don’t have my shortcut keys, my environment, etc. for reviewing things – the roadblocks are there to make me use theirs.

If I get a contribution from someone in debbugs, it’s so much easier. It’s usually just git apply or patch -p1 and boom, I can see exactly what’s changed and review it. A review comment is just a reply to an email. I don’t have to ever fire up a web browser. So much more convenient.

I don’t write this to say Michael is wrong about what’s more convenient for him. I write it to say he’s wrong about what’s more convenient for me (or others). It may well be the case that debbugs is so inconvenient that it pushes him to leave while github is so inconvenient for others that it pushes them to avoid it.

I will note before leaving this conversation that there are some command-line tools available for Github and a web interface to debbugs, but it is still clear that debbugs is a lot easier to work with from within my own mail reader and tooling, and Github is a lot easier to work with from within a web browser.

The case for reportbug

I remember the days before we had reportbug. Over and over and over again, I would get bug reports from users that wouldn’t have the basic information needed to investigate. reportbug gathers information from the system: package versions, configurations, versions of dependencies, etc. A simple web form can’t do this because it doesn’t have a local agent. From a developer’s perspective, trying to educate users on how to do this over and over is an unending, frustrating, and counter-productive task. Even if it’s clearly documented, the battle will be fought over and over. From a user’s perspective, having your bug report ignored or being told you’re doing it wrong is frustrating too.

So I think reportbug is much nicer than having some github-esque web-based submission form. Could it be better? Sure. I think a mode to submit the reportbug report via HTTPS instead of email would make sense, since a lot of machines no longer have local email configured.
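
To make the point concrete, here is a small illustrative sketch (Python 3.7+, not reportbug's actual code) of the kind of local fact-gathering a web form simply cannot do, shelling out to dpkg-query; the package and dependency names in the usage comment are just examples:

import subprocess

def dpkg_version(package):
    """Installed version of a package according to dpkg-query, or '' if absent."""
    try:
        out = subprocess.run(
            ["dpkg-query", "-W", "-f=${Version}", package],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except subprocess.CalledProcessError:
        return ""

def gather(package, dependencies):
    """Collect the version facts a useful bug report needs."""
    return {
        "package": package,
        "version": dpkg_version(package),
        "dependencies": {dep: dpkg_version(dep) for dep in dependencies},
    }

# e.g. gather("openssh-server", ["libssl1.1", "libc6"])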

Where Debian Should Improve

I agree that there are areas where Debian should improve.

Michael rightly identifies the “strong maintainer” concept as a source of trouble. I agree. Though we’ve been making slow progress over time with things like low-threshold NMU and maintainer teams, the core assumption that a maintainer has a lot of power over particular packages is one that needs to be thrown out.

Michael, and commentators on HN, both identify things that boil down to documentation problems. I have heard so many times that it’s terribly hard to package something up for Debian. That’s really not the case for most things; dh_make and similar tools will do the right thing for many packages, and all you have to do is add some package descriptions and such. I wrote a “concise guide” to packaging for my workplace that ran to only about 2 pages. But it is true that the documentation on debian.org doesn’t clearly offer this simple path, so people are put off and not aware of it. Then there were the comments about how hard it is to become a Debian developer, and how easy it is to submit a patch to NixOS or some such. The fact is, these are different things; one does not need to be a Debian Developer to contribute to Debian. A DD is effectively the same as a patch approver elsewhere; these are the people that can ultimately approve software for insertion into the OS, and you DO want an element of trust there. Debian could do more to offer concise guides for drive-by contributions and the building of packages that follow standard language community patterns, both of which can be done without much knowledge of packaging tools and inner workings of the project.

Finally, I have distanced myself from conversations in Debian for some time, due to lack of time to participate in what I would call excessive bikeshedding. This is hardly unique to Debian, but I am glad to see the project putting more effort into expecting good behavior from conversations of late.

Worse Than FailureCodeSOD: The God Page

Mike inherited a data-driven application. Once upon a time, it started out pretty well architected. Yes, it was in PHP, but with good architecture, solid coding practices, and experienced developers, it was simple and easy to maintain.

Time passed. The architects behind it left, new developers were hired, management changed over, and it gradually drifted into what you imagine when you hear "PHP app" in 2019.

Mike's task was to start trying to clean up that mess, and that started all the way back in the database layer with some normalization. From there, it was the arduous task of going through the existing data access code and updating it to use the refined model objects.

While there was a mix of "already uses a model" and "just a string" SQL statements in the code, there was a clear data-access layer, and almost all of the string-based queries actually used parameters. At least, that was true until Mike took a look at the "god" page of the application.

You know how it is. Thinking through features and integrating them into your application is hard and requires coordinating with other team members. Slapping together a page which can generate any arbitrary SQL if you understand how to populate the drop-downs and check boxes correctly is "easy"- at least to start. And by the time it gets really hard, you've already created this:

$numIDs=0; $statusdb1 = ""; $statusdb2 = ""; if ($cstatus=='inactive') { $statusdb1 = "WHERE customer_status='0'"; $statusdb2 = "AND customer_status='0'"; $childIDs.=" and "; } elseif ($cstatus=='active') { $statusdb1 = "WHERE customer_status='1'"; $statusdb2 = "AND customer_status='1'"; $childIDs.=" and "; } $childIDs.="("; if($isParent) { $str="select ChildID from customer_parent_child where ParentID=$custID order by ChildID;"; $result=$database->query($str); if(count($result) > 0) { $row = $result[0]; $IDs[$numIDs]=$row['ChildID']; $childIDs.="custID='$IDs[$numIDs]'"; $numIDs++; } foreach ($result as $row){ $IDs[$numIDs]=$row['ChildID']; $childIDs.=" or custID='$IDs[$numIDs]'"; $numIDs++; } } $childIDs.=")"; if($numIDs==0) { $childIDs=" and custID=''"; } //echo "childIDs:$childIDs<br />"; if ($charge) { $chargedb = " customer_child=0"; } if(isset($_POST['cmpSearchFname'])){ $search=$_POST['fname']; if (!$_POST['fname']) { $search=$_GET['fname']; } if(isset($search)){ if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where customer_firstname like '%$search%' $statusdb2 $childIDs order by customer_firstname;"; } else{ if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by customer_firstname;"; } } elseif (isset($_POST['cmpSearchcustID'])){ $search=$_POST['custID']; if (!$_POST['custID']) { $search=$_GET['custID']; } if(isset($search)){ if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where custID like '%$search%' $statusdb2 $childIDs order by custID;"; }else{ if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by custID;"; } } elseif (isset($_POST['cmpSearchLname'])){ $search=$_POST['lname']; if (!$_POST['lname']) { $search=$_GET['lname']; } if(isset($search)){ if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where customer_lastname like '%$search%' $statusdb2 $childIDs order by customer_lastname;"; }else{ if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by customer_lastname;"; } } elseif(isset($_POST['cmpSearchCmpName'])){ $search=$_POST['cmpName']; if (!$_POST['cmpName']) { $search=$_GET['cmpName']; } if(isset($search)) { if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where customer_cmp_name like '%$search%' $statusdb2 $childIDs order by customer_cmp_name;"; } else { if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by customer_cmp_name;"; } } elseif (isset($_POST['cmpSearchPhone'])){ $search1=$_POST['phone1']; if (!$_POST['phone1']) { $search1=$_GET['phone1']; } $search2=$_POST['phone2']; if (!$_POST['phone2']) { $search2=$_GET['phone2']; } $search3=$_POST['phone3']; if (!$_POST['phone3']) { $search3=$_GET['phone3']; } $phonesearch = "FALSE"; if(isset($search1)){ $search=$search1; $phonesearch = "TRUE"; } if(isset($search2)){ $search.=$search2; $phonesearch = "TRUE"; } if(isset($search3)){ $search.=$search3; $phonesearch = "TRUE"; } if($phonesearch == "TRUE"){ if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where customer_contact_phone1 like '%$search%' $statusdb2 $childIDs order by customer_contact_phone1;"; } else { if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by 
customer_contact_phone1;"; } } elseif(isset($_POST['cmpSearchDID'])){ $search1=$_POST['DID1']; if (!$_POST['DID1']) { $search1=$_GET['DID1']; } //echo "DID1: ".$search1."<br>"; $search2=$_POST['DID2']; if (!$_POST['DID2']) { $search2=$_GET['DID2']; } //echo "DID2: ".$search2."<br>"; $search3=$_POST['DID3']; if (!$_POST['DID3']) { $search3=$_GET['DID3']; } //echo "DID3: ".$search3."<br>"; $DIDsearch = "FALSE"; if(isset($search1)){ $search=$search1; $DIDsearch = "TRUE"; } if(isset($search2)){ $search.=$search2; $DIDsearch = "TRUE"; } if(isset($search3)){ $search.=$search3; $DIDsearch = "TRUE"; } if($DIDsearch == "TRUE"){ //echo "DID: ".$search."<br>"; $str="select custID from DID where DID like '%$search%' $childIDs;"; $result=$database->query($str); $row = $result[0]; $numRows = count($result); if($numRows > 0){ $custSearch=$row['custID']; if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where custID like '%$custSearch%' $statusdb2 $childIDs;"; } else { if(($childIDs !== "") && ($numIDs >0)) { $childIDs=" and $childIDs"; } $str="select * from customer where customer_cmp_name like \"\" $statusdb2 $childIDs order by customer_cmp_name"; } } else { if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by customer_cmp_name"; } } else { if($statusdb1=="") { $statusdb1="where "; } $str="select * from customer $statusdb1 $childIDs order by customer_cmp_name"; } $result=$database->query($str);

Mike writes:

I expect that there are SQL injection vulnerabilities in here somewhere. I just can't read through this well enough to find them.

I read through it. There are definitely SQL Injection errors. I still couldn't tell you exactly what all this does, but it definitely has SQL injection errors. Also, if you don't POST your query parameters, it'll pull them from the GET, so, y'know, do either. It doesn't matter. Nothing matters.
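
For contrast, here is a minimal sketch of the parameterized pattern that would close those injection holes, written in Python's DB-API placeholder style rather than the original PHP, with a hypothetical db connection and only the first-name branch of the search:

def search_customers(db, fname=None, status=None, child_ids=()):
    """Customer search built from placeholders instead of string-glued user input."""
    sql = "SELECT * FROM customer WHERE 1=1"
    params = []
    if status is not None:                        # '0' inactive, '1' active
        sql += " AND customer_status = %s"
        params.append(status)
    if fname:
        sql += " AND customer_firstname LIKE %s"
        params.append(f"%{fname}%")
    if child_ids:
        placeholders = ", ".join(["%s"] * len(child_ids))
        sql += f" AND custID IN ({placeholders})"
        params.extend(child_ids)
    sql += " ORDER BY customer_firstname"
    with db.cursor() as cur:
        cur.execute(sql, params)                  # the driver does the quoting
        return cur.fetchall()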

Mike adds:

I did the only thing I could think of to do with this code - sumbit it to TheDailyWTF.

I hope you also deleted it and removed any trace of it from source control history, but submitting here first was the right call.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Sam VargheseThree weeks on, Pell supporters retain their blinkers

“It is a capital mistake to theorise without data.” Sherlock Holmes, the creation of the late Sir Arthur Conan Doyle and still the most famous detective of fiction.

It is not surprising that nearly 20 days after the verdict on Cardinal George Pell was announced, the Australian lobbyist Gerard Henderson keeps trying to cast doubt on the verdict. Henderson is a staunch defender of the Catholic Church and one who thinks he knows all about journalism – even though he is just a lobbyist who rallies to causes on the right of politics.

Henderson runs an organisation known as The Sydney Institute which he characterises as “a privately funded not-for-profit current affairs forum encouraging debate and discussion”. Two of the companies that supply those funds are the airline Qantas and the telco Telstra. There are other organisations that fund Henderson’s war against the left too.

A former staff member of prime minister John Howard, Henderson’s gripes are documented every week in a trite blog he calls Media Watch Dog. He repeats himself often, gripes about things that only someone who has nothing to do would notice, and generally makes a fool of himself. Strangely, The Australian, one of the few papers in the country that still publishes as a broadsheet, runs this guff every week. Henderson also writes a weekly column for the same paper.

In many respects Henderson calls to mind the title character of the famous novel Dr Jekyll and Mr Hyde. In public, he puts on a different persona from the vicious one that he appears to adopt when penning his Media Watch Dog column. One of his major gripes is that the ABC, a government-funded broadcaster, has no conservative voice. This appears to be a cry for Henderson himself to be given a slot but he will never admit it openly.

In the case of Pell, Henderson has been pushing a few themes: journalists have little legal knowledge, jury verdicts are often questioned, and the arguments mounted by Frank Brennan, a lawyer and a priest who attended some part of the trial and claims that it went the wrong way, are correct.

Henderson twists arguments to suit his own purpose. For example, he cites the fact that the broadcaster Ray Hadley criticised Howard for giving Pell a character reference. Then he says that while journalists from the ABC and the Sydney Morning Herald never take Hadley seriously, this time they did.

In this, Henderson shows his ignorance of journalism and what constitutes news. When dog bites man, it is not news as it is the natural order of things. When man bites dog, it is news. Hadley’s criticism of Howard falls into the latter category as he is normally prone to praise anything and everything to do with politicians from the right. That’s why it was worthy of coverage by the ABC and the SMH.

Melissa Davey of The Guardian Australia is one reporter who attended the entire trial and heard all the evidence that was possible to hear. Henderson’s ignorance of what she has written underlines his lack of digital literacy.

She writes: “Brennan was barely in the trial. He did not sit through most of the evidence. When he was in, he was clearly aligned with Pell from the start, talking to him and shaking his hand.

“And as for his comments about journos lacking law experience; some of my colleagues in the trial have covered courts for years. Their knowledge is incredible. We all have high-level legal contacts to ensure we get it right.”

But Henderson appears not to have known of Davey’s tweets. That underlines both his ignorance of things digital – his blog does not even contain links – and his lack of knowledge of news. Her Twitter thread is the best expression by a journalist of what the trial was all about.

Pell will be sentenced on Wednesday [March 13] and the sentencing will be broadcast nationally. No doubt, that will give Henderson another topic to gripe about when he writes his tripe for the weekend.

Krebs on SecurityInsert Skimmer + Camera Cover PIN Stealer

Very often the most clever component of your typical ATM skimming attack is the hidden pinhole camera used to record customers entering their PINs. These little video bandits can be hidden 100 different ways, but they’re frequently disguised as ATM security features — such as an extra PIN pad privacy cover, or an all-in-one skimmer over the green flashing card acceptance slot at the ATM.

And sometimes, the scammers just hijack the security camera built into the ATM itself.

Below is the hidden back-end of a skimmer found last month placed over top of the customer-facing security camera at a drive-up bank ATM in Hurst, Texas. The camera components (shown below in green and red) were angled toward the cash machine’s PIN pad to record victims entering their PINs. Wish I had a picture of this thing attached to the ATM.

This hidden camera was fixed to the underside of a fake lens cover for the skimmed ATM’s built-in security camera. Image: Hurst Police.

The clever PIN grabber was paired with an “insert skimmer,” a wafer-thin, usually metallic and battery powered skimmer made to be fitted straight into the mouth of the ATM’s card acceptance slot, so that the card skimmer cannot be seen from outside of the compromised ATM.

The insert skimmer, seen as inserted into the card acceptance device in the hacked ATM. Image: Hurst PD.

For reference, here’s a similar card acceptance slot, minus the skimmer.

An unaltered ATM card acceptance slot (without insert skimmer).

Police in Hurst, Texas released a photo taken from footage showing what appears to be a young woman affixing the camera skimmer to the drive-up ATM. They said she was driving a blue Ford Expedition with silver trim on the lower portion of the vehicle.

The skimmer crooks seem to realize that far fewer people are going to cover their hand when entering a PIN at drive-up ATMs. Often the machine is either too high or too low for the driver-side window, and covering the PIN pad to guard against hidden cameras can be a difficult reach for a lot of people.

Nevertheless, covering the PIN pad with a hand, wallet or purse while you enter the PIN is one of the easiest ways to block skimming attacks. The skimmer scammers don’t just want your bank card: They want your PIN so they can create an exact copy of the card and use it at another ATM to empty your checking or savings account.

So don’t be like the parade of people in these videos from hidden cameras at hacked ATMs who never once covered the PIN pad.

Further reading: Woman Caught on Video Installing Skimmer Outside Bank’s ATM in Hurst

,

Planet DebianNoah Meyerhans: Further Discussion for DPL!

Further Discussion builds consensus within Debian!

Further Discussion gets things done!

Further Discussion welcomes diverse perspectives in Debian!

We'll grow the community with Further Discussion!

Further Discussion has been with Debian from the very beginning! Don't you think it's time we gave Further Discussion its due, after all the things Further Discussion has accomplished for the project?

Somewhat more seriously, have we really exhausted the community of people interested in serving as Debian Project Leader? That seems unfortunate. I'm not worried about it from a technical point of view, as Debian has established ways of operating without a DPL. But the lack of interest suggests some kind of stagnation within the community. Or maybe this is just the cabal trying to wrest power from the community by stifling the vote. Is there still a cabal?

Planet DebianMarkus Koschany: My Free Software Activities in February 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • February was the last month to package new upstream releases before the full freeze, if the changes were not too invasive of course :-). Atomix, gamine, simutrans, simutrans-pak64, simutrans-pak128.britain and hitori qualified.
  • I sponsored a new version of mgba, a Game Boy Advance emulator, for Reiner Herrmann and worked together with Bret Curtis on wildmidi and openmw. The latest upstream version resolved a long-standing bug and made it possible for the game engine, a reimplementation of The Elder Scrolls III: Morrowind, to be part of a Debian stable release for the first time.
  • Johann Suhter reported a bug in one of brainparty’s minigames and also provided the patch. All I had to do was upload it. Thanks. (#922485)
  • I corrected a minor cross-build FTBFS in openssn. Patch by Helmut Grohne. (#914724)
  • I released a new version of debian-games and updated the dependency list of our games metapackages. This is almost the final version but expect another release in one or two months.

Debian Java

Misc

Debian LTS

This was my thirty-sixth month as a paid contributor and I have been paid to work 19.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 25.02.2019 until 03.03.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in sox, collabtive, libkohana2-php, ldb, libpodofo, libvirt, openssl, wordpress, twitter-bootstrap, ceph, ikiwiki, edk2, advancecomp, glibc, spice-xpi and zabbix.
  • DLA-1675-1. Issued a security update for python-gnupg fixing 1 CVE.
  • DLA-1676-1. Issued a security update for unbound fixing 1 CVE.
  • DLA-1696-1. Issued a security update for ceph fixing 2 CVE.
  • DLA-1701-1. Issued a security update for openssl fixing 1 CVE.
  • DLA-1702-1. Issued a security update for advancecomp fixing 2 CVE.
  • DLA-1703-1. Issued a security update for jackson-databind fixing 10 CVE.
  • DLA-1706-1. Issued a security update for poppler fixing 5 CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my ninth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 25.02.2019 until 03.03.2019 and I triaged CVE in file, gnutls26, nettle, libvirt, busybox and eglibc.
  • ELA-84-1. Issued a security update for gnutls26 fixing 4 CVE. I also investigated CVE-2018-16869 in nettle and CVE-2018-16868 in gnutls26. After some consideration I decided to mark these issues as ignored because the changes were invasive and would have required intensive testing. The benefits appeared small in comparison.
  • ELA-88-1. Issued a security update for openssl fixing 1 CVE.
  • ELA-90-1. Issued a security update for libsdl1.2 fixing 11 CVE.
  • I started to work on sqlalchemy which requires a complex backport to fix a possible SQL injection vulnerability.

Thanks for reading and see you next time.

Planet DebianMichael Stapelberg: Winding down my Debian involvement

This post is hard to write, both in the emotional sense but also in the “I would have written a shorter letter, but I didn’t have the time” sense. Hence, please assume the best of intentions when reading it—it is not my intention to make anyone feel bad about their contributions, but rather to provide some insight into why my frustration level ultimately exceeded the threshold.

Debian has been in my life for well over 10 years at this point.

A few weeks ago, I have visited some old friends at the Zürich Debian meetup after a multi-year period of absence. On my bike ride home, it occurred to me that the topics of our discussions had remarkable overlap with my last visit. We had a discussion about the merits of systemd, which took a detour to respect in open source communities, returned to processes in Debian and eventually culminated in democracies and their theoretical/practical failings. Admittedly, that last one might be a Swiss thing.

I say this not to knock on the Debian meetup, but because it prompted me to reflect on what feelings Debian is invoking lately and whether it’s still a good fit for me.

So I’m finally making a decision that I should have made a long time ago: I am winding down my involvement in Debian to a minimum.

What does this mean?

Over the coming weeks, I will:

  • transition packages to be team-maintained where it makes sense
  • remove myself from the Uploaders field on packages with other maintainers
  • orphan packages where I am the sole maintainer

I will try to keep up best-effort maintenance of the manpages.debian.org service and the codesearch.debian.net service, but any help would be much appreciated.

For all intents and purposes, please treat me as permanently on vacation. I will try to be around for administrative issues (e.g. permission transfers) and questions addressed directly to me, provided they are easy enough to answer.

Why?

When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time. Now, over 5 years of full time work later, my day job taught me a lot, both about what works in large software engineering projects and how I personally like my computer systems. I am very conscious of how I spend the little spare time that I have these days.

The following sections each deal with what I consider a major pain point, in no particular order. Some of them influence each other—for example, if changes worked better, we could have a chance at transitioning packages to be more easily machine readable.

Change process in Debian

The last few years, my current team at work conducted various smaller and larger refactorings across the entire code base (touching thousands of projects), so we have learnt a lot of valuable lessons about how to effectively do these changes. It irks me that Debian works almost the opposite way in every regard. I appreciate that every organization is different, but I think a lot of my points do actually apply to Debian.

In Debian, packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian.

While it is great to have a lint tool (for quick, local/offline feedback), it is even better to not require a lint tool at all. The team conducting the change (e.g. the C++ team introduces a new hardening flag for all packages) should be able to do their work transparent to me.

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

Notably, the cost of each change is distributed onto the package maintainers in the Debian model. At work, we have found that the opposite works better: if the team behind the change is put in power to do the change for as many users as possible, they can be significantly more efficient at it, which reduces the total cost and time a lot. Of course, exceptions (e.g. a large project abusing a language feature) should still be taken care of by the respective owners, but the important bit is that the default should be the other way around.

Debian is lacking tooling for large changes: it is hard to programmatically deal with packages and repositories (see the section below). The closest to “sending out a change for review” is to open a bug report with an attached patch. I thought the workflow for accepting a change from a bug report was too complicated and started mergebot, but only Guido ever signaled interest in the project.

Culturally, reviews and reactions are slow. There are no deadlines. I literally sometimes get emails notifying me that a patch I sent out a few years ago (!!) is now merged. This turns projects from a small number of weeks into many years, which is a huge demotivator for me.

Interestingly enough, you can see artifacts of the slow online activity manifest itself in the offline culture as well: I don’t want to be discussing systemd’s merits 10 years after I first heard about it.

Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate. My canonical example for this is rsync, whose maintainer refused my patches to make the package use debhelper purely out of personal preference.

Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.

What would things look like in a better world?

  1. As a project, we should strive towards more unification. Uniformity still does not rule out experimentation; it just changes the trade-off from easier experimentation and harder automation to harder experimentation and easier automation.
  2. Our culture needs to shift from “this package is my domain, how dare you touch it” to a shared sense of ownership, where anyone in the project can easily contribute (reviewed) changes without necessarily even involving individual maintainers.

To learn more about how successful large changes can look like, I recommend my colleague Hyrum Wright’s talk “Large-Scale Changes at Google: Lessons Learned From 5 Yrs of Mass Migrations”.

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Of course, what you do in such a repository also varies subtly from team to team, and even within teams.

In practice, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Instead of using GitLab’s API to create a merge request, you have to design an entirely different, more complex system, which deals with intermittently (or permanently!) unreachable repositories and abstracts away differences in patch delivery (bug reports, merge requests, pull requests, email, …).

Wildly diverging workflows is not just a temporary problem either. I participated in long discussions about different git workflows during DebConf 13, and gather that there were similar discussions in the meantime.

Personally, I cannot keep enough details of the different workflows in my head. Every time I touch a package that works differently than mine, it frustrates me immensely to re-learn aspects of my day-to-day.

After noticing workflow fragmentation in the Go packaging team (which I started), I tried fixing this with the workflow changes proposal, but did not succeed in implementing it. The lack of effective automation and slow pace of changes in the surrounding tooling despite my willingness to contribute time and energy killed any motivation I had.

Old infrastructure: package uploads

When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).

Depending on timing, I estimated that you might wait for over 7 hours (!!) before your package is actually installable.

What’s worse for me is that feedback to your upload is asynchronous. I like to do one thing, be done with it, move to the next thing. The current setup requires a many-minute wait and costly task switch for no good technical reason. You might think a few minutes aren’t a big deal, but when all the time I can spend on Debian per day is measured in minutes, this makes a huge difference in perceived productivity and fun.

The last communication I can find about speeding up this process is ganneff’s post from 2008.

What would things look like in a better world?

  1. Anonymous FTP would be replaced by a web service which ingests my package and returns an authoritative accept or reject decision in its response.
  2. For accepted packages, there would be a status page displaying the build status and when the package will be available via the mirror network.
  3. Packages should be available within a few minutes after the build completed.

Old infrastructure: bug tracker

I dread interacting with the Debian bug tracker. debbugs is a piece of software (from 1994) which is only used by Debian and the GNU project these days.

Debbugs processes emails, which is to say it is asynchronous and cumbersome to deal with. Despite running on the fastest machines we have available in Debian (or so I was told when the subject last came up), its web interface loads very slowly.

Notably, the web interface at bugs.debian.org is read-only. Setting up a working email setup for reportbug(1) or manually dealing with attachments is a rather big hurdle.

For reasons I don’t understand, every interaction with debbugs results in many different email threads.

Aside from the technical implementation, I also can never remember the different ways that Debian uses pseudo-packages for bugs and processes. I need them too rarely to establish a mental model of how they are set up, or working memory of how they are used, but frequently enough to be annoyed by this.

What would things look like in a better world?

  1. Debian would switch from a custom bug tracker to a (any) well-established one.
  2. Debian would offer automation around processes. It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).

Old infrastructure: mailing list archives

It baffles me that in 2019, we still don’t have a conveniently browsable threaded archive of mailing list discussions. Email and threading is more widely used in Debian than anywhere else, so this is somewhat ironic. Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.

Debian is hard to machine-read

While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome. I have picked just 3 quick examples to illustrate my point.

debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts. Without actually installing a package, you cannot know which changes it does to the alternatives database.

pk4 needs to maintain its own cache to look up package metadata based on the package name. Other tools parse the apt database from scratch on every invocation. A proper database format, or at least a binary interchange format, would go a long way.

Debian Code Search wants to ingest new packages as quickly as possible. There used to be a fedmsg instance for Debian, but it no longer seems to exist. It is unclear where to get notifications from for new packages, and where best to fetch those packages.

Complicated build stack

See my “Debian package build tools” post. It really bugs me that the sprawl of tools is not seen as a problem by others.

Developer experience pretty painful

Most of the points discussed so far deal with the experience in developing Debian, but as I recently described in my post “Debugging experience in Debian”, the experience when developing using Debian leaves a lot to be desired, too.

I have more ideas

At this point, the article is getting pretty long, and hopefully you got a rough idea of my motivation.

While I described a number of specific shortcomings above, the final nail in the coffin is actually the lack of a positive outlook. I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

I intend to publish a few more posts about specific ideas for improving operating systems here. Stay tuned.

Lastly, I hope this post inspires someone, ideally a group of people, to improve the developer experience within Debian.

Planet DebianAndy Simpkins: Debian BSP: Cambridge continued

I am slowly making progress. I am quite pleased with myself for slowly moving beyond triage, test, verify to now beginning to understand what is going on with some bugs and being able to suggest fixes :-) That said, my C++ foo is poor, and adding QT into the mix puts #917711 beyond me.

Not only does quite a lot of work get done at a BSP; it is also very good to catch up with people, especially those who traveled to Cambridge from out of the area.  Thank you for taking your weekend to contribute to making Buster.

I must also take the opportunity to thank Sledge and Randombird for opening up their home to host the BSP and provide overnight accommodation as well.

More hacking is still going on…  Some different people from yesterday.

Differing people ++smcv –andrewsh  ++cjwatson –lamby

Planet DebianMichal Čihař: Weblate 3.5.1

Weblate 3.5.1 has been released today. Compared to the 3.5 release it brings several bug fixes and performance improvements.

Full list of changes:

  • Fixed Celery systemd unit example.
  • Fixed notifications from http repositories with login.
  • Fixed race condition in editing source string for monolingual translations.
  • Include output of failed addon execution in the logs.
  • Improved validation of choices for adding new language.
  • Allow to edit file format in component settings.
  • Update installation instructions to prefer Python 3.
  • Performance and consistency improvements for loading translations.
  • Make Microsoft Terminology service compatible with current zeep releases.
  • Localization updates.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet DebianAndrew Cater: Debian BSP Cambridge March 10th 2019 - post 2

Lots of very busy people chasing down bugs. A couple of folk have left. It's a good day and very productive: thanks to Steve and Jo, as ever, for food, coffee, coffee, beds and coffee.

Planet DebianAndrew Cater: Debian BSP Cambridge 10th March 2019

Folks are starting to turn up this morning. Kitchen full of people talking and cooking sausages and talking. A quiet room except for Pepper the dog chasing squeaky toys and people chasing into the kitchen for food. Folk are now gradually settling down to code and bug fix. All is good.

Planet Linux AustraliaChris Smart: Running Home Assistant on Fedora with Docker

Home Assistant is a really great, open source home automation platform written in Python which supports hundreds of components. They have a containerised version called Hass.io which can run on a bunch of hardware and has a built-in marketplace to make the running of addons (like Let’s Encrypt) easy.

I’ve been running Home Assistant on a Raspberry Pi for a couple of years, but I want something that’s more powerful and where I have more control. Here’s how you can use the official Home Assistant containers on Fedora (note that this does not include their Hass.io marketplace).

First, install Fedora Server edition, which comes with the handy web UI for managing the system called Cockpit.

Once you’re up and running, install Docker and the Cockpit plugin.

sudo dnf install -y docker cockpit-docker

Now we can start and enable the Docker daemon and restart cockpit to load the Docker plugin.

sudo systemctl start docker && sudo systemctl enable docker
sudo systemctl restart cockpit

Create a location for the Home Assistant configuration and set the appropriate SELinux context. This lets you modify the configuration directly from the host and restart the container to pick up the change.

sudo mkdir -p /hass/config
sudo chcon -Rt svirt_sandbox_file_t /hass

Start up a container called hass using the Home Assistant Docker image which will start automatically thanks to the restart option. We pass through the /hass/config directory on the host as /config inside the container.

docker run --init -d \
--restart unless-stopped \
--name="hass" \
-v /hass/config/:/config \
-v /etc/localtime:/etc/localtime:ro \
--net=host \
homeassistant/home-assistant

You should be able to see the container starting up.

sudo docker ps

If you need to, you can get the logs for the container.

sudo docker logs hass

Once it’s started, you should see port 8123 listening on the host.

sudo ss -ltnp |grep 8123

Finally, enable port 8123 on the firewall to access the service on your network.

sudo firewall-cmd --zone=FedoraServer --add-port=8123/tcp
sudo firewall-cmd --runtime-to-permanent

Now browse to the IP address of your server on port 8123 and you should see Home Assistant. Create an account to get started!

,

Planet Linux AustraliaDonna Benjamin: Powerful Non Defensive Communication (PNDC)

How do we take the war out of our words? Sharon Strand Ellison's Powerful Non-Defensive Communication approach highlights our natural defensive reaction to feeling unsure, anxious or attacked when communicating with others.

She outlines these six different defensive reactions:

  1. Surrender-Betray
  2. Surrender-Sabotage
  3. Withdraw-Escape
  4. Withdraw-Entrap
  5. Counterattack-Justify
  6. Counterattack-Blame 

The first step toward neutralising our natural "fight, flight or freeze" response is to look within and identify our own individual "go to" response.

I know I tend to respond with counterattack-justify when I feel I'm being criticised, or challenged.  Learning to pause, and listen, is an ongoing project.
 
Ellison goes on to provide strategies we can all learn to use instead of our natural reflexes which can escalate to conflict.
 
She advocates three approaches:
  1. Questions
  2. Statements
  3. Predictions
 
Ellison's book, "Taking the War out of our Words", is filled with examples and exercises that will help you learn the skills to employ this approach.

TEDMeet the Spring 2019 class of TED Residents


Digital activist Lindsay Amer (left) and Kenneth Chabert (center) listen to their fellow TED Residents introduce themselves. (Photo: Dian Lofton / TED)

On February 25, TED welcomed its latest class to the TED Residency program, an in-house incubator for breakthrough ideas. These 11 Residents will spend 14 weeks at TED’s New York headquarters, working and thinking together.

New Residents include:

  • A community organizer preserving disappearing languages
  • An LGBTQ+ digital activist educating kids and their parents about gender and identity
  • A social scientist chronicling how climate change impacts intimate details of our lives
  • A game developer moving in-real-life social-deduction games onto digital platforms
  • A former medical clown examining why performance art helps people get better
  • A venture capitalist predicting that adaptability will become the new watchword for success

Lindsay Amer is a content creator (pronoun: they/them) who makes educational videos for kids — and their parents. Their critically acclaimed web series, Queer Kid Stuff, gives children a vocabulary to help them express themselves. Amer is also developing a full-length screenplay for a family-friendly queer animated musical.

Daniel Bögre Udell is the cofounder and director of Wikitongues, a community organization that tackles language preservation by recording oral histories. UNESCO estimates that of the world’s 7,000 known languages, about 3,000 are at risk of being lost. He’s prototyping a toolkit to make it easier for people to get started on language preservation.


Resident alum Keith Kirkland gives the new class some words of encouragement. (Photo: Dian Lofton / TED)

Young men from the Bronx, New York, balance dual lives, says Kenneth Chabert. Often, they have to establish a reputation in their neighborhoods in order to survive, while also doing well in school so they can get out. Chabert addresses their quandary through his organization, Gentleman’s Retreat, which teaches a select group of young men emotional and conversational intelligence, helps them get into top colleges and universities, and provides experiences to extend their horizons.


New Resident Britt Wray is a Canadian science podcaster and broadcaster. (Photo: Dian Lofton / TED)

Social entrepreneur Robert Clauser wants to pair nonprofits with the ready resources (both human and financial) of corporations, in a kind of philanthropic matchmaking.

Digital game developer and programmer Charlotte Ellett thinks that social deduction exercises such as Werewolf and Mafia aren’t just party games. As the cofounder of C63 Industries, she works on tools and competitive, mind-bending PC games to help people develop their social skills.

Venture investor and writer Natalie Fratto explores how adaptability may be a form of intelligence. To grapple with constant technological change, she believes, our Adaptability Quotient (AQ) will soon become the primary indicator of success — leaving IQ and EQ in the dust.

Jessica Ochoa Hendrix is the CEO and cofounder of Killer Snails, an educational game startup that creates award-winning tabletop, digital and virtual reality games and brings science to life in K-12 schools. To date, she has piloted her curriculum in more than 50 schools across 26 states, under the auspices of the National Science Foundation.


Priscilla Pemu joins the Residency from Atlanta, GA. (Photo: Dian Lofton / TED)

Priscilla Pemu, MD, is an academic general internist who has developed a clinical platform to help patients with chronic diseases improve their health outcomes. Her patients receive digital health advice coupled with a coach from their church or community to hold them accountable — with stellar results! She now analyzes the conversations between participants and coaches to figure out what worked and how.

Michael Roberson, an adjunct professor at the New School and at Union Theological Seminary, is a longtime organizer in New York City’s ballroom community — which drew mainstream notice courtesy of Madonna’s 1990 “Vogue” video and Jennie Livingston’s 1991 documentary Paris Is Burning, and has since developed into a global subculture. Roberson is also a creative consultant for the FX series Pose. He believes that the family dynamics created within ballroom culture can teach the rest of us how to develop healthy communities.

Emmy Award winner Matt Wilson spent a decade as a medical clown at Memorial Sloan Kettering, helping children with life-threatening illnesses. He now researches how arts and performance improve health. He recently graduated from NYU with a master’s degree exploring the phenomena he has witnessed. (PS: He’s also a sword swallower!)


TED Residency Director Cyndi Stivers lays down the ground rules for the 14-week program. Rule #1: Show up! (Photo: Dian Lofton / TED)

Britt Wray, PhD, is a science writer and broadcaster who crafts stories about science, society and ethics. In her forthcoming book, she argues that climate change is creating intimate dilemmas in our lives, including whether and how to raise children. She’s the cohost of the BBC podcast Tomorrow’s World and is a contributing host on Canadian Broadcasting Corporation’s flagship science show The Nature of Things.

The new spring crew is joined by alumni from previous Residency classes, back to continue the important work they do. Among them is the founder of a sustainable food venture, a designer creating solutions for animals, and a playwright looking at the role technology plays in art. The returning group includes: Heidi Boisvert, Anindya Kundu, Mohammad Modarres, Kat Mustatea, Marlon Peterson, Mariana Prieto and Michael Rain.

,

CryptogramFriday Squid Blogging: Squid Proteins Can Be an Alternative to Plastic

Is there anything squids aren't good for?

Academic paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesWhat Makes “Green Book” an Unusual Oscar Winner

Last month, Green Book won Best Picture at the 91st Academy Awards. The movie tells the based-on-a-true-story tale of Tony Lip, a white working-class bouncer from the Bronx, who is hired to drive world-class classical pianist Dr. Don Shirley on a tour of performances in the early-1960s Deep South. Shirley and Lip butt heads over their differences, encounter Jim Crow-era racism, and, ultimately, form an unlikely friendship. With period-perfect art direction and top-notch actors in Mahershala Ali and Viggo Mortensen, the movie is competently crafted and performed fairly well at the box office.

Still, the movie has also been controversial for at least two reasons. First, many critics have pointed out that the movie paints a too simple account of racism and racial inequality and positions them as problem in a long ago past. New York Times movie critic Wesley Morris has called Green Book the latest in a long line of “racial reconciliation fantasy” films that have gone on to be honored at the Oscars.

But Green Book stands out for another reason. It’s an unlikely movie to win the Best Picture because, well, it’s just not very good.

Source: Wikimedia Commons

Sociologists have long been interested in how Hollywood movies represent society and which types of movies the Academy does and doesn’t reward. Matthew Hughey, for example, has noted the overwhelming whiteness of award winners at the Oscars, despite the Academy’s A2020 initiative aimed at improving the diversity of the Academy by 2020. But, as Maryann Erigha shows, the limited number of people of color winning at the Oscars reflects, in part, the broader under-representation and exclusion of people of color in Hollywood.

Apart from race, past research by Gabriel Rossman and Oliver Schilke has found that the Oscars tend to favor certain genres like dramas, period pieces, and movies about media workers (e.g., artists, journalists, musicians). Most winners are released in the final few months of the year and have actors or directors with multiple prior nominations. According to these considerations, Green Book had a lot going for it. Released during the holiday season, it is a historical movie about a musician, co-starring a prior Oscar winner and a prior multiple time Oscar nominee. Sounds like perfect Oscar bait.

And, yet, quality matters, too. It’s supposed to be the Best Picture after all. The problem is what makes a movie “good” is both socially-constructed and a matter of opinion. Most studies that examine questions related to movies measure quality using the average of film critics’ reviews. Sites like Metacritic compile these reviews and produce composite scores on a scale from 0 (the worst reviewed movie) to 100 (the best reviewed movie). Of course, critics’ preferences sometimes diverge from popular tastes (see: the ongoing box office success of the Transformers movies, despite being vigorously panned by critics). Still, movies with higher Metacritic scores tend to do better at the box office, holding all else constant.

If more critically-acclaimed movies do better at the box office, how does quality (or at least the average of critical opinion) translate into Academy Awards? It is certainly true that Oscar nominees tend to have higher Metacritic scores than the wider population of award-eligible movies. But the nominees are certainly not just a list of the most critically-acclaimed movies of the year. Among the films eligible for this year’s awards, movies like The Rider, Cold War, Eighth Grade, The Death of Stalin, and even Paddington 2 all had higher Metacritic scores than most of the Best Picture nominees. So, while nominated movies tend to be better than most movies, they are not necessarily the “best” in the eyes of the critics.

Even among the nominees, it is not the case that the most critically-acclaimed movie always wins. In the plot below, I chart the range of Metacritic scores of the Oscars nominees since the Academy Awards reinvented the category in 2009 (by expanding the number of nominees and changing the voting method). The top of the golden area represents the highest-rated movie in the pool of nominees and the bottom represents the worst-rated film. The line captures the average of the pool of nominees and the dots point out each year’s winner.


As we can see, the most critically-acclaimed movie doesn’t always win, but the Best Picture is usually above the average of the pool of nominees. What makes Green Book really unusual as a Best Picture winner is that it’s well below the average of this year’s pool and the worst winner since 2009. Moreover, according to MetaCritic (and LA Times’ film critic Justin Chang), Green Book is the worst winner since Crash in 2005.

Green Book’s Best Picture win has led to some renewed calls to reconsider the Academy’s ranked choice voting system in which voters indicate the order of preferences rather than voting for a single movie. The irony is that when Moonlight, a highly critically-acclaimed movie with an all-black cast, won in 2016, that win was seen as a victory made possible by ranked choice voting. Now, in 2019, we have a racially-controversial and unusually weak Best Picture winner that took home the award because it appears to have been the “least disliked” movie in the pool.

The debate over ranked choice voting for the Academy Awards may ultimately end in further voting rule changes. Until then, we should regard a relatively weak movie like Green Book winning Best Picture as the exception to the rule.   

Andrew M. Lindner is an Associate Professor at Skidmore College. His research interests include media sociology, political sociology, and sociology of sport.

(View original at https://thesocietypages.org/socimages)

TEDMeet the 2019 TED Fellows and Senior Fellows

The TED Fellows program turns 10 in 2019 — and to mark this important milestone, we’re excited to kick off the year of celebration by announcing the impressive new group of TED2019 Fellows and Senior Fellows! This year’s TED Fellows class represents 12 countries across four continents; they’re leaders in their fields — ranging from astrodynamics to policing to conservation and beyond — and they’re looking for new ways to collaborate and address today’s most complex challenges.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 472 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2019, April 15-19, in Vancouver, BC, Canada.

Alexis Gambis (USA | France)
Filmmaker + biologist
Filmmaker and biologist creating films that merge scientific data with narrative in an effort to make stories of science more human and accessible.


Ali Al-Ibrahim (Syria | Sweden)
Investigative journalist
Journalist reporting on the front lines of the Syrian conflict and creating films about the daily struggles of Syrians.


Amma Ghartey-Tagoe Kootin (USA)
Scholar + artist
Scholar and artist working across academia and the entertainment industry to transform archival material about black identity into theatrical performances.


Arnav Kapur (USA | India)
Technologist
Inventor creating wearable AI devices that augment human cognition and give voice to those who have lost their ability to speak.


Wild fishing cats live in the Mangrove forests of southeast Asia, feeding on fish and mangrove crab in the surrounding waters. Not much is known about this rare species. Conservationist Ashwin Naidu and his organization, Fishing Cat Conservancy, are working to protect these cats and their endangered habitat. (Photo: Anjani Kumar/Fishing Cat Conservancy)

Ashwin Naidu (USA | India)
Fishing cat conservationist
Conservationist and co-founder of Fishing Cat Conservancy, a nonprofit dedicated to protecting fishing cats and their endangered mangrove habitat.


Brandon Anderson (USA)
Data entrepreneur
Human rights activist and founder of Raheem AI, a tech nonprofit working to end police violence through data collection, storytelling and community organizing.


Brandon Clifford (USA)
Ancient technology architect
Architectural designer and co-founder of Matter Design, an interdisciplinary design studio that uses the technology of ancient civilizations to solve contemporary problems.


Bruce Friedrich (USA)
Food innovator
Founder of the Good Food Institute, an organization supporting the creation of plant and cell-based meat for a more healthy and sustainable food system.


Christopher Bahl (USA)
Protein designer
Molecular engineer using computational design to develop new protein drugs that combat infectious disease.


Erika Hamden (USA)
Astrophysicist
Astrophysicist developing telescopes and new ultraviolet detection technologies to improve our ability to observe distant galaxies.


Federica Bianco (USA | Italy)
Urban astrophysicist
Astrophysicist using an interdisciplinary approach to study stellar explosions and help build resilient cities by applying astronomical data processing techniques to urban science.


Gangadhar Patil (India)
Journalism entrepreneur
Journalist and founder of 101Reporters, an innovative platform connecting grassroots journalists with international publishers to spotlight rural reporting.


In Tokyo Medical University for Rejected Women, multimedia artist Hiromi Ozaki explores the systematic discrimination of female applicants to medical school in Japan. (Photo: Hiromi Ozaki)

Hiromi Ozaki (Japan | UK)
Artist
Artist creating music, film and multimedia installations that explore the social and ethical implications of emerging technologies.


Ivonne Roman (USA)
Police captain
Police captain and co-founder of the Women’s Leadership Academy, an organization working to increase the recruitment and retention of women in policing.


Jess Kutch (USA)
Labor entrepreneur
Co-founder of Coworker.org, a labor organization for the 21st century helping workers solve problems and advance change through an open online platform.


Leila Pirhaji (Iran | USA)
Biotech entrepreneur
Computational biologist and founder of ReviveMed, a biotech company pioneering the use of artificial intelligence for drug discovery and treatment of metabolic diseases.


Moreangels Mbizah (Zimbabwe)
Lion conservationist
Conservation biologist developing innovative community-based conservation methods to protect lions and their habitat.


Moriba Jah (USA)
Space environmentalist
Astrodynamicist tracking and monitoring satellites and space garbage to make outer space safe, secure and sustainable for future generations.


Muthoni Drummer Queen (Kenya)
Musician
Musician and cultural entrepreneur fusing traditional drum patterns and modern styles such as hip-hop and reggae to create the sound of “African cool.”


Nanfu Wang (China | USA)
Documentary filmmaker
Documentary filmmaker uncovering stories of human rights and untold histories in China through a characteristic immersive approach.


TED2019 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Adital Ela (Israel)
Sustainable materials designer
Entrepreneur developing sustainable materials and construction methods that mimic natural processes and minimize environmental impact.


Anita Doron (Canada | Hungary)
Filmmaker
Filmmaker who wrote The Breadwinner, an Oscar-nominated coming-of-age story set in Taliban-controlled Afghanistan.


Constance Hockaday (USA)
Artist
Artist creating experiential performances on public waterways that examine issues surrounding public space, political voice and belonging.


Eman Mohammed (USA | Palestine)
Photojournalist
Photojournalist documenting contemporary issues, including race relations and immigration, often through a characteristic long-form approach.


Erine Gray (USA)
Social services entrepreneur
Software developer and founder of Aunt Bertha, a platform helping people access social services such as food banks, health care, housing and educational programs.


In one of her projects, documentary photographer Kiana Hayeri took a rare, intimate look at the lives of single mothers in Afghanistan, capturing their struggles and strengths. Here, two children hang a picture of their father. (Photo: Kiana Hayeri)

Kiana Hayeri (Canada | Iran)
Documentary photographer
Documentary photographer exploring complex topics such as migration, adolescence and sexuality in marginalized communities.


An illustration of Tungsenia, an early relative of lungfish. Paleobiologist Lauren Sallan studies the vast fossil records to explore how extinctions of fish like this have affected biodiversity in the earth’s oceans. (Photo: Nobu Tamura)

Lauren Sallan (USA)
Paleobiologist
Paleobiologist using the vast fossil record as a deep time database to explore how mass extinctions, environmental change and shifting ecologies impact biodiversity.


Pratik Shah (USA | India)
Health technologist
Scientist developing new artificial intelligence technologies for antibiotic discovery, faster clinical trials and tools to help doctors better diagnose patients.


Premesh Chandran (Malaysia)
Journalism entrepreneur
Cofounder and CEO of Malaysiakini.com, the most popular independent online news organization in Malaysia, which is working to create meaningful political change.


Samuel “Blitz the Ambassador” Bazawule (USA | Ghana)
Musician + filmmaker
Hip-hop artist and filmmaker telling stories of the polyphonic African diaspora.

Krebs on SecurityMyEquifax.com Bypasses Credit Freeze PIN

Most people who have frozen their credit files with Equifax have been issued a numeric Personal Identification Number (PIN) which is supposed to be required before a freeze can be lifted or thawed. Unfortunately, if you don’t already have an account at the credit bureau’s new myEquifax portal, it may be simple for identity thieves to lift an existing credit freeze at Equifax and bypass the PIN armed with little more than your name, Social Security number and birthday.

Consumers in every U.S. state can now freeze their credit files for free with Equifax and two other major bureaus (Trans Union and Experian). A freeze makes it much harder for identity thieves to open new lines of credit in your name.

In the wake of Equifax’s epic 2017 data breach impacting some 148 million Americans, many people did freeze their credit files at the big three in response. But Equifax has changed a few things since then.

Seeking to manage my own credit freeze at equifax.com as I’d done in years past, I was steered toward creating an account at myequifax.com, which I was shocked to find I did not previously possess.

Getting an account at myequifax.com was easy. In fact, it was too easy. The portal asked me for an email address and suggested a longish, randomized password, which I accepted. I chose an old email address that I knew wasn’t directly tied to my real-life identity.

The next page asked me to enter my SSN and date of birth, and to share a phone number (sharing was optional, so I didn’t). SSN and DOB data is widely available for sale in the cybercrime underground on almost all U.S. citizens. This has been the reality for years, and was so well before Equifax announced its big 2017 breach.

myEquifax said it couldn’t verify that my email address belonged to the Brian Krebs at that SSN and DOB. It then asked a series of four security questions — so-called “knowledge-based authentication” or KBA questions designed to see if I could recall bits about my recent financial history.

In general, the data being asked about in these KBA quizzes is culled from public records, meaning that this information likely is publicly available in some form — either digitally or in-person. Indeed, I have long assailed the KBA industry as creating a false sense of security that is easily bypassed by fraudsters.

One potential problem with relying on KBA questions to authenticate consumers online is that so much of the information needed to successfully guess the answers to those multiple-choice questions is now indexed or exposed by search engines, social networks and third-party services online — both criminal and commercial.

The first three multiple-guess questions myEquifax asked were about loans or debts that I have never owed. Thus, the answer to the first three KBA questions asked was, “none of the above.” The final question asked for the name of our last mortgage company. Again, information that is not hard to find.

Satisfied with my answers, Equifax informed me that yes indeed I was Brian Krebs and that I could now manage my existing freeze with the company. After requesting a thaw, I was brought to a vintage Equifax page that looked nothing like myEquifax’s sunnier new online plumage.

Equifax’s site says it will require users requesting changes to an existing credit freeze to have access to their freeze PIN and be ready to supply it. But Equifax never actually asks for the PIN.

This page informed me that if I had previously secured a freeze of my credit file with Equifax and been given a PIN needed to undo that status in any way, I should be ready to provide said information if I was requesting changes via phone or email.

In other words, credit freezes and thaws requested via myEquifax don’t require users to supply any pre-existing PIN.

Fine, I said. Let’s do this.

myEquifax then asked for the date range requested to thaw my credit freeze. Submit.

“We’ve successfully processed your security freeze request!,” the site declared.

This also was exclaimed in an email to the random old address I’d used at myEquifax, although the site never once made any attempt to validate that I had access to this inbox, something that could be done by simply sending a confirmation link that needs to be clicked to activate the account.

In addition, I noticed Equifax added my old mobile number to my account, even though I never supplied this information and was not using this phone when I created the myEquifax account.

Successfully unfreezing (temporarily thawing) my credit freeze did not require me to ever supply my previously-issued freeze PIN from Equifax. Anyone who knew the vaguest and most knowable details about me could have done the same.

myEquifax.com does not currently seek to verify the account by requesting confirmation via a phone call or text to the phone number associated with the account (also, recall that even providing a phone number was optional).

Happily, I did discover that when I used a different computer and Internet address to try to open up another account under my name, date of birth and SSN, it informed me that a profile already existed for this information. This suggests that signing up at myEquifax is probably a good idea, given that the alternative is more risky.

It was way too easy to create my account, but I’m not saying everyone will be able to create one online. In testing with several readers over the past 24 hours, myEquifax seems to be returning a lot more error pages at the KBA stage of the process now, prompting people to try again later or make a request via email or phone.

Equifax spokesperson Nancy Bistritz-Balkan said not requiring a PIN for people with existing freezes was by design.

“With myEquifax, we created an online experience that enables consumers to securely and conveniently manage security freezes and fraud alerts,” Bistritz-Balkan said.

“We deployed an experience that embraces both security standards (using a multi-factor and layered approach to verify the consumer’s identity) and reflects specific consumer feedback on managing security freezes and fraud alerts online without the use of a PIN,” she continued. “The account set-up process, which involves the creation of a username and password, relies on both user inputs and other factors to securely establish, verify, and authenticate that the consumer’s identity is connected to the consumer every time.”

I asked Bistritz-Balkan what else besides a username and a password the company may have meant by “multi-factor;” I’m still waiting for clarification. But I did not experience anything like multi-factor in setting up or logging into my myEquifax account.

This may be closer to Equifax’s idea of multi-factor: The company told me that if I still really wanted to use my freeze PIN, I could always call their 800 number (800-349-9960) or make the request via mail. Nevermind that if I’m a bad guy looking to hack others, I’m definitely going to be using the myEquifax Web site — not the options that make me have to supply a PIN.

Virtually the entire United States population became eligible for free credit monitoring from Equifax following its 2017 breach. Credit monitoring can be useful for recovering from identity theft, but consumers should not expect these services to block new account fraud; the most they will likely do in this case is alert you after ID thieves have already opened new accounts in your name.

A credit freeze does not impact your ability to use any existing financial accounts you may have, including bank and credit/debit accounts. Nor will it protect you from fraud on those existing accounts. It is mainly a way to minimize the risk that someone may be able to create new accounts in your name.

If you haven’t done so lately, it might be a good time to order a free copy of your credit report from annualcreditreport.com. This service entitles each consumer to one free copy of their credit report annually from each of the three credit bureaus — either all at once or spread out over the year.

Additional reading:

NYTimes, March 8, 2019: How Equifax Complicates a Simple Task: Freezing a Child’s Credit

The Register, March 8, 2019: Tech Security at Equifax was so diabolical, senators want to pass US laws making its incompetence illegal.

Equifax Investigation by Senate Homeland Security committee (.PDF, Sen. Carper).

Credit Freezes are Free: Let the Ice Age Begin

Plant Your Flag, Mark Your Territory

Experian Site Can Give Anyone Your Freeze PIN

Survey: Americans Spent $1.4B on Credit Freeze Fees in Wake of Equifax Breach

Equifax Breach Fallout: Your Salary History

Data Broker Giants Hacked by ID Theft Service

Experian Sold Access to ID Theft Service

CryptogramCybersecurity Insurance Not Paying for NotPetya Losses

This will complicate things:

To complicate matters, having cyber insurance might not cover everyone's losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military their claim was excluded under the "hostile or warlike action in time of peace or war" exemption.

I get that $100 million is real money, but the insurance industry needs to figure out how to properly insure commercial networks against this sort of thing.

Worse Than FailureError'd: No Matter Where You Go...an Error is There

Michael P. wrote, "Only two minutes and a couple of blocks from my destination, Waze decided I should take a 2-hour, 80-mile detour."

"Thanks, Fry's, but I don't think I'll be needing a raincheck if you run out of those specials," Todd C. writes.

"I was digging around in the settings for the Outlook Web app and, well, it's good to see that even Microsoft has trouble with dates sometimes," John W. writes.

Dan writes, "In a situation like this, one would hope that a well-stocked convenience store stocks replacement memory."

"While waiting for take off at DCA, I noticed a big screen advertisement for a client MAC address," Ken L. wrote.

Job wrote, "Statements that something will or won't happen in Production the minute you are actually IN Production."

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet Linux AustraliaCraige McWhirter: Secure Data in Public Configuration Management With Propellor

Steam Punk Propellor Ring by Daniel Proulx

TL;DR

List fields and contexts:

$ propellor --list-fields

Set a field for a particular context:

$ propellor --set 'SshAuthorizedKeys "myuser"' yourServers < authKeys

Dump a field from a specific context:

$ propellor --dump 'SshAuthorizedKeys "myuser"' yourServers

An Example

When using Propellor for configuration management, you can utilise GPG encryption to encrypt data sets. This enables you to leverage public git repositories for your centralised configuration management needs.

To list existing fields, you can run:

$ propellor --list-fields

which will not only list existing fields but will helpfully also list fields that would be used if set:

Missing data that would be used if set:
Field                             Context          Used by
-----                             -------          -------
'Password "myuser"'               'yourDesktops'   your.host.name
'CryptPassword "myuser"'          'yourServers'    your.server.name
'PrivFile "/etc/mail/dkim.key"'   'mailServers'    your.mail.server

You can set these fields with input from either STDIN or files prepared earlier.

For example, if you have public SSH keys you wish to distribute, you can place them into a file and then use that file to populate the fields of an appropriate context. The contents of an example authorized_keys file, which we'll call authKeys, may look like this:

ssh-ed25519 eetohm9doJ4ta2Joo~P2geetoh6aBah9efu4ta5ievoongah5feih2eY4fie9xa1ughi you@host1
ssh-ed25519 choi7moogh<i2Jie6uejoo6ANoMei;th2ahm^aiR(e5Gohgh5Du-oqu1roh6Mie4shie you@host2
ssh-ed25519 baewah%vooPho2Huofaicahnob=i^ph;o1Meod:eugohtiuGeecho2eiwi.a7cuJain6 you@host3

To add these keys to the appropriate users for the hosts of a particular context you could run:

$ propellor --set 'SshAuthorizedKeys "myuser"' yourServers < authKeys

To verify that the fields for this context have the correct data, you can dump it:

$ propellor --dump 'SshAuthorizedKeys "myuser"' yourServers
gpg: encrypted with 256-bit ECDH key, ID 5F4CEXB7GU3AHT1E, created 2019-03-08
      "My User <myuser@my.domain.tld>"
      ssh-ed25519 eetohm9doJ4ta2Joo~P2geetoh6aBah9efu4ta5ievoongah5feih2eY4fie9xa1ughi you@host1
      ssh-ed25519 choi7moogh<i2Jie6uejoo6ANoMei;th2ahm^aiR(e5Gohgh5Du-oqu1roh6Mie4shie you@host2
      ssh-ed25519 baewah%vooPho2Huofaicahnob=i^ph;o1Meod:eugohtiuGeecho2eiwi.a7cuJain6 you@host3

When you next spin Propellor for the desired hosts, those SSH public keys will be installed into the authorized_keys file for the user myuser on hosts that belong to the yourServers context.

Setting and Storing Passwords

One of the most obvious and practical uses of this feature is to set secure data that needs to be distributed, such as passwords or certificates. We'll use passwords for this example.

Create a hash of the password you wish to distribute:

$ mkpasswd -m sha-512 > /tmp/deleteme
Password:
$ cat /tmp/deleteme
$6$cyxX.TmGPZWuqQu$LxhbVBaUnFmevOVi1V1NApZA0TCcSkK1241eiZwhhBQTm/PpjoLHe3OMnbjeswa6rgzNAq3pXTB4KjvfF1iXA1

Now that we have that file, we can use it as input for Propellor:

$ propellor --set 'CryptPassword "myuser"' yourServers < /tmp/deleteme
Enter private data on stdin; ctrl-D when done:
gpg: encrypted with 256-bit ECDH key, ID 5F4CEXB7GU3AHT1E, created 2019-03-08
      "My User <myuser@my.domain.tld>"
gpg: WARNING: standard input reopened
Private data set.

Tidy up:

$ rm /tmp/deleteme

You're now ready to deploy that password for that user to those servers.
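
Depending on your shell and mkpasswd version, you may also be able to skip the temporary file entirely and pipe the hash straight into Propellor (the field and context names here are the same ones used above):

$ mkpasswd -m sha-512 | propellor --set 'CryptPassword "myuser"' yourServers

Either way, the hash ends up GPG-encrypted in the Propellor repository rather than sitting around on disk in the clear.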

Planet Linux AustraliaMichael Still: Mirabella Genio smart lights with Tasmota and Home Assistant

One of the things I like about Home Assistant is that it allows you to take hardware from a bunch of various vendors and stitch it together into a single consistent interface. So for example I now have five home automation vendor apps on my phone, but don’t use any of them because Home Assistant manages everything.

A concrete example — we have Philips Hue lights, but they’re not perfect. They’re expensive, require a hub, and need to talk to a Philips data centre to function (i.e. the internet needs to work at my house, which isn’t always true thanks to the failings of the Liberal Party).

I’d been meaning to look at the cheapo smart lights from Kmart for a while, and finally got around to it this week. For $15 you can pick up a dimmable white globe, and for $29 you can have an RGB one. That’s heaps cheaper than the Hue options. Even better, the globes are flashable to run the open source Tasmota stack, which means no web services required!

So here are some instructions on flashing these globes to be useful:

Buy the globes. I bought this warm white dimmable option and this RGB option.

Flash to tasmota. This was a little bit fiddly, but was mostly about getting the sequence to put the globes into config mode right (turn off for 10 seconds, turn on, turn off, turn on, turn off, turn on). Wait a few seconds and then expect the lamp to blink rapidly indicating it's in config mode. For Canberra people I now have a raspberry pi setup to do this easily, so we can run a flashing session sometime if people want.

Configure tasmota. This is really up to you, but the globes need to know local wifi details, where your MQTT server is, and stuff like that.

And then configure Home Assistant. The example of how to do that from my house is on github.
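
If you want to sanity-check the MQTT side before (or after) wiring up Home Assistant, the standard mosquitto command line clients are handy. Tasmota listens for commands on cmnd/<topic>/POWER; the topic name globe1 and the broker address below are placeholders for whatever you configured:

$ mosquitto_sub -h mqtt.example.lan -t 'stat/globe1/#' -v &
$ mosquitto_pub -h mqtt.example.lan -t 'cmnd/globe1/POWER' -m ON

If the globe turns on and a stat/globe1/POWER ON message comes back, the only piece left to get right is the Home Assistant configuration.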

,

CryptogramDetecting Shoplifting Behavior

This system claims to detect suspicious behavior that indicates shoplifting:

Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras for fidgeting, restlessness and other potentially suspicious body language.

The article has no detail or analysis, so we don't know how well it works. But this kind of thing is surely the future of video surveillance.

CryptogramCybersecurity for the Public Interest

The Crypto Wars have been waging off-and-on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other are almost every cryptographer and computer security expert, repeatedly explaining that there's no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It's an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism -- as practiced by the Internet companies that are already spying on everyone -- matters. So do society's underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals -- that occasionally become law -- that are technological disasters.

This isn't sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand -- and are involved in -- policy. We need public-interest technologists.

Let's pause at that term. The Ford Foundation defines public-interest technologists as "technology practitioners who focus on social justice, the common good, and/or the public interest." A group of academics recently wrote that public-interest technologists are people who "study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good." Tim Berners-Lee has called them "philosophical engineers." I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it's not the best term -- and I know not everyone likes it -- but it's a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn't want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn't new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it's really not. There aren't enough people doing it, there aren't enough people who know it needs to be done, and there aren't enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There's a report titled A Pivotal Moment that includes this quote: "While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress."

That quote speaks to the three places for intervention. One: the supply side. There just isn't enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with "clinics" at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they've learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama's US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges -- places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn't just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There's a pervasive myth in Silicon Valley that technology is politically neutral. It's not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn't matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: "What should be governed by the state, and what should be governed by the market?" This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: "How much of our lives should be governed by technology, and under what terms?" In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February issue of IEEE Security & Privacy.

Together with the Ford Foundation, I am hosting a one-day mini-track on public-interest technologists at the RSA Conference this week on Thursday. We've had some press coverage.

Edited to Add (3/7): More news articles.

CryptogramLetterlocking

Really good article on the now-lost art of letterlocking.

Worse Than FailureCodeSOD: Offensively Defensive Offense

Sometimes, the best defense is a good offense. Other times, the best offense is a good defense. And if you’re not sure which is which, you’ll never be a true strategic mastermind.

Tina’s co-worker understands that this is true for defensive programming. Always, always, always catch exceptions. That’s a good defense.

Project getProject() {
    Project projectToReturn = null;
    try {
      projectToReturn = new Project();
    } catch (Exception e) {
      Logger.log("could not instantiate Project");
    }
    return projectToReturn;
}

The good offense is not actually doing anything useful with the exception and returning a null. Now the calling code needs to also go on the defense to make sure that they handle that null appropriately.
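
As a sketch of what that means in practice (the calling code and doSomething() below are hypothetical), every call site now needs its own plan for a missing Project, far away from the exception that actually explains the failure:

Project project = getProject();
if (project == null) {
    // the original exception, and the reason for the failure, are long gone by now
    throw new IllegalStateException("could not instantiate Project");
}
project.doSomething();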

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Worse Than FailureSponsor Post: Free TDWTF Mug Day 2019

Long time, no mug! It's been an insanely long time since we've held a Free TDWTF Mug Day. So long that I'm sure most of you have forgotten the joy that is free mug day. Here's how it works:

I've been pretty excited about BuildMaster 6.1, in part because it returns the product to my original vision of helping developers focus on writing great software instead of worrying about how to build, test, and deploy it from source code to production. Or, CI/CD as we'd call it today.

I'd love to get your feedback on the release, and perhaps ideas on how I can work to improve the product. If you'd be willing to help me, I'll send you one of these beautiful, oversized TDWTF mugs, as modeled by Jawaad M:

Jawaad is a large man, and even this mug is almost too big for him

To get one, all you have to do is either download/install BuildMaster or spin up our pre-made virtual machine (AMI) image, then run through this quick configuration and fill out this form with your name, address, etc. It should take all of 15 minutes or so to complete.

Everything's free, and there's no credit card needed, or anything like that. In fact, you can keep using BuildMaster for free if you'd like -- there's no server, application, or even user limit.

This offer expires on March 31, 2019, and supply is limited to 250, so sign up soon! To get started, just follow this link and, in a few weeks time, you'll not only be more knowledgeable about BuildMaster, but you'll be enjoying beverages much more fashionably with these nice, hefty The Daily WTF mugs.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

TEDTED original podcast WorkLife with Adam Grant is back with Season 2 (and a sneak peek trailer)

The breakaway hit returns March 5, delving deeper into how we work and the psychology of making work not suck

Organizational psychologist, bestselling author and TED speaker Adam Grant returns March 5 with Season 2 of WorkLife with Adam Grant, a TED Original podcast series that takes you inside the minds of some of the world’s most unusual professionals to discover the keys to a better work life. Listen to a sneak peek trailer now and subscribe.

WorkLife was among Apple Podcasts’ most downloaded new shows of 2018, and the trailer gives a taste of what’s in store for 2019 – from celebrating the potential of black sheep in the workplace (as Pixar did) to bouncing back from rejection and examining whether it’s actually possible to create an a*hole-free office.

Each new WorkLife episode dives into different remarkable, and often unexpected, workplaces – among them the US Navy, Duolingo and the Norwegian Olympic alpine ski team. Adam’s immersive interviews take place in the field as well as the studio, with a mission to empower listeners with insightful and actionable ideas that they can apply to their own work.

“I’m exploring ways to make work more creative and more fun,” says Adam, the bestselling author of Originals and Give and Take. “We spend almost a quarter of our lives in our jobs, and I want to figure out how to make all that time worth your time.”

Produced by TED in partnership with Transmitter Media, WorkLife is TED’s first original podcast created in partnership with a TED speaker. Adam’s talks “Are you a giver or a taker?” and “The surprising habits of original thinkers” have together been viewed more than 15 million times in the past three years.

WorkLife is part of TED’s continued expansion of its content programming beyond its signature TED-talk format in both the audio and video space. Other recent TED original content launches include The TED Interview, a podcast hosted by Head of TED Chris Anderson that features deep dives with TED speakers; Small Thing Big Idea, a Facebook Watch video series about everyday designs that changed the world; and the Indian primetime live-audience television series TED Talks India: Nayi Soch, hosted by Bollywood star and TED speaker Shah Rukh Khan.

WorkLife with Adam Grant Season 2 debuts Tuesday, March 5 on Apple Podcasts, the TED Android app, or wherever you like to listen to podcasts. Season 2 features eight episodes, roughly 30 minutes each. It’s sponsored by Accenture, Bonobos, Hilton and JPMorgan Chase & Co. New episodes will be made available every Tuesday.

Cory DoctorowWhere to catch me this weekend at SXSW

I’m heading back to Austin for the SXSW Interactive festival and you can catch me three times this weekend: first on the Untold AI panel with Malka Older, Rashida Richardson and Christopher Noessel (5-6PM, Fairmont Manchester AB); then at the EFF Austin Party with Cindy Cohn and Bruce Sterling (7PM, 1309 Bonham Terrace); and on Sunday, I’m giving a keynote for Berlin’s Re:Publica conference, which has its own track at SXSW; I’m speaking about Europe’s new Copyright Directive and its dread Article 13 at 1PM at Buffalo Billiards, 201 East 6th Street.

CryptogramDigital Signatures in PDFs Are Broken

Researchers have demonstrated spoofing of digital signatures in PDF files.

This would matter more if PDF digital signatures were widely used. Still, the researchers have worked with the various companies that make PDF readers to close the vulnerabilities. You should update your software.

Details are here.

News article.

Worse Than FailureCodeSOD: Switching to Offshore

A lot of ink has been spilled talking about the perils and pitfalls of offshore development. Businesses love playing the wage arbitrage game, shipping software development tasks to an outside vendor and paying half the wage they would for a dedicated employee.

Of course, the key difference is the word “dedicated”. You could have the most highly skilled offshore developer imaginable, but at the end of the day: they don’t care. They don’t care about you, or your business. They don’t care if their code is good or easy to maintain. They’re planning to copy-and-paste their way up the ranks of their business organization, and that means they want to get rotated off your contract onto whatever the next “plum” assignment is.

Jules H worked for a company which went so far as to mandate that every project needed to leverage at least one offshore resource. In practice, this meant any project of scale would shuffle through half a dozen offshore developers during its lifetime. The regular employees ended up doing more work on the project than they would have if it had been developed in-house, because each specification had to be written out in utterly exhaustive detail, down to explaining what a checkbox was, because if you simply told them to add “a checkbox”, you’d get a dropdown or a text area instead.

Jules got called in when someone noticed that the “save” functionality on one page had completely broken. Since the last change to that application had been a year ago, the feature hadn’t been working for a year.

The page used a client-side MVC framework that also tied into a .NET MVC view, and due to the way the page had been implemented, all the client widgets treated everything as stringly typed (e.g., "true", not true), but all the server-side code expected JSON data with real types.

The code which handled this conversion was attempting to access a field which didn’t exist, and that was the cause of the broken save functionality. That was easy for Jules to fix, but it also meant he had to take a look at the various techniques they used to handle this stringly-typed conversion.

var parseBoolean = function(string) {
  var bool;
  bool = (function() {
      switch (false) {
          case string.toLowerCase() !== 'true':
              return true;
          case string.toLowerCase() !== 'false':
              return false;
      }
  })();
      return bool;
};

When I first glanced at this code, I thought the biggest WTF was the switch. We switch on the value false, which forces all of our cases to do !==, only to return the opposite value.

But that’s when I noticed the return. The entire switch is, for some inexplicable reason, wrapped in an immediately invoked function expression: an anonymous function called instantly.
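
For comparison, a minimal version that behaves the same for 'true' and 'false' inputs (though, unlike the original, it returns false rather than undefined for anything else) could be as short as:

var parseBoolean = function(string) {
  // true only for 'true' (case-insensitive); false for everything else
  return string.toLowerCase() === 'true';
};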

Jules made the bare minimum number of changes to get the save function working, but showed this to his management. This particular entry was the final straw, and after a protracted fight with upper management, Jules’s team didn’t have to use the offshore developers anymore.

Of course, that didn’t mean the contract was cancelled. Other teams were now expected to fully leverage that excess capacity.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaGary Pendergast: Authentication in WordPress

WebAuthn is now a W3C recommendation, bringing us one step closer to not having to use passwords anymore. If you’re not familiar with WebAuthn, here’s a little demo (if you don’t own a security key, it’ll probably work best on an Android phone with a fingerprint reader).

That I needed to add a disclaimer for the demo indicates the state of WebAuthn authenticator support. It’s nice when it works, but it’s clearly still in progress, and that progress varies. WebAuthn also doesn’t cover how the authenticator device works, that falls under the proposed CTAP standard. They work together to form the FIDO2 Project. Currently, the most reliable option is to purchase a security key, but quality varies wildly, and needing to carry around an extra dongle just for logging in to sites is no fun.
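
For a rough idea of what the browser side looks like, registering a credential boils down to a single navigator.credentials.create() call inside an async function; the relying party name, user handle and challenge below are placeholder values that would normally come from the server:

const credential = await navigator.credentials.create({
  publicKey: {
    // in a real flow the server generates, stores and later verifies the challenge
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Blog" },
    user: {
      id: new TextEncoder().encode("user-123"), // opaque user handle
      name: "myuser",
      displayName: "My User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  },
});
// credential.response is then sent back to the server for verification and storage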

What WordPress Needs

Anything that replaces passwords needs to provide some extra benefit, without losing the strengths of the password model:

  • Passwords are universally understood as an authentication model.
  • They’re portable: you don’t need a special app or token to use them anywhere.
  • They’re extendable: strong passwords can be enforced as needed. Additional authentication (2FA codes, for example) can be added, too.

Magic login links are an interesting step in this direction. The WordPress mobile apps added magic login support for WordPress.com accounts a while ago, I’d love to see this working on all WordPress sites.

A WebAuthn-based model would be a wonderful future step, once the entire user experience is more polished.

The password-less future hasn’t quite arrived yet, but we’re getting closer.

,

TEDRegister for the first course of the Community Health Academy

More than a billion people in the world lack access to basic health care. It’s a hard truth that Raj Panjabi pointed to as he accepted the TED Prize in 2017 — globally, there’s a shortage of accredited health workers, and many people living in remote areas are all but cut off from care. There’s a proven way to make sure they get it: Train locals to serve as community health workers, giving them the skills to bridge between their neighbors and the health care system. Trained community health workers can extend health care to millions of people.

Panjabi’s wish was to launch the Community Health Academy, a global platform dedicated to training, connecting and empowering community health workers and health system leaders. Today, the Academy opens registration for its first leadership course, offered in partnership with HarvardX and edX: “Strengthening Community Health Worker Programs to Deliver Primary Health Care.” The course will introduce the key concepts of national community health worker programs and look at some of the common challenges in launching and building them. It includes lessons from a wide variety of instructors — from former Liberia Minister of Health Dr. Bernice Dahn to healthcare pioneer Paul Farmer — diving into their experience building national community health worker programs. Through case studies of countries where these programs have worked — including Ethiopia, Bangladesh and Liberia, where Panjabi’s Last Mile Health operates — participants will learn how to advocate for, start and optimize community health worker programs.

This course was created by health systems leaders for health systems leaders. It can be taken individually, but learners are also encouraged to gather with colleagues within or across organizations to share their insights. The goal: to set up leaders in more countries to build community health worker programs and bridge the gaps in care.

Stay tuned for more courses from the Community Health Academy. Because as Panjabi put it in his talk, “For all of human history, illness has been universal and access to care has not. But as a wise man once told me: no condition is permanent. It’s time.”

Cory DoctorowCritical praise for RADICALIZED, my next book, from Booklist and Publishers Weekly

My next book of science fiction for adults is Radicalized, which will be published on March 19 (I’ll be making tour appearances across the US, Canada and Germany starting on March 18); the early critical notices have started to come in and gosh, they are embarrassingly effusive!

From Publishers Weekly: “Doctorow (Walkaway) captures the mix of hope, fear, and uncertainty felt by those in precarious situations, set against the backdrop of intriguing futuristic landscapes. The characters are well wrought and complex, and the worldbuilding is careful. This is a fine introduction to Doctorow’s work, and his many fans will enjoy its exploration of favorite themes.”

From Booklist (starred review): “Doctorow’s combination of cutting edge speculation and deep interest in the social and political possibilities of the future make this collection a must-read for fans of Kim Stanley Robinson or of any sf where the future is always part of an engaged and passionate dialogue with the present.”

Color me extremely gratified.

Worse Than FailureCodeSOD: Virtually Careful

Inheritance is one of those object-oriented features whose importance is perhaps overstated. When I was first being introduced to OO, it was a lot of “code reuse!” and “polymorphism!” and “code reuse!” In practice, inheritance is usually a way to tightly couple two different classes and create more headaches and rework in the future.

That’s why many OO languages favor interfaces, and why the classic Gang-of-Four patterns emphasize the importance of composition in many situations. And, because inheritance can make things complicated, most languages also require single parent inheritance.

Most languages. C++, on the other hand, doesn’t. In C++, I could define a class Car, a class Truck, and then do this:

class ElCamino : Car, Truck {};

As you can imagine, this introduces all sorts of problems. What if both Car and Truck have a method with the same signature? Or worse, what happens if they have the same ancestor class, MotorVehicle? If MotorVehicle defines drive(), and Car and Truck both inherit drive(), where does ElCamino get its drive() function from?

This scenario is known as the Diamond Problem. And C++ offers a simple solution to the Diamond Problem: virtual inheritance.

If, for example, our classes are defined thus:

class Car : virtual MotorVehicle {};
class Truck : virtual MotorVehicle {};
class ElCamino : Car, Truck {};

ElCamino will inherit drive() directly from MotorVehicle, even though both its ancestors also inherited from MotorVehicle. An ElCamino is both a Car and a Truck, but is only a MotorVehicle once, not twice, so anything it inherits from MotorVehicle should only be in the class once.
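
Here’s a minimal sketch of the diamond (class names borrowed from above, bodies invented for illustration). With the two virtual bases there is exactly one MotorVehicle subobject, so the call compiles; drop the virtual keyword and the compiler rejects elCamino.drive() as ambiguous:

#include <iostream>

struct MotorVehicle {
    void drive() { std::cout << "vroom\n"; }
};

struct Car : virtual MotorVehicle {};
struct Truck : virtual MotorVehicle {};
struct ElCamino : Car, Truck {};

int main() {
    ElCamino elCamino;
    elCamino.drive();   // unambiguous: one shared MotorVehicle subobject
    return 0;
}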

Now, in practice, multiple-inheritance is one of those luxury foot-guns which allow you to blow off your own foot using anything that implements the shoot() method at the same time. It does have uses, but seeing it used is a bit of a code smell.

Mindy found some code which doesn’t use multiple inheritance. Nowhere in their codebase is there a single instance of multiple inheritance. But her co-worker wants to future-proof against it, or maybe just doesn’t understand what virtual inheritance is, so they’ve manually gone through the codebase and made every child class inherit virtually.

This hasn’t broken anything, but it’s led to some dumb results, like this:

class SomeSpecificDerivedClass final : public virtual SomeBaseClass
{
   /* ... code ... */ 
};

The key note here is that our derived class is marked as final, so no one is ever going to inherit from it. Ever. It’s sort of a belts and braces approach, except you’re wearing a dress.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Valerie AuroraChoosing which consulting services to offer

Many consultants (including me) make a similar mistake: we offer too many services, in too many areas, with too many options. After running one mediocre consulting business, and one successful consulting business, I’ve learned to focus on services that:

  • Require hard-to-find expertise
  • Deliver far more value to the client than they cost me to provide
  • Cost me a fairly predictable amount of time and money

In practice, for a one-person consultancy, this often means offering the same service repeatedly, with only slight customization per client. The price of the service should be based on the value the client receives, not on the per-delivery cost to yourself.

I’m far from the first person to articulate these principles, but I had a hard time putting them into practice. In this post, I’ll give two concrete examples from my businesses, one in which I did not follow these principles, and one in which I did, and one example from a colleague’s successful business. Hopefully, other folks starting consultancies won’t have to start and throw away an entire business to learn them.

My first mediocre consulting business

My first consulting business offered software engineering related services in the areas of Linux and file systems. The software consulting business did okay – I made a decent living, but it was stressful because the income was unpredictable and irregular. I put over ten thousand dollars on my credit cards more than once, waiting for a check that was 60 or 90 days late. Most of my clients were happy with my work, but more clients than I liked were disappointed with the value I gave them.

My most successful contracts were for debugging critical Linux file system problems blocking shipping of products, where I could offer rare expertise that had high value to the client. Unfortunately, I could not predict how long each of these debugging sessions would take, so I didn’t feel confident pricing based on value to the client and instead charged an hourly rate. Payment was usually on time, due to the high gratitude of the client for me rescuing their income stream. These contracts are what made my business viable, but because I didn’t price my services based on the value provided to my client, they didn’t pay as much as they should have, and I had to take on other work outside that area of expertise.

My other contracts ranged from reviewing file systems related patents to developing user-level software libraries. Most of these contracts were also priced on an hourly basis, because I could not predict how much work they would take. With the one contract I priced at a fixed project cost, we underspecified the product, and the client and I argued over what features the final product should include. The client also had a variety of unusual software engineering practices that made development more time-consuming than I had expected. No surprise: software development is notoriously unpredictable.

A colleague’s successful consulting business

In retrospect, I realized that my expectations of success in software consulting were based on my observation of a colleague’s software consulting business that did follow the principles I outlined above. His business started out after he ported Linux to a CPU architecture which was in widespread use in embedded systems. At the time, operating systems for these embedded systems often cost many tens of thousands of dollars per system per year in licensing fees—sometimes costing millions of dollars per year to the vendor. From the vendor’s perspective, paying, say, $50,000 for an initial port to Linux represented enormous savings in software licensing costs.

On my colleague’s side, porting Linux to another embedded system with this CPU usually only took a few days of work because it was so similar to the porting work he had already done. Once, he received a request to port Linux to a new system and completed the port before he sent back his bid for the contract. In short order, he had more money than he knew what to do with.

To recap, my colleague’s successful software business involved:

  • His unique experience porting Linux to embedded systems using this CPU
  • Delivering millions of dollars of value in return for tens of thousands of dollars of costs
  • Slight variations of the same activity (porting Linux to similar systems)

Despite having a similar level of valuable, world-unique expertise, I was unable to create a sustainable software consulting business because I took on contracts outside my main area of expertise, I priced my services based on the cost to me rather than the value to the client, and the cost of providing that service was highly unpredictable.

My second successful consulting business

When I started my diversity and inclusion consulting business, I wanted to focus on teaching the Ally Skills Workshop, but I also offered services based on my other areas of expertise: code of conduct consulting and unconference organization. The Ally Skills Workshop, as a lightly customized 3-hour class, was a fixed price per workshop, but the other two services were priced hourly. During my first year, I had significant income from all three of these services. But when I sat down with the accounts, I realized that the Ally Skills Workshop was both more fun for me to deliver and paid better per hour than my other services.

Thinking about why the Ally Skills Workshop paid more for less work made me realize that it was:

  • Priced based on the value delivered to the client, not on the cost to me
  • Customized per client but mostly the same each time I delivered it
  • In demand by clients that could afford to pay for the value it delivered

While all three of my services were in demand because I had unique expertise, only the Ally Skills Workshop had the potential to get me out of an hourly wage grind and give me the freedom to develop new products or write what I learned and share it with others.

With that realization, I started referring my code of conduct and unconference consulting clients to people who did want that work, and focused on the Ally Skills Workshop. With the time that freed up, I wrote an entire book about enforcing codes of conduct and gave it away (this is not a good business decision, do not do this).

Elements of a successful consulting business

In summary, a successful one-person consulting business will probably focus on one or two products that:

  • Require expertise rarely found in your clients’ employees
  • Deliver far more value to the client than they cost you to provide
  • Cost you a fairly predictable amount of time and money

It may feel safer to offer a range of services, so that if one service becomes unpopular, you can fill in the gaps with another one, but in practice, it’s hard for one person to do several things well enough to make a significant profit. In my experience, it’s better to do one thing extremely well, and use my free time to understand how the market is evolving and develop my next product.

,

Sociological ImagesHigh-Class Hoaxes

Those Fyre Festival documentaries were wild, weren’t they? Both movies highlighted fans’ collective glee watching the fakery play out from afar, as people with astounding amounts of disposable income fell prey to the festival’s poor execution. Who would buy all that hype, right?

The demand for exclusivity that fueled the festival is anything but fake. From Becker’s Art Worlds to Bourdieu’s Distinction, sociologists have long studied how culture industries and social capital create the tastes of the upper class. “Influencers” aren’t new, but social media makes it easier than ever to see them operate, and viral stories of high class hoaxes show this process in action.

Two great examples are these recent pranks parodying fine dining and fashion. Using a savvy social media presence, both teams were able to get a (fake) restaurant and a (fake) model a ton of buzz.

The interesting thing about these videos is how some of the humor rings hollow. It can be funny to see people chasing the next big trend get duped, but the fields they are mocking thrive on this exact kind of creativity and salesmanship. Taking the perspective of researchers like Bourdieu and others reminds us that taste is not objective, and it isn’t naturally tied to any basic level of effort or craft. At the end of the day, these pranksters still put together a “creative” look and restaurant experience, and so it is hard to tell whether they are making an effective parody, or just exploring and studying the basic rules of the game in the culture industry. Still, these videos are a fun excuse to think about what it takes to cultivate “cool.”

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityHackers Sell Access to Bait-and-Switch Empire

Cybercriminals are auctioning off access to customer information stolen from an online data broker behind a dizzying array of bait-and-switch Web sites that sell access to a vast range of data on U.S. consumers, including DMV and arrest records, genealogy reports, phone number lookups and people searches. In an ironic twist, the marketing empire that owns the hacked online properties appears to be run by a Canadian man who’s been sued for fraud by the U.S. Federal Trade Commission, Microsoft and Oprah Winfrey, to name a few.

Earlier this week, a cybercriminal on a Dark Web forum posted an auction notice for access to a Web-based administrative panel for an unidentified “US Search center” that he claimed holds some four million customer records, including names, email addresses, passwords and phone numbers. The starting bid price for that auction was $800.

Several screen shots shared by the seller suggested the customers in question had all purchased subscriptions to a variety of sites that aggregate and sell public records, such as dmv.us.org, carhistory.us.org, police.us.org, and criminalrecords.us.org.

A (redacted) screen shot shared by the apparent hacker who was selling access to usernames and passwords for customers of multiple data-search Web sites.

A few hours of online sleuthing showed that these sites and dozens of others with similar names all at one time shared several toll-free phone numbers for customer support. The results returned by searching on those numbers suggests a singular reason this network of data-search Web sites changed their support numbers so frequently: They quickly became associated with online reports of fraud by angry customers.

That’s because countless people who were enticed to pay for reports generated by these services later complained that although the sites advertised access for just $1, they were soon hit with a series of much larger charges on their credit cards.

Using historic Web site registration records obtained from Domaintools.com (a former advertiser on this site), KrebsOnSecurity discovered that all of the sites linked back to two related companies — Las Vegas, Nev.-based Penguin Marketing, and Terra Marketing Group out of Alberta, Canada.

Both of these entities are owned by Jesse Willms, a man The Atlantic magazine described in an unflattering January 2014 profile as “The Dark Lord of the Internet” [not to be confused with The Dark Overlord].

Jesse Willms’ Linkedin profile.

The Atlantic pointed to a sprawling lawsuit filed by the Federal Trade Commission, which alleged that between 2007 and 2011, Willms defrauded consumers of some $467 million by enticing them to sign up for “risk free” product trials and then billing their cards recurring fees for a litany of automatically enrolled services they hadn’t noticed in the fine print.

“In just a few months, Willms’ companies could charge a consumer hundreds of dollars like this, and making the flurry of debits stop was such a convoluted process for those ensnared by one of his schemes that some customers just canceled their credit cards and opened new ones,” wrote The Atlantic’s Taylor Clark.

Willms’ various previous ventures reportedly extended far beyond selling access to public records. In fact, it’s likely everyone reading this story has at one time encountered an ad for one of his dodgy, bait-and-switch business schemes, The Atlantic noted:

“If you’ve used the Internet at all in the past six years, your cursor has probably lingered over ads for Willms’s Web sites more times than you’d suspect. His pitches generally fit in nicely with what have become the classics of the dubious-ad genre: tropes like photos of comely newscasters alongside fake headlines such as “Shocking Diet Secrets Exposed!”; too-good-to-be-true stories of a “local mom” who “earns $629/day working from home”; clusters of text links for miracle teeth whiteners and “loopholes” entitling you to government grants; and most notorious of all, eye-grabbing animations of disappearing “belly fat” coupled with a tagline promising the same results if you follow “1 weird old trick.” (A clue: the “trick” involves typing in 16 digits and an expiration date.)”

In a separate lawsuit, Microsoft accused Willms’ businesses of trafficking in massive quantities of counterfeit copies of its software. Oprah Winfrey also sued a Willms-affiliated site (oprahsdietscecrets.com) for linking her to products and services she claimed she had never endorsed.

KrebsOnSecurity reached out to multiple customers whose name, email address and cleartext passwords were exposed in the screenshot shared by the Dark Web auctioneer who apparently hacked Willms’ Web sites. All three of those who responded shared roughly the same experience: They said they’d ordered reports for specific criminal background checks from the sites on the promise of a $1 risk-free fee, never found what they were looking for, and were subsequently hit by the same merchant for credit card charges ranging from $20 to $38.

I also pinged several customer support email addresses tied to the data-broker Web sites that were hacked. I received a response from a “Mike Stef,” who described himself as a Web developer for Terra Marketing Group.

Stef said the screenshots appeared to be legitimate, and that the company would investigate the matter and alert affected customers if warranted. Stef told me he doubts the company has four million customers, and that the true number was probably closer to a half million. He also insisted that the panel in question did not have access to customer credit card data.

Nevertheless, it appears from the evidence above that Willms and several others who were named in the FTC’s 2012 stipulated final judgment (PDF) are still up to their old tricks. The FTC has not yet responded to requests for comment. Nor has Mr. Willms.

I can’t help but feel a certain amount of schadenfreude (schadenfraud?) at the victim in this hacking case. But that amusement is tempered by the reality that the hundreds of thousands or possibly millions of people who got suckered into paying money to this company are quite likely to find themselves on the receiving end of additional phishing and fraud attacks (particularly credential stuffing) as a result of their data being auctioned off to the highest bidder.

Terra Marketing Group’s Web developer Mike Stef responded to my inquiries from an email address at the domain “tmgbox.com.” That message was instrumental in identifying the connection to Willms and Terra Marketing/Penguin. In the interests of better informing people who might wish to become future customers of this group, I am publishing the list of the domains associated with tmgbox.com and its parent entities. This list may be updated periodically as new information surfaces.

In case it is useful for others, KrebsOnSecurity is also publishing the results of several reverse WHOIS lookups for historic domains tied to email addresses of several people Mike Stef described as “senior customer support managers” of Terra Marketing, as these also include some interesting and related (albeit mostly dead) domains.

Reverse WHOIS on Peter Graver and Jesse Willms (rickholl2k9@gmail.com)

Reverse WHOIS on mike@tmgbox.com

Reverse WHOIS on Jason Oster (joster2008@gmail.com)

Public records search domains associated with Terra Marketing Group and Penguin Marketing:

memberreportaccess.com
publicrecords.us.org
dmvrecords.co
dmv.us.org
courtrecords.us.org
myfeeplan.com
police.us.org
warrantcheck.com
myinfobill.com
propertysearch.us.org
homevalue.us.org
carinfo2.com
backgroundchecks.us.org
arrestrecords.us.org
propertyrecord.com
criminalrecords.us.org
jailinmates.us.org
vehiclereportusa.com
dmvinfocheck.com
carrecordusa.com
carhistoryindex.com
autohistorychecks.com
mugshots.us.org
trafficticket.us.org
prison.us.org
reversephonelookup.us.org
deathrecords.us.org
deathrecord.com
deathcertificates.us.org
census.us.org
phonelookup.us.org
vehiclehistoryreports.us.org
vinsearchusa.org

KrebsOnSecurity would like to thank cybersecurity firm Intel471 for their assistance in researching this post.

Cory DoctorowTerra Nullius: Grifters, settler colonialism and “intellectual property”

Terra Nullius is my latest column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

I gave a keynote based on this essay in January at the “Grand Re-Opening of the Public Domain” event at the Internet Archive in San Francisco.

Both the venality of Aloha Poke and the genocidal brutality of Terra Nullius reveal a deep problem lurking in the Lockean conception of property: all the stuff that’s “just lying around” is actually already in relation to other people, often the kind of complex relation that doesn’t lend itself to property-like transactions where someone with deep pockets can come along and buy a thing from its existing “owner.”

The labor theory of property always begins with an act of erasure: “All the people who created, used, and improved this thing before me were doing something banal and unimportant – but my contribution is the step that moved this thing from a useless, unregarded commons to a special, proprietary, finished good.”

Criticism of this delusion of personal exceptionalism is buttressed by a kind of affronted perplexity: “Can’t you see how much of my really top-notch labor I have blended with this natural resource to improve it? Who will willingly give their own labor to future projects if, every time they do, loafers and takers come and freeride on their new property?”

This rhetorical move continues the erasure: it denies the claims of everyone who came before you as ahistorical figments: the people who coined, popularized and nurtured the word “aloha” or inhabited the Australasian landmass are stripped of their claims as though they were honeybees whose output is a naturally occurring substance that properly belongs to the beekeeper, not the swarm.

Terra Nullius [Cory Doctorow/Locus]

CryptogramThe Latest in Creepy Spyware

The Nest home alarm system shipped with a secret microphone, which -- according to the company -- was only an accidental secret:

On Tuesday, a Google spokesperson told Business Insider the company had made an "error."

"The on-device microphone was never intended to be a secret and should have been listed in the tech specs," the spokesperson said. "That was an error on our part."

Where are the consumer protection agencies? They should be all over this.

And while they're figuring out which laws Google broke, they should also look at American Airlines. Turns out that some of their seats have built-in cameras:

American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines' in-flight entertainment systems, but said "they have never been activated, and American is not considering using them." Feinstein added, "Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment."

That makes it all okay, doesn't it?

Actually, I kind of understand the airline seat camera thing. My guess is that whoever designed the in-flight entertainment system just specced a standard tablet computer, and they all came with unnecessary features like cameras. This is how we end up with refrigerators with Internet connectivity and Roombas with microphones. It's cheaper to leave the functionality in than it is to remove it.

Still, we need better disclosure laws.

Worse Than FailureCodeSOD: For Each Parallel

Parallel programming is hard. For all the advancements and tweaks we've made to our abstractions, for all the extra cores we've shoved into every CPU, deep down, software still carries the bias of the old uni-tasking model.

Aleksei P works on a software package that is heavily parallel. As such, when interviewing, he talks to candidates about their experience with .NET's Task objects and the async/await keywords.

One candidate practically exploded with enthusiasm when asked. "I've just finished a pretty large text processing project that reads a text file in parallel!" They whipped out a laptop, pulled up the code, and proudly showed it to Aleksei (and gave Aleksei a link to their repo for bonus points).

public async Task<IDictionary<string, int>> ParseTextAsync(string filePath, ParamSortDic dic)
{
    if (string.IsNullOrEmpty(filePath))
    {
        throw new ArgumentNullException(nameof(filePath));
    }
    if (Path.GetExtension(filePath) != ".txt")
    {
        throw new ArgumentException("Invalid filetype");
    }

    Dictionary<ParamSortDic, Func<IDictionary<string, int>, IOrderedEnumerable<KeyValuePair<string, int>>>> sorting =
        new Dictionary<ParamSortDic, Func<IDictionary<string, int>, IOrderedEnumerable<KeyValuePair<string, int>>>>
    {
        {ParamSortDic.KeyAsc, word => word.OrderBy(ws => ws.Key)},
        {ParamSortDic.KeyDesc, word => word.OrderByDescending(ws => ws.Key)},
        {ParamSortDic.ValueAsc, word => word.OrderBy(ws => ws.Value)},
        {ParamSortDic.ValueDesc, word => word.OrderByDescending(ws => ws.Value)},
    };

    var wordCount = new Dictionary<string, int>();
    object lockObject = new object();

    using (var fileStream = File.Open(filePath, FileMode.Open, FileAccess.Read))
    {
        using (var streamReader = new StreamReader(fileStream))
        {
            string line;
            while ((line = await streamReader.ReadLineAsync()) != null)
            {
                var lineModifyLower = Regex
                    .Replace(line, "[^а-яА-я \\dictionary]", "")
                    .ToLower();
                var words = lineModifyLower
                    .Split(Separators, StringSplitOptions.RemoveEmptyEntries)
                    .Where(ws => ws.Length >= 4);
                Parallel.ForEach(words, word =>
                {
                    lock (lockObject)
                    {
                        if (wordCount.ContainsKey(word))
                        {
                            wordCount[word] = wordCount[word] + 1;
                        }
                        else
                        {
                            wordCount.Add(word, 1);
                        }
                    }
                });
            }
        }
    }

    return sorting[dic](wordCount).ToDictionary(k => k.Key, k => k.Value);
}

There's so much to dislike about this code. Most of it is little stuff: it's painfully nested, and I don't like methods which process file data also being the ones that manage file handles. Generics which look like this: Dictionary<ParamSortDic, Func<IDictionary<string, int>, IOrderedEnumerable<KeyValuePair<string, int>>>> are downright offensive.

But all of that's little stuff, in the broader context here. You'll note that the file is read using ReadLineAsync, which is asynchronous. Of course, we await the result of that call on the very same line, so each read has to complete before the method moves on: the "async" reads buy us no concurrency at all.

Of course, that's a trend with this block of code. Note that the words on each line are processed in a Parallel.ForEach. And the body of that ForEach starts by taking a lock, guaranteeing that only one thread can ever be inside that block at a time, so the "parallel" work actually runs serially, just with extra scheduling overhead.
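For contrast, here is a minimal sketch of how the same counting could be done without the fake parallelism: read each line, split it, and update a plain dictionary. This is illustrative only, not Aleksei's fix or anything from the candidate's repo, and the Separators array and the four-character minimum are assumptions carried over from the snippet above.

// Illustrative sketch: sequential word counting with asynchronous I/O.
// The separator set and the "four characters or more" rule are assumed,
// mirroring the snippet above rather than any real project.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class WordCounter
{
    private static readonly char[] Separators = { ' ', '\t', ',', '.', ';', ':', '!', '?' };

    public static async Task<IDictionary<string, int>> CountWordsAsync(string filePath)
    {
        var wordCount = new Dictionary<string, int>();

        using (var reader = new StreamReader(filePath))
        {
            string line;
            // Reading a file is I/O-bound: awaiting each line is enough, and there is
            // nothing to gain from Parallel.ForEach over a handful of words per line.
            while ((line = await reader.ReadLineAsync()) != null)
            {
                var words = line.ToLowerInvariant()
                                .Split(Separators, StringSplitOptions.RemoveEmptyEntries);
                foreach (var word in words)
                {
                    if (word.Length < 4) continue;
                    wordCount.TryGetValue(word, out var count);
                    wordCount[word] = count + 1;
                }
            }
        }

        return wordCount;
    }
}

If CPU-bound parallelism were genuinely wanted, the idiomatic route would be a ConcurrentDictionary with AddOrUpdate, or a PLINQ GroupBy over the full word stream; neither needs a global lock, and for a single text file the sequential loop above will almost certainly be faster anyway.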

Suffice to say, Aleksei didn't recommend that candidate.


Planet Linux AustraliaMichael Still: What If?


More correctly titled “you die horribly and it probably involves plasma”, this light hearted and fun read explores serious answers to silly scientific questions. The footnotes are definitely the best bit. A really enjoyable read.

What If?
Randall Munroe
Humor
Houghton Mifflin Harcourt
September 2, 2014
320 pages

The creator of the incredibly popular webcomic xkcd presents his heavily researched answers to his fans' oddest questions, including “What if I took a swim in a spent-nuclear-fuel pool?” and “Could you build a jetpack using downward-firing machine guns?”


Planet Linux AustraliaSimon Lyall: Audiobooks – February 2019

Tamed: Ten Species that Changed our World by Alice Roberts

Plenty of content (14 hours) and not too dumbed down. About 8 of the 10 species are the ones you’d expect. 8/10

It Won’t Be Easy: An Exceedingly Honest (and Slightly Unprofessional) Love Letter to Teaching by Tom Rademacher

A breezy little book about the realities of teaching (at least in the US). Interesting to outsiders & hopefully useful to those in the profession. 7/10

The Hobbit by J. R. R Tolkien, Read by Rob Inglis

A good audio-edition of the book. Unabridged & really the default one for most people. I alternated chapters of this with the excellent Prancing Pony Podcast commentaries on those chapters. 9/10

The Life of Greece: The Story of Civilization, Volume 2 (The Story of Civilization series) by Will Durant

32 hours on the history of Ancient Greece. Seemed to cover just about everything. Written in the 1930s so probably a little out-of-date in places. 7/10


,

Planet Linux AustraliaMichael Still: Problems with Dreamhost


This site is hosted at Dreamhost, and for reasons I can’t explain right now isn’t accessible from large chunks of Australia. It seems to work fine from elsewhere though. Dreamhost certainly has an explanation — they allege, in emails that take 24 hours to arrive and that you can’t reply to, that it’s because wordpress is using too much RAM.

However, they don’t explain why that’s suddenly become a problem when the site has been fine for years, and they certainly don’t explain why it works from some places but not others, or why other Dreamhost sites are also offline from the locations having issues.

It’s time for a new hosting solution I think, although not bothering to have hosting might also be that solution.


Planet Linux AustraliaDavid Rowe: LPCNet Quantiser – wideband speech at 1700 bits/s

I’ve been working with Neural Net (NN) speech synthesis using LPCNet.

My interest is digital voice over HF radio. To get a NN codec “on the air” I need a fully quantised version at 2000 bit/s or below. The possibility of 8kHz audio over HF radio is intriguing, so I decided to experiment with quantising the LPCNet features. These consist of 18 spectral energy samples, pitch, and the pitch gain which is effectively a measure of voicing.

So I have built a Vector Quantiser (VQ) for the DCT-ed 18 log-magnitude samples. LPCNet updates these every 10ms, which is a bit too fast for my target bit rate. So I decimate to say 30ms, then use linear interpolation to reconstruct the 10ms frames at the decoder. The spectrum changes slowly (most of the time), so I quantise the difference between frames to save a few bits.
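To make the decimation step concrete (this is just the standard linear interpolation formula, written out here for illustration rather than lifted from the actual quantiser code): if the decoder receives quantised feature vectors f(n) and f(n+1) spaced 30ms apart, the two missing 10ms frames are reconstructed element-wise across the 18 DCT-ed log-magnitude samples as

    f(n + k/3) = (1 - k/3) f(n) + (k/3) f(n+1),  for k = 1, 2

and the difference coding means the VQ works on f(n+1) - f(n) rather than on the raw frame.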

Detailed Results

I’ve developed a script that generates a bunch of samples, plots various statistics, and builds a HTML page to summarise the results. Here is the current page, including samples for the fully quantised prototype codec at three bit rates between around 2000 and 1400 bits/s. If anyone would like more explanation of that page, just ask.

Discussion of Results

I can hear “birch” losing some quality at the 20ms decimation step. When training my own NN, I have had quite a bit of trouble with very rough speech when synthesising “canadian”. I’m learning that roughness in NN synthesis means more training required, the network just hasn’t experienced this sort of speaker before. The “canadian” sample is quite low pitch so I may need some more training material with low pitch speakers.

My quantisation scheme works really well on some of the carefully spoken Harvard sentences (oak, glue), in ideal recording conditions. However with more realistic, quickly spoken speech with real world background noise (separately, wanted) it starts to sound vocoder-ish (albeit a pretty good vocoder).

One factor is the frame rate decimation from 10 to 20-30ms, which I used to get the bit rate beneath 2000 bit/s. A better quantisation scheme, or LPCNet running on 20ms frames, could improve this. Or we could just run it at greater than 2000 bit/s (say for VHF/UHF two way radio).

Comparison to Wavenet

Source Listen
Wavenet, Codec 2 encoder, 2400 bits/s Listen
LPCnet unquantised, 10ms frame rate Listen
Quantised to 1733 bits/s (44bit/30ms) Listen

The “separately” sample from the Wavenet team sounds better to me. Ironically, these samples use my Codec 2 encoder, running at just 8kHz! It’s difficult to draw broad conclusions from this, as we don’t have access to a Wavenet system to try many different samples. All codecs tend to break down under certain conditions and samples.

However it does suggest (i) we can eventually get higher quality from NN synthesis and (ii) it is possible to encode high quality wideband speech with features covering a narrow spectral range (e.g. 200-3800Hz for the Codec 2 encoder). The 18 element vectors (covering DC to 8000Hz) I’m currently using ultimately set the bit rate of my current system. After a few VQ stages the elements are independent Gaussians and reduction in quantiser noise is very slow as bits are added.

The LPCNet engine has several awesome features: it’s open source, runs in real time on regular CPUs, and is available for us to test on a wide variety of samples. The speech quality I am achieving with even my first attempts is rather good compared to any other speech codecs I have played with at these bit rates – in either the open or closed source worlds.

Tips and Observations

I’ve started training my own models, and discovered that if you get rough speech – you probably need more data. For example when I tried training on 1E6 vectors, I had a few samples sounding rough when I tested the network. However with 5E6 vectors, it works just fine.

The LPCNet dump_data –train mode program helps you by being very clever. It “fuzzes” the speech frequency, gain, and adds a little noise. If the NN hasn’t experienced a particular combination of features before, it tends to get lost – and you get rough sounding speech.

I found that 10 Epochs of 5E6 vectors gives me good speech quality on my test samples. That takes about a day with my somewhat underpowered GPU. In fact, most of the training seems to happen on the first few Epochs:

Here is a plot of the training and validation loss for my training database:

This plot shows how much the loss changes on each Epoch, not very much, but not zero. I’m unsure if these small gains lead to meaningful improvements over many Epochs:

I looked into the LPCNet pitch and voicing estimation. Like all estimators (including those in Codec 2), they tend to make occasional mistakes. That’s what happens when you try to fit neat signal processing models to real-world biological signals. Anyway, the amazing thing is that LPCNet doesn’t care very much. I have some samples where pitch is all over the place but the speech still sounds OK.

This is really surprising to me. I’ve put a lot of time into the Codec 2 pitch estimators. Pitch errors are very obvious in traditional, model based low bit rate speech codecs. This suggests that with NNs we can get away with less pitch information – which means fewer bits and better compression. Same with voicing. This leads to intriguing possibilities for very low bit rate (a few hundred bit/s) speech coding.

Conclusions, Further Work and FreeDV 2020

Overall I’m pleased with my first attempt at quantisation. I’ve learnt a lot about VQ and NN synthesis and carefully documented (and even scripted) my work. The learning and experimental experience has been very satisfying.

Next I’d like to get one of these candidates on the air, see how it sounds over real world digital radio channels, and find out what happens when we get bit errors. I’m a bit nervous about predictive quantisation on radio channels, as it causes errors to propagate in time. However I have a good HF modem and FEC, and some spare bits to add some non-predictive quantisation if needed.

My design for a new, experimental “FreeDV 2020” mode employing LPCNet uses just 1600 Hz of RF bandwidth for 8kHz bandwidth speech, and should run at 10dB SNR on a moderate fading channel.

Here is a longer example of LPCNet at 1733 bit/s compared to HF SSB at a SNR of 10dB (we can send error free LPCNet through a similar HF channel). The speech sample is from the MP3 source of the Australian weekly WIA broadcast:

Source Listen
SSB simulation at 10dB SNR Listen
LPCNet Quantised to 1733 bits/s (44bit/30ms) Listen
Mixed LPCNet Quantised and SSB (thanks Peter VK2TPM!) Listen

This is really new technology, and there is a lot to explore. The work presented here represents my initial attempt at quantisation with the LPCNet synthesis engine, and is hopefully useful for other people who would like to experiment in the area.

Acknowledgements

Thanks Jean-Marc for developing the LPCnet technology, making the code open source, and answering my many questions.

Links

LPCnet introductory page.

The source code for my quantisation work (and notes on how to use it) is available as a branch on the GitHub LPCNet repo.

WaveNet and Codec 2

,

CryptogramFriday Squid Blogging: Chinese Squid-Processing Facility

China is building the largest squid processing center in the world.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramData Leakage from Encrypted Databases

Matthew Green has a super-interesting blog post about information leakage from encrypted databases. It describes the recent work by Paul Grubbs, Marie-Sarah Lacharité, Brice Minaud, and Kenneth G. Paterson.

Even the summary is too much to summarize, so read it.

Worse Than FailureError'd: Musical Beverages

"If the screen is to be believed...This beer is going to rock!" wrote Tim F.

 

Dru writes, "Wait...Mad Libs? No wonder credit card terms are so confusing!"

 

"After selecting a file to open this gem popped up. I think the dialog is as confused as I am," writes Lincoln K.

 

Jan writes, "Yes, speaking of sensitive data, Firefox is asking if I'd like to store the following, possibly sensitive, data!"

 

"Seeing how long it'll take to delete this 30GB VirtualBox snapshot, I'm going to find some way to keep myself busy for the next 40 years," Christian F. writes.

 

Anthony E. wrote, "Maybe by next century, AVG will have worked out how to build an effective anti-virus product?"

 


Planet Linux AustraliaFrancois Marier: Connecting a VoIP phone directly to an Asterisk server

On my Asterisk server, I happen to have two on-board ethernet interfaces. Since I only used one of these, I decided to move my VoIP phone from the local network switch to being connected directly to the Asterisk server.

The main advantage is that this phone, running proprietary software of unknown quality, is no longer available on my general home network. Most importantly though, it no longer has access to the Internet, without my having to firewall it manually.

Here's how I configured everything.

Private network configuration

On the server, I started by giving the second network interface a static IP address in /etc/network/interfaces:

auto eth1
iface eth1 inet static
    address 192.168.2.2
    netmask 255.255.255.0

On the VoIP phone itself, I set the static IP address to 192.168.2.3 and the DNS server to 192.168.2.2. I then updated the SIP registrar IP address to 192.168.2.2.

The DNS server actually refers to an unbound daemon running on the Asterisk server. The only configuration change I had to make was to listen on the second interface and allow the VoIP phone in:

server:
    interface: 127.0.0.1
    interface: 192.168.2.2
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.1/32 allow
    access-control: 192.168.2.3/32 allow

Finally, I opened the right ports on the server's firewall in /etc/network/iptables.up.rules:

-A INPUT -s 192.168.2.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.2.3/32 -p udp --dport 10000:20000 -j ACCEPT

Accessing the admin page

Now that the VoIP phone is no longer available on the local network, it's not possible to access its admin page. That's a good thing from a security point of view, but it's somewhat inconvenient.

Therefore I put the following in my ~/.ssh/config to make the admin page available on http://localhost:8081 after I connect to the Asterisk server via ssh:

Host asterisk
    LocalForward 8081 192.168.2.3:80

,

TEDAnnouncing the speaker lineup for TED2019: Bigger than us

TED has unveiled its ambitious speaker lineup for the April conference, themed “Bigger than us.” Why? As Head of TED Chris Anderson puts it: “The theme ‘Bigger than us’ can mean so many things. AI. The arc of history. Ideas and things at a giant scale. Cosmology. Grand ambition. An antidote to narcissism. Moral purpose….”

Browse the lineup >>

The lineup includes path-breaking scientists and technologists (like soil scientist Asmeret Asefaw Berhe, Twitter’s Jack Dorsey, and John Hanke of Niantic), smart entertainers (like Joseph Gordon-Levitt and America Ferrera and Derren Brown and director Jon M. Chu), artists and activists (like Brittany Packnett and Jonny Sun and Sarah Sze and Judith Jamison), and the thinkers and visionaries (like Hannah Gadsby and Edward Tenner) who can help us pull it all together.

Because, as our positioning statement has it, the political and technological turmoil of the past few years is causing us to ask bigger, deeper, more challenging questions. Like … where is this heading? what really matters? is there more I should be doing?

Together, we’ll be exploring technologies that evoke wonder and tantalize with superhuman powers, mind-bending science that will drive the future as significantly as any politician, the design of cities and other powerful human systems that shape our lives, awe-inspiring, mind-expanding creativity and, most of all, the inspiring possibilities that happen when we ask what ideas are truly worth fighting for, worth living for.

Krebs on SecurityBooter Boss Interviewed in 2014 Pleads Guilty

A 20-year-old Illinois man has pleaded guilty to running multiple DDoS-for-hire services that launched millions of attacks over several years. The plea deal comes almost exactly five years after KrebsOnSecurity interviewed both the admitted felon and his father and urged the latter to take a more active interest in his son’s online activities.

Sergiy P. Usatyuk of Orland Park, Ill. pleaded guilty this week to one count of conspiracy to cause damage to Internet-connected computers and for his role in owning, administering and supporting illegal “booter” or “stresser” services designed to knock Web sites offline, including exostress[.]in, quezstresser[.]com, betabooter[.]com, databooter[.]com, instabooter[.]com, polystress[.]com and zstress[.]net.

Some of Rasbora’s posts on hackforums[.]net prior to our phone call in 2014. Most of these have since been deleted.

A U.S. Justice Department press release on the guilty plea says Usatyuk — operating under the hacker aliases “Andrew Quez” and “Brian Martinez” — admitted developing, controlling and operating the aforementioned booter services from around August 2015 through November 2017. But Usatyuk’s involvement in the DDoS-for-hire space very much predates that period.

In February 2014, KrebsOnSecurity reached out to Usatyuk’s father Peter Usatyuk, an assistant professor at the University of Illinois at Chicago. I did so because a brief amount of sleuthing on Hackforums[.]net revealed that his then 15-year-old son Sergiy — who at the time went by the nicknames “Rasbora” and “Mr. Booter Master”  — was heavily involved in helping to launch crippling DDoS attacks.

I phoned Usatyuk the elder because Sergiy’s alter egos had been posting evidence on Hackforums and elsewhere that he’d just hit KrebsOnSecurity.com with a 200 Gbps DDoS attack, which was then considered a fairly impressive DDoS assault.

“I am writing you after our phone conversation just to confirm that you may call evening time/weekend to talk to my son Sergio regarding to your reasons,” Peter Usatyuk wrote in an email to this author on Feb. 13, 2014. “I also have [a] major concern what my 15 yo son [is] doing. If you think that is any kind of illegal work, please, let me know.”

That 2014 story declined to quote Rasbora by name because he was a minor, but his father seemed alarmed enough about my inquiry that he insisted his son speak with me about the matter.

Here’s what I wrote about Sergiy at the time:

Rasbora’s most recent project just happens to be gathering, maintaining huge “top quality” lists of servers that can be used to launch amplification attacks online. Despite his insistence that he’s never launched DDoS attacks, Rasbora did eventually allow that someone reading his posts on Hackforums might conclude that he was actively involved in DDoS attacks for hire.

“I don’t see what a wall of text can really tell you about what someone does in real life though,” said Rasbora, whose real-life identity is being withheld because he’s a minor. This reply came in response to my reading him several posts that he’d made on Hackforums not 24 hours earlier that strongly suggested he was still in the business of knocking Web sites offline: In a Feb. 12 post on a thread called “Hiring a hit on a Web site” that Rasbora has since deleted, he tells a fellow Hackforums user, “If all else fails and you just want it offline, PM me.”

Rasbora has tried to clean up some of his more self-incriminating posts on Hackforums, but he remains defiantly steadfast in his claim that he doesn’t DDoS people. Who knows, maybe his dad will ground him and take away his Internet privileges.

I’m guessing young Sergiy never had his Internet privileges revoked, nor did he heed advice to use his skills for less destructive activities. His dad hung up on me when I called Wednesday evening requesting comment.

Court documents (PDF) related to his case indicate Sergiy Usatyuk and an unnamed co-conspirator earned nearly $550,000 launching some 3.8 million attacks through their various DDoS-for-hire services. The government says he ran the booter services through a Delaware corporation called “OkServers LLC,” which routinely ignored abuse complaints and as such effectively operated as a “bulletproof” hosting company — despite Sergiy’s claims to the contrary.

Here’s Sergiy’s response to multiple abuse complaints about OKServers filed in the summer of 2018 by Troy Mursch, chief research officer at Bad Packets LLC.

Sergiy’s guilty plea comes amid a major crackdown by the FBI and the Justice Department on booter services and their operators. In December 2018, the DOJ brought charges against three men as part of an unprecedented, international takedown targeting 15 different booter sites.

According to the government, the use of booter and stresser services to conduct attacks is punishable under both wire fraud laws and the Computer Fraud and Abuse Act (18 U.S.C. § 1030), and may result in arrest and prosecution, seizure of computers or other electronics, significant prison sentences, and a penalty or fine.

CryptogramCan Everybody Read the US Terrorist Watch List?

After years of claiming that the Terrorist Screening Database is kept secret within the government, we have now learned that the DHS shares it "with more than 1,400 private entities, including hospitals and universities...."

Critics say that the watchlist is wildly overbroad and mismanaged, and that large numbers of people wrongly included on the list suffer routine difficulties and indignities because of their inclusion.

The government's admission comes in a class-action lawsuit filed in federal court in Alexandria by Muslims who say they regularly experience difficulties in travel, financial transactions and interactions with law enforcement because they have been wrongly added to the list.

Of course that is the effect.

We need more transparency into this process. People need a way to challenge their inclusion on the list, and a redress process if they are being falsely accused.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV March 2019 Workshop: 30th Anniversary of the Web / Federated Social Media

Mar 16 2019 12:30
Mar 16 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

This month we will celebrate the 30th anniversary of the World Wide Web with a discussion of its past, present and future.  Andrew Pam will also demonstrate and discuss the installation, operation and use of federated social media platforms including Diaspora and Hubzilla.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV March 2019 Main Meeting: ZeroTier / Ethics in the computer realm

Mar 5 2019 19:00
Mar 5 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE LATER START TIME

7:00 PM to 9:00 PM Tuesday, March 5, 2019
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Adrian Close: ZeroTier
  • Enno Davids: Ethics - Weasel words, weasel words, weasel words, bad thing

 

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


Worse Than FailureCalculated Security

Carl C spent some time in the late 1980's at a software firm that developed avionics and global positioning systems for military and civilian customers. In their employ, he frequently visited Schlockdeed Corp, a customer with a contract to develop a new generation of jet fighters for the US military. Due to the top secret nature of their work, security was a big deal there.

Whenever Carl entered or left the facility, he had to pass through the security office to get clearance. They would thoroughly inspect his briefcase, jacket, lunchbox, and just about everything short of a full cavity search. Despite the meticulous nature of daily inspections at Schlockdeed, some of their "security measures" bordered on the absurd.

During this era of Sneakernet-type transfers of information, it wasn't uncommon for a programmer to take a box full of floppy disks to and from work every day. Schlockdeed had a rather lax policy regarding disk transportation even though it would be a super easy way to steal their secrets. Subcontractors like Carl would be issued a "media pass" after passing the initial background check to work with Schlockdeed. It was a card that allowed them to carry any number of floppy disks in and out of the building without question.

Carl's tenure was uneventful until he decided to bring his beloved HP-41CX calculator to the office. They were working on some complex algorithms and drawing up equations on a chalkboard was taking too long, so Carl hoped to speed up the process. During his morning inspection, Bill the security guy pulled out the HP-41CX and immediately had a concerned look come over his face.

Bill reached for the radio on his shoulder, "Paulie, we're going to need you. We have a situation." Carl became extremely confused. Had the 41CX been known to be used in bombs? Was it April Fool's Day? "Sir, we need to send you to our CIO for secondary inspection. Right this way," Bill motioned.

Carl's face flushed as he wondered what kind of trouble he was in, especially since "trouble" could quickly escalate to handcuffs and holding cells. He also wondered why a Chief Information Officer would be doing secondary security inspections. Bill led him to Paulie's office, which housed a portly man with a sweet 80's mustache. The nameplate on his desk identified him as the Calculator Inspection Officer.

"I'm gonna need to see yer adding machine there, buddy," Paulie said, holding his hand out. Bill placed the HP-41CX in his palm. He gave it a closer look and grunted, "I'll have to confiscate this from you. It's got internal memory in it, y'see, so you could potentially use it to sneak secrets out. You can have it back at the end of the day, but don't let me ever catch you bringing this here again!" Bill led a calculator-less Carl back to the main security office.

On the way, Bill explained how programmable calculators were strictly forbidden in the facility. Paulie was in charge of enforcing this policy and took his job very seriously. If Carl wanted to bring a calculator, it would have to be a very basic model. Once Paulie approved it, an "AC" (Approved Calculator) sticker would be placed on the back to allow its entry. Feeling discouraged without his HP-41CX, Carl resigned himself to inhaling chalk dust for the rest of his time at Schlockdeed. At least he had a media pass, so he could still freely take floppy disks in and out of the facility.


Planet Linux AustraliaCraige McWhirter: Propagating Native Plants

by Alec M. Blomberry & Betty Maloney

Propagating Australian Plants

I have over 1,000 Diospyros Geminata (scaly ebony) seedlings growing in my shade house (in used toilet rolls). I'd collected the seeds from the (delicious) fruit in late 2017 (they appear to fruit based on rainfall - no fruit in 2018) and they're all still rather small.

All the literature stated that they were slow growing. I may have been more dismissive of this than I needed to be.

I'm growing these for landscape scale planting, it's going to be a while between gathering the seeds (mid 2017) and planting the trees (maybe mid 2019).

So I needed to look into other forms of propagation and either cutting or aerial layering appear to be the way to go, as I already have large numbers of mature Diospyros Geminata on our property or nearby.

The catch being that I know nothing of either cutting or aerial layering and in particular I want to do this at a reasonable scale (ie: possibly thousands).

So this is where Propagating Australian Plants comes in.

Aerial Layering

It's a fairly dry and academic read, that feels like it may be more of an introductory guide for botanic students than a lay person such as myself.

Despite being last published in 1994 by a publisher that no longer exists and having a distinct antique feel to it, the information within is crisp and concise with clear and helpful illustrations.

Highly recommended if you're starting to propagate natives as I am.

Although I wish you luck picking up a copy - I got mine from an Op Shop. At least the National Library of Australia appears to have a copy.

Sam VargheseDid Pell ever consider what Jesus said about children?

“If anyone should cause one of these little ones to lose his faith in me, it would be better for that person to have a large millstone tied around his neck and be drowned in the deep sea.” The gospel according to Matthew, Chapter 18, Verse 6.

In December 2018, a jury found Cardinal George Pell, the most senior Catholic official in Australia and the third most senior official at the Vatican, guilty of sexual abuse of minors. The judgement was suppressed until February 26 as a second case against Pell had to be heard and the judge felt that announcing the guilty verdict could influence the direction of the second case.

But given that Pell is a globally known individual, numerous foreign newspapers reported the verdict right away as they were not in any way bound by an Australian suppression order. Some Australian newspapers carried big headlines to the effect that a big story was being suppressed; many of these publications now face sanctions from the judge.

Now two former Liberal Prime Ministers, John Howard and Tony Abbott, have added their voices to the group supporting Pell; one has given him a character reference and the other has said he called the Cardinal after the suppression order was lifted. Both indicate that they believe an appeal will go in Pell’s favour.

The spectre of people of this status expressing support for a convicted paedophile does not do their reputations any good. There are some crimes that do not merit any sympathy and abuse of children is squarely among them.

It says a lot about Australia that people of this kind gain such high office.

The current prime minister Scott Morrison, also a Liberal, has said he will tell the Governor-General to strip Pell of his Australia Day honours if the appeal fails.

Both Abbott and Howard go to church, the former being a staunch Catholic and the latter a nominal Christian. They seem to have forgotten the injunctions that one Jesus Christ issued with regard to children some twenty centuries ago.

But then so too did Pell.

,

Krebs on SecurityCrypto Mining Service Coinhive to Call it Quits

Roughly one year ago, KrebsOnSecurity published a lengthy investigation into the individuals behind Coinhive[.]com, a cryptocurrency mining service that has been heavily abused to force hacked Web sites to mine virtual currency. On Tuesday, Coinhive announced plans to pull the plug on the project early next month.

A message posted to the Coinhive blog on Tuesday, Feb. 26, 2019.

In March 2018, Coinhive was listed by many security firms as the top malicious threat to Internet users, thanks to the tendency for Coinhive’s computer code to be surreptitiously deployed on hacked Web sites to steal the computer processing power of its visitors’ devices.

Coinhive took a whopping 30 percent cut of all Monero currency mined by its code, and this presented something of a conflict of interest when it came to stopping the rampant abuse of its platform. At the time, Coinhive was only responding to abuse reports when contacted by a hacked site’s owner. Moreover, when it did respond, it did so by invalidating the cryptographic key tied to the abuse.

Trouble was, killing the key did nothing to stop Coinhive’s code from continuing to mine Monero on a hacked site. Once a key was invalidated, Coinhive would simply cut out the middleman and proceed to keep 100 percent of the cryptocurrency mined by sites tied to that account from then on.

In response to that investigation, Coinhive made structural changes to its platform to ensure it was no longer profiting from this shady practice.

Troy Mursch is chief research officer at Bad Packets LLC, a company that has closely chronicled a number of high-profile Web sites that were hacked and seeded with Coinhive mining code over the years. Mursch said that after those changes by Coinhive, the mining service became far less attractive to cybercriminals.

“After that, it was not exactly enticing for miscreants to use their platform,” Mursch said. “Most of those guys just took their business elsewhere to other mining pools that don’t charge anywhere near such high fees.”

As Coinhive noted in the statement about its closure, a severe and widespread drop in the value of most major cryptocurrencies weighed heavily on its decision. At the time of my March 2018 piece on Coinhive, Monero was trading at an all-time high of USD $342 per coin, according to charts maintained by coinmarketcap.com. Today, a single Monero is worth less than $50.

In the announcement about its pending closure, Coinhive said the mining service would cease to operate on March 8, 2019, but that users would still be able to access their earnings dashboards until the end of April. However, Coinhive noted that only those users who had earned above the company’s minimum payout threshold would be able to cash out their earnings.

Mursch said it is likely that a great many people using Coinhive — legitimately on their own sites or otherwise — are going to lose some money as a result. That’s because Coinhive’s minimum payout is .02 Monero, which equals roughly USD $1.00.

“That means Coinhive is going to keep all the virtual currency from user accounts that have mined something below that threshold,” he said. “Maybe that’s just a few dollars or a few pennies here or there, but that’s kind of been their business model all along. They have made a lot of money through their platform.”

KrebsOnSecurity’s March 2018 Coinhive story traced the origins of the mining service back to Dominic Szablewski, a programmer who founded the German-language image board pr0gramm[.]com (not safe for work). The story noted that Coinhive began as a money-making experiment that was first debuted on the pr0gramm Web site.

The Coinhive story prompted an unusual fundraising campaign from the pr0gramm[.]com user community, which expressed alarm over the publication of details related to the service’s founders (even though all of the details included in that piece were drawn from publicly-searchable records). In an expression of solidarity to protest that publication, the pr0gramm board members collectively donated hundreds of thousands of euros to various charities that support curing cancer (“Krebs” is German for “cancer” or “crab”).

After that piece ran, Coinhive added to its Web site the contact information for Badges2Go UG, a limited liability company established in 2017 and headed by a Sylvia Klein from Frankfurt who is also head of an entity called Blockchain Future. Klein did not respond to requests for comment.

TEDContact with aliens by 2036? Astronomer Seth Shostak wants to believe — and does

The Parkes Radio Telescope at the Parkes Observatory in New South Wales, Australia. Image courtesy of Seth Shostak.

Astrophysicist and astronomer Seth Shostak made a daring bet in his 2012 TED Talk: We’ll find extraterrestrial life within 24 years or he’ll buy you a cup of coffee. This isn’t just wishful thinking — technological advances over the past few decades have amplified the scope of space exploration monumentally, allowing us to search the stars in ways we never have before. We spoke to Seth about his work at the SETI Institute, our cultural fascination with aliens and why he thinks we’re closer than ever to finally finding ET. 

This transcript has been edited for clarity.

What have you been working on lately?

I do a lot of writing, a lot of talking and, of course, the science and speculation: What would be the best strategy to find ET?

We’ve been looking at a list of about 20,000 so-called red dwarf stars. Red dwarves are just stars that are smaller than the sun, and there are a lot of them. Just like there are a lot more small animals than big ones, there are a lot more small stars than big ones. The other thing is that they take a long time to burn through their nuclear fuel, so they live for billions and billions of years, which means that on average they’re older than stars like the sun.

 

“The bottom line is, the search has become much, much, much faster. If you’re looking for a needle in a haystack, it pays to go through the hay faster.”

 

With a star, if the planets around it are billions of years older than our own solar system, maybe the chances are greater that they’d cooked up some intelligence and is sending a signal we might pick up. That’s what we’re doing at the moment in terms of our SETI work.

So, how’s the hunt? How much closer are we?

When people say, “Well, so what’s the difference now between what you guys are doing and what Frank Drake — who did the first SETI experiment back in 1960 — did?” the difference is technology and science.


We can now build receivers that can listen to a lot more radio dials at once. Frank Drake had a receiver that could only listen to one channel at a time, sort of like your TV. We don’t know where ET might be on the dial, and we don’t know where that transmission might be, so we’ve got to really listen to lots of frequencies at once, lots of channels. The receivers we’re using today monitor 72 million channels simultaneously; you can sort of sift through the radio dial for any given star system much more quickly. The bottom line is, the search has become much, much, much faster. If you’re looking for a needle in a haystack, it pays to go through the hay faster.

The other thing that’s changed is the astronomy. When SETI began, nobody knew whether there were planets around other stars, if they were common, or maybe only one star in a thousand had planets. Nobody knew because we hadn’t found them yet. But since that time we have. We’ve found lots of planets, and what we found is that the majority of all stars have planets. Planets are as common as cheap motels. That’s good news because it means you don’t have to wait for somebody to discover planets around some other star and aim your antennas in that direction — we can just take a whole bunch of stars based on other criteria, like here are the 10,000 nearest stars or the nearest 20,000 red dwarf stars. We’re not worried too much about whether the stars have planets or not, because we know most of them will have planets. That’s a big step.

Those are the things that have changed — the technology and the science. Both of those, from my point of view, encourage me to think that we may find something within 20 years.

That’s a really exciting prediction. In your talk, you said that any civilization that we get in contact with or receive signals from will be far more advanced than us. Why haven’t we heard from them yet?

Two things: Maybe they have, and we just haven’t pointed the antennas in the right direction and to the right frequency! That’s the whole premise of SETI — that as we sit and talk, there are radio waves going through your body that would tell you about some Klingons if only you had a big antenna pointed in the right direction and you knew the right spot on the dial.

The other part is that I don’t know that they would be motivated to contact us unless they knew we were here. Maybe it’s an expensive project for them. Like, “Hey, what do you think — should we build a big transmitter and just ping the nearest million stars for 20 years at a time?” You know, that could be a big project. But if they knew that there was intelligent life here on Earth, maybe they would try and get in touch because maybe they want to sell their used cars or something.

The facts are that they probably don’t know that we’re here. How would they know that homo sapiens exist? They could start picking up our radar, television and FM radio — signals that actually go out into space. They could do that beginning in the Second World War when all that technology was developed. But that was only 70 years ago. If they’re more than half that distance — so 35 light years away — there hasn’t been enough time for those signals to get to them and for them to say “Oh, well, we’re going to answer those guys.” That means it’s very unlikely that anybody knows we’re here yet even if they want to find us. Unless they’re very close to us, they won’t succeed. They probably lost their funding and they don’t get any respect at parties.

And by the way, you might like to mention that to your friends, next time they tell you that they’ve been abducted by aliens. You could say, “Well, that’s peculiar. You know the Earth has been here for four and a half billion years, and they just now showed up to abduct you?” I mean, why now? It’s hard to believe that they might be relentlessly targeting our society — they might be, that’s the hope. That they might just have very strong transmitters that you could pick up anywhere nearby. That’s what we’re hoping for.

ʻOumuamua, the first interstellar visitor, has been a source of fascination since it was first discovered in 2017. Some speculate that it could be a sign of extraterrestrial life and last December, SETI, among others, conducted a radio search but didn’t hear anything. What do you think ʻOumuamua is?

It’s become an interesting public issue because Avi Loeb at Harvard likes to talk about these things, that it could be the Klingons and the space crafts. That’s not impossible, but it’s like you hearing a noise from the attic — I mean, it could be ghosts, but that’s probably not the most likely explanation. The other thing is that every time we find something unexplained in the heavens many people — or some people at least — will say it’s alien activity because that’s a handy explanation. It accounts for everything because you can always say “Well, the aliens can do anything, right?” There is that tendency to blame the aliens for everything.

This thing came in and it went right through our solar system, right around the sun. You could say, “All right, it’s just a random rock kicked out of somebody else’s solar system,” but what are the chances that that rock is going to actually hit ours? The chances of that are pretty small. It’s like standing in Park Slope, Brooklyn and throwing a dart up into the air and hitting a particular nickel lying on the sidewalk down by the Brooklyn or Manhattan bridges [ed note: ~3 miles away]. It could happen but it’s pretty unlikely. Unless you throw lots of darts — if you throw a gazillion darts into the air, then you’re probably going to hit that nickel. What Loeb is saying is that either there’s just lots and lots of these rocks cruising this part of the galaxy — which could be, but that seems a little unreasonable — or maybe somebody is deliberately sending them our way. If you’re deliberately aiming at that nickel, then you have a higher chance of hitting it.

 

“Aliens probably don’t know that we’re here. How would they know that homo sapiens exist?”

 

To say that it can’t be a comet because we didn’t see any evidence of that is subject to criticism based on the fact that we didn’t see much of anything on this thing because it was found very, very late and it’s very small and very far away. We never saw this as more than a dot. There’s no reason at this point to say, “You know what, Bob, no two ways about it — this has got to be artificial!”

It seems hard to draw conclusions because no one can collect any more evidence — ʻOumuamua is on its way out at this point, right? It seems all we can do is speculate at this point.

It’s now somewhere between Mars and Jupiter. You can’t even see it with the biggest telescope anymore. Loeb admits that and says we’ll find more. We’re probably gonna find another one within a year or two, and this time, everybody will be on the alert to start studying it right away and if it’s possible, maybe send a rocket in its direction with a probe.

Has this discovery changed your approach at all?

There’s simply no shortage of intriguing new discoveries all the time. Two or three years ago, it was Tabby’s Star. Jason Wright at Penn State said, “It could be an alien megastructure,” so we turned our antennas in that direction. We didn’t find any evidence of an alien megastructure either. The point of ʻOumuamua is that you have one more case where you find something unusual that could conceivably be aliens. It would be hubris, of course, to sort of wave these things away and say, “It’s not likely to be E.T.” With that kind of reasoning, you’ll never find E.T.! It’s a reminder that the evidence may come out of left field and you shouldn’t dismiss it just because of where it came from.

It’s been almost 60 years now we’ve been pointing the antennas in the directions of nearby stars that may have habitable planets, all the usual stuff. It just seems more and more possible to me that the real thing to do is spend more time looking for other kinds of evidence — not radio signals because they may not be broadcasting radio signals our way. They might be doing all sorts of other things like hollowing out asteroids and sailing them around or building alien megastructures or constructing something big and brawny. They could be building something that’s noisy enough or big enough or bright enough — conspicuous in some way — that you could find it without having to count on them directing some sort of radio transmission our way.

 

“There is that tendency to blame the aliens for everything.”

 

Many of your contemporaries are going to come down hard on you when you speculate about something that might or might not be true, as opposed to writing a paper on something that you’ve just measured. When you do that they’re going to say: “Okay, you’re making up stories and you’re just doing it to get the column inches.” And I think that that’s myopic, because it’s those ideas that provoke a lot of investigation and eventually, in many cases, they actually solve the problem.

What is your favorite part of your job?

I enjoy thinking about the possibility of SETI. Because we haven’t found anything, it’s still all possibility. I talked to a film writer who’s writing a screenplay, and he wanted to get the aliens right — whatever that means — what can you say about them? I mean, we haven’t found any, so you can say whatever you want.

I give a lot of talks and I try to give at least one in ten to kids. I like them because they are completely honest. You talk to them and if they don’t find it interesting, they just put their heads down on the desk. Adults will not do that. But if they are interested they’ll ask any question. There’s no such thing as a stupid question for a kid. When you talk to kids, you notice that maybe one in fifty of them, something lights up; they hear something that gets their imaginations going that they’ve never heard before.

What do you think we’re looking for? Why do you think we’re so fascinated with this concept of extraterrestrial life?

I honestly think it’s a hardwired feature, just the way kids are interested in dinosaurs. You’d have a hard time finding kids that aren’t interested in dinosaurs — and why is that? Do they just have a need to know about sauropods? Well, that’s just part of their brain. We’re kind of hardwired to be afraid of falling. That’s undoubtedly a throwback to our simian existence in the trees, climbing around, and if you fell, it was probably the end of you. You have all sorts of mechanisms that tense up and react very quickly if you begin to fall. The same would be true in terms of paying attention to any creatures with big teeth. It probably pays for you to be interested in big teeth and other potential dangers.

I think that’s why kids are interested in dinosaurs, and I think we’re also interested in aliens for pretty much the same reason. Namely that, if you have no interest in whether somebody is living on the other side of that hill outside town, then you’re very likely to someday see them come over the hill and maybe take your land or kill you. It might pay you to pay some attention to potential competitors or, looking on the bright side, potential mates. I think that that’s why we’re all interested in aliens up to a certain age. It’s hard to find somebody who’s not interested in aliens at all.

Cryptogram"Insider Threat" Detection Software

Notice this bit from an article on the arrest of Christopher Hasson:

It was only after Hasson's arrest last Friday at his workplace that the chilling plans prosecutors assert he was crafting became apparent, detected by an internal Coast Guard program that watches for any "insider threat."

The program identified suspicious computer activity tied to Hasson, prompting the agency's investigative service to launch an investigation last fall, said Lt. Cmdr. Scott McBride, a service spokesman.

Any detection system of this kind is going to have to balance false positives with false negatives. Could it be something as simple as visiting right-wing extremist websites or watching their videos? It just has to be something more sophisticated than researching pressure cookers. I'm glad that Hasson was arrested before he killed anyone rather than after, but I worry that these systems are basically creating thoughtcrime.

Worse Than FailureA Switch for Grenk

Let’s say you’ve got a project object in your code. A project might be opened, or it might be closed. In either case, you want to register an event handler to change the status- closed projects can be opened, opened projects can be closed. Now imagine you’re Antonio’s co-worker, Grenk.

No, this time, it’s not a matter of streams. Today, it’s ternary abuse, of the “why is this even here” sort.

switch(project.getStatus())
{
    case CLOSED:
    {
        //snip: re-open the project
        break;
    }
    case OPEN:
    {
        //snip: close the project
        break;
    }
}
registerEvent(projectDB.getStatus().equals(StatusProjectEnum.CLOSED) ? TypeEventProjectEnum.ENABLED : TypeEventProjectEnum.DISABLED, project.getId(), sessionUser,
                           project.getStatus().equals(StatusProjectEnum.CLOSED) ? "Enabled project " + project.getDescription() : "Disabled project " + project.getDescription());

getDao().update(project);

Let’s trace the logic. We start with a switch on the project status. If it’s CLOSED we open it, if it’s OPEN we close it. Then, we call the registerEvent method, and use a ternary on the project status to decide what parameters to pass to the method. The result is an unreadable mess, and it’s extra confusing because we just passed by a perfectly good switch. Why not just put a call to registerEvent in each branch of the switch?

Which, by the way, is exactly what Antonio did. During the code review, Grenk objected that Antonio's version wasn't as "DRY" as his, but the rest of the team agreed that this was more readable.


Planet Linux AustraliaDavid Rowe: FreeDV QSO Party 2019

My local radio club, the Amateur Radio Experimenters Group (AREG), have organised a special FreeDV QSO Party Weekend from April 27th 0300z to April 28th 0300z 2019. This is a great chance to try out FreeDV, work Australia using open source HF digital voice, and even talk to me!

All the details, including frequencies, times and the point scoring system, are over on the AREG site.

Krebs on SecurityFormer Russian Cybersecurity Chief Sentenced to 22 Years in Prison

A Russian court has handed down lengthy prison terms for two men convicted on treason charges for allegedly sharing information about Russian cybercriminals with U.S. law enforcement officials. The men — a former Russian cyber intelligence official and an executive at Russian security firm Kaspersky Lab — were reportedly prosecuted for their part in an investigation into Pavel Vrublevsky, a convicted cybercriminal who ran one of the world’s biggest spam networks and was a major focus of my 2014 book, Spam Nation.

Sergei Mikhailov, formerly deputy chief of Russia’s top anti-cybercrime unit, was sentenced today to 22 years in prison. The court also levied a 14-year sentence against Ruslan Stoyanov, a senior employee at Kaspersky Lab. Both men maintained their innocence throughout the trial.

Following their dramatic arrests in 2016, many news media outlets reported that the men were suspected of having tipped off American intelligence officials about those responsible for Russian hacking activities tied to the 2016 U.S. presidential election.

That’s because two others arrested for treason at the same time — Mikhailov subordinates Georgi Fomchenkov and Dmitry Dokuchaev — were reported by Russian media to have helped the FBI investigate Russian servers linked to the 2016 hacking of the Democratic National Committee. The case against Fomchenkov and Dokuchaev has not yet gone to trial.

What exactly was revealed during the trial of Mikhailov and Stoyanov is not clear, as the details surrounding it were classified. But according to information first reported by KrebsOnSecurity in January 2017, the most likely explanation for their prosecution stemmed from a long-running grudge held by Pavel Vrublevsky, a Russian businessman who ran a payment firm called ChronoPay and for years paid most of the world’s top spammers and virus writers to pump malware and hundreds of billions of junk emails into U.S. inboxes.

In 2013, Vrublevsky was convicted of hiring his most-trusted spammer and malware writer to launch a crippling distributed denial-of-service (DDoS) attack against one of his company’s chief competitors.

Prior to Vrublevsky’s conviction, massive amounts of files and emails were taken from Vrublevsky’s company and shared with this author. Those included spreadsheets chock full of bank account details tied to some of the world’s most active cybercriminals, and to a vast network of shell corporations created by Vrublevsky and his co-workers to help launder the proceeds from their various online pharmacy, spam and fake antivirus operations.

In a telephone interview with this author in 2011, Vrublevsky said he was convinced that Mikhailov was taking information gathered by Russian government cybercrime investigators and feeding it to U.S. law enforcement and intelligence agencies. Vrublevsky told me then that if ever he could prove for certain Mikhailov was involved in leaking incriminating data on ChronoPay, he would have someone “tear him a new asshole.”

An email that Vrublevsky wrote to a ChronoPay employee in 2010 eerily presages the arrests of Mikhailov and Stoyanov, voicing Vrublevsky’s suspicion that the two were closely involved in leaking ChronoPay emails and documents that were seized by Mikhailov’s own division. A copy of that email is shown in Russian in the screen shot below. A translated version of the message text is available here (PDF).

A copy of an email Vrublevsky sent to a ChronoPay co-worker about his suspicions that Mikhailov and Stoyanov were leaking government secrets.

Predictably, Vrublevsky has taken to gloating on Facebook about today’s prison sentences, calling them “good news.” He told the Associated Press that Mikhailov had abused his position at the FSB to go after Internet entrepreneurs like him and “turn them into cybercriminals,” thus “whipping up cyber hysteria around the world.”

This is a rather rich quote, as Vrublevsky was already a well-known and established cybercriminal long before Mikhailov came into his life. Also, I would not put it past Vrublevsky to have somehow greased the wheels of this prosecution.

As I noted in Spam Nation, emails leaked from ChronoPay suggest that Vrublevsky funneled as much as $1 million to corrupt Russian political leaders for the purpose of initiating a criminal investigation into Igor Gusev, a former co-founder of ChronoPay who went on to create a pharmacy spam operation that closely rivaled Vrublevsky’s own pharmacy spam operation — Rx Promotion.

Vrublevsky crowing on Facebook about the sentencing of Mikhailov (left) and Stoyanov.


CryptogramGen. Nakasone on US Cyber Command

Really interesting article by and interview with Paul M. Nakasone (Commander of US Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) in the current issue of Joint Forces Quarterly. He talks about the evolving role of US Cyber Command, and its new posture of "persistent engagement" using a "cyber-persistent force."

From the article:

We must "defend forward" in cyberspace, as we do in the physical domains. Our naval forces do not defend by staying in port, and our airpower does not remain at airfields. They patrol the seas and skies to ensure they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace. Persistent engagement of our adversaries in cyberspace cannot be successful if our actions are limited to DOD networks. To defend critical military and national interests, our forces must operate against our enemies on their virtual territory as well. Shifting from a response outlook to a persistence force that defends forward moves our cyber capabilities out of their virtual garrisons, adopting a posture that matches the cyberspace operational environment.

From the interview:

As we think about cyberspace, we should agree on a few foundational concepts. First, our nation is in constant contact with its adversaries; we're not waiting for adversaries to come to us. Our adversaries understand this, and they are always working to improve that contact. Second, our security is challenged in cyberspace. We have to actively defend; we have to conduct reconnaissance; we have to understand where our adversary is and his capabilities; and we have to understand their intent. Third, superiority in cyberspace is temporary; we may achieve it for a period of time, but it's ephemeral. That's why we must operate continuously to seize and maintain the initiative in the face of persistent threats. Why do the threats persist in cyberspace? They persist because the barriers to entry are low and the capabilities are rapidly available and can be easily repurposed. Fourth, in this domain, the advantage favors those who have initiative. If we want to have an advantage in cyberspace, we have to actively work to either improve our defenses, create new accesses, or upgrade our capabilities. This is a domain that requires constant action because we're going to get reactions from our adversary.

[...]

Persistent engagement is the concept that states we are in constant contact with our adversaries in cyberspace, and success is determined by how we enable and act. In persistent engagement, we enable other interagency partners. Whether it's the FBI or DHS, we enable them with information or intelligence to share with elements of the CIKR [critical infrastructure and key resources] or with select private-sector companies. The recent midterm elections is an example of how we enabled our partners. As part of the Russia Small Group, USCYBERCOM and the National Security Agency [NSA] enabled the FBI and DHS to prevent interference and influence operations aimed at our political processes. Enabling our partners is two-thirds of persistent engagement. The other third rests with our ability to act -- that is, how we act against our adversaries in cyberspace. Acting includes defending forward. How do we warn, how do we influence our adversaries, how do we position ourselves in case we have to achieve outcomes in the future? Acting is the concept of operating outside our borders, being outside our networks, to ensure that we understand what our adversaries are doing. If we find ourselves defending inside our own networks, we have lost the initiative and the advantage.

[...]

The concept of persistent engagement has to be teamed with "persistent presence" and "persistent innovation." Persistent presence is what the Intelligence Community is able to provide us to better understand and track our adversaries in cyberspace. The other piece is persistent innovation. In the last couple of years, we have learned that capabilities rapidly change; accesses are tenuous; and tools, techniques, and tradecraft must evolve to keep pace with our adversaries. We rely on operational structures that are enabled with the rapid development of capabilities. Let me offer an example regarding the need for rapid change in technologies. Compare the air and cyberspace domains. Weapons like JDAMs [Joint Direct Attack Munitions] are an important armament for air operations. How long are those JDAMs good for? Perhaps 5, 10, or 15 years, sometimes longer given the adversary. When we buy a capability or tool for cyberspace...we rarely get a prolonged use we can measure in years. Our capabilities rarely last 6 months, let alone 6 years. This is a big difference in two important domains of future conflict. Thus, we will need formations that have ready access to developers.

Solely from a military perspective, these are obviously the right things to be doing. From a societal perspective -- from the perspective of a potential arms race -- I'm much less sure. I'm also worried about the singular focus on nation-state actors in an environment where capabilities diffuse so quickly. But Cyber Command's job is not cybersecurity and resilience.

The whole thing is worth reading, regardless of whether you agree or disagree.

EDITED TO ADD (2/26): As an example, US Cyber Command disrupted a Russian troll farm during the 2018 midterm elections.

CryptogramAttacking Soldiers on Social Media

A research group at NATO's Strategic Communications Center of Excellence catfished soldiers involved in a European military exercise -- we don't know what country they were from -- to demonstrate the power of the attack technique.

Over four weeks, the researchers developed fake pages and closed groups on Facebook that looked like they were associated with the military exercise, as well as profiles impersonating service members both real and imagined.

To recruit soldiers to the pages, they used targeted Facebook advertising. Those pages then promoted the closed groups the researchers had created. Inside the groups, the researchers used their phony accounts to ask the real service members questions about their battalions and their work. They also used these accounts to "friend" service members. According to the report, Facebook's Suggested Friends feature proved helpful in surfacing additional targets.

The researchers also tracked down service members' Instagram and Twitter accounts and searched for other information available online, some of which a bad actor might be able to exploit. "We managed to find quite a lot of data on individual people, which would include sensitive information," Biteniece says. "Like a serviceman having a wife and also being on dating apps."

By the end of the exercise, the researchers identified 150 soldiers, found the locations of several battalions, tracked troop movements, and compelled service members to engage in "undesirable behavior," including leaving their positions against orders.

"Every person has a button. For somebody there's a financial issue, for somebody it's a very appealing date, for somebody it's a family thing," Sarts says. "It's varied, but everybody has a button. The point is, what's openly available online is sufficient to know what that is."

This is the future of warfare. It's one of the reasons China stole all of that data from the Office of Personnel Management. If indeed a country's intelligence service was behind the Equifax attack, this is why they did it.

Go back and read this scenario from the Center for Strategic and International Studies. Why wouldn't a country intent on starting a war do it that way?

Worse Than FailureBeyond Brillant

We've all had cow-orkers who couldn't do their jobs. Some people have even had the privilege of working with Paula.

Jarad should be so lucky.

He worked at Initech in a small development group, building a Windows client tool that customers used to interface with their server. One day, they decided to port the app from .NET to Java. The powers-that-be recommended a highly regarded Lead Java Developer, Kiesha, from Intelligenuity, to lead the project. "Don't worry," they said, "Intelligenuity only employs the most brillant programmers."

At the first group stand up meeting of the project, their manager announced that they would use Eclipse for the Java project. Kiesha posited "I don't have Eclipse. Could someone please send it to me?" So Jarad sent her the link. At the next stand up, he followed up to ask if she had gotten Eclipse installed. She said "I was blocked because I was unable to download it, so I waited for the next meeting to ask for help." Their manager jumped on her machine and solved her problem by clicking on the download link for her.

Fast forward to the next meeting and she said that she was still unable to proceed because "Eclipse was having some problem with 'JDK' and could someone please send me that?" Jarad sent her that link too. Several days later at the next meeting, she said "Eclipse isn't working because it needs a 'jar' file, so could someone please send one to me?" And after that, "Could someone please send me sample code for doing classes because Eclipse keeps saying NullPointerException".

Finally the manager changed the meeting structure. They would continue their usual standups for the Windows client, but they would add a separate dedicated meeting with just Kiesha. Eventually, they found out that she and her husband were buddies with a highly placed C** executive and his wife. The separate meeting was to "guarantee that she's successful," which meant their manager was writing the code for her.

One day, Kiesha told the manager that a customer was having a critical problem with the web portal, and that it was of the utmost importance that they have a meeting with the customer as soon as possible to help resolve the issue.

Their manager set up a meeting with the customer, himself, Kiesha, Jarad, and the project manager to solve it once and for all. The day of the meeting, the customer was surprised at how many support people and managers showed up. The customer explained. "The, um… 'portal problem' is that we asked Kiesha for the URL of the web portal? This could have been an email."

Sometimes, there is justice in this world, as Kiesha finally lost her job.


Planet Linux AustraliaBen Martin: 5 axis cnc fun!

The 5th axis build came together surprisingly well. I had expected much more resistance getting the unit to be known to both fusion360 and LinuxCNC. There is still some tinkering to be done for sure but I can get some reasonable results already. The video below gives an overview of the design:



Shown below is a silent movie of a few test jobs I created to see how well tool contact would be maintained during motion in the A and B axes while the x, y and z axes are moved to keep the tool in the right position. This is the flow toolpath in Fusion360 in action. Unfamiliarity with these CAM paths makes for a learning curve, which is interesting when paired with a custom-made 5th axis that you are trying to debug at the same time.



I haven't yet tested how well the setup works when cutting harder materials like alloy. It is much quieter and more forgiving to test cutting on timber, where you can be reasonably sure about the toolpaths and that you are not going to accidentally crash too deep into the material after a 90 degree rotation.


Planet Linux AustraliaCraige McWhirter: New Dark Age: Technology and the End of the Future

by James Bridle

New Dark Age: Technology and the End of the Future

It may be my first book for 2019 but I'm going to put it out there: this is my must-read book for 2019 already. Considering it was published in 2018 to broad acclaim, it may be a safe call.

tl;dr: Read this book. It's well resourced, thoroughly referenced and well thought out; the information it compiles and the lines it draws may well redraw the way you see our industry and the world. Even if you're already across the issues, the hard facts will still cause you to draw breath.

I read this book in bursts over 4 weeks, each chapter packing its own informative punch. The narrative first grabbed my attention on page 4, where the weakness of learning to code alone was fleshed out.

"Computational thinking is predominant in the world today, driving the worst trends in our societies and interactions, and must be opposed by real systemic literacy." page 4

Where it is argued that systemic literacy is much more important than learning to code - with a humorous but fitting plumbing analogy in the mix.

One of the recurring threads in the book is the titular "New Dark Age", with points drawn back to the various forces in modern society that are actively reducing our knowledge.

"And so we find ourselves today connected to vast repositories of knowledge, and yet we have not learned to think. In fact, the opposite is true: that which was intended to enlighten the world in practice darkens it. The abundance of information and the plurality of world-views now accessible to us through the Internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics." page 10

Also covered are more well known instances of corporate and government censorship, the traps of the modern convenience technologies.

"When an ebook is purchased from an online service, it remains the property of the seller, it's loan subject to revocation at any time - as happened when Amazon remotely deleted thousands of copies of 1984 and Animal Farm from customers' Kindles in 2009. Streaming music and video services filter the media available by legal jurisdiction and algorithmically determine 'personal' preferences. Academic journals determine access to knowledge by institutional affiliation and financial contribution as physical, open-access libraries close down." page 39

It was the "Climate" chapter that packed the biggest punch for me, as an issue I considered myself rather well across over the last 30 years, it turns out there was a significant factor I'd missed. A hint that surprise was coming came in an interesting diversion into clear air turbulence.

"An advisory circular on preventing turbulence-related injuries, published by the US Federal Aviation Administration in 2006, states that the frequency of turbulence accidents has increased steadily for more than a decade, from 0.3 accidents per million flights in 1989 to 1.7 in 2003." page 68

The reason for this increase was laid at the feet of increased CO2 levels in the atmosphere by Paul Williams of the National Centre for Atmospheric Science, and the implications were expounded upon in his 2013 paper in Nature Climate Change:

"...in winter, most clear air turbulence measures show a 10-40 per cent increase in the median strength...40-70 per cent increase in the frequency of occurrence of moderate or greater turbulence." page 69

The real punch in the guts came on page 73, where I first came across the concept of "Peak Knowledge" and how climate change is playing its defining role in that decline, with President of the American Meteorological Society William B Gail wondering if:

"we have already passed through 'peak knowledge", just as we may have already passed 'peak oil'." page 73

Wondering what that claim was based on, I found the next few paragraphs of information can be summarised in the following points:

  • From 1000 to 1750 CE, CO2 was at 275-285 parts per million (ppm).
  • 295ppm by the start of the 20th century
  • 310ppm by 1950
  • 325ppm in 1970
  • 350ppm in 1988
  • 375ppm by 2004
  • 400ppm by 2015 - the first time in 800,000 years
  • 1,000ppm is projected to be passed by the end of this century.

"At 1,000ppm, human cognitive ability drops by 21%" page 74

Then a couple of bombshells:

"CO2 already reaches 500ppm in industrial cities"

"indoors in poorly ventilated schools, homes and workplaces it can regularly exceed 1,000ppm - substantial numbers of schools in California and Texas measured in 2012 breached 2,000ppm."

The implications of this are fairly obvious.

All this is by the end of chapter 3. It's a gritty, honest look at where we're at and where we're going. It's not pretty, but as the old saying goes, to be forewarned is to be forearmed.

Do yourself a favour, read it.

Planet Linux AustraliaStewart Smith: CVE-2019-6260: Gaining control of BMC from the host processor

These are the details for CVE-2019-6260, which has been nicknamed “pantsdown” due to the feeling that we’ve “caught chunks of the industry with their…”, combined with the fact that naming things is hard: if you pick a bad name, somebody has to come up with a better one before you publish.

I expect OpenBMC to have a statement shortly.

The ASPEED ast2400 and ast2500 Baseboard Management Controller (BMC) hardware and firmware implement Advanced High-performance Bus (AHB) bridges, which allow arbitrary read and write access to the BMC’s physical address space from the host, or from the network if the BMC console uart is attached to a serial concentrator (this is atypical for most systems).

Common configuration of the ASPEED BMC SoC’s hardware features leaves it open to “remote” unauthenticated compromise from the host and from the BMC console. This stems from AHB bridges on the LPC and PCIe buses, another on the BMC console UART (hardware password protected), and the ability of the X-DMA engine to address all of the BMC’s M-Bus (memory bus).

This affects multiple BMC firmware stacks, including OpenBMC, AMI’s BMC, and SuperMicro. It is independent of host processor architecture, and has been observed on systems with x86_64 processors and IBM POWER processors (there is no reason to suggest that other architectures wouldn’t be affected; these are just the ones we’ve been able to get access to).

The LPC, PCIe and UART AHB bridges are all explicitly features of Aspeed’s designs: They exist to recover the BMC during firmware development or to allow the host to drive the BMC hardware if the BMC has no firmware of its own. See section 1.9 of the AST2500 Software Programming Guide.

The typical consequence of external, unauthenticated, arbitrary AHB access is that the BMC fails to ensure all three of confidentiality, integrity and availability for its data and services. For instance it is possible to:

  1. Reflash or dump the firmware of a running BMC from the host
  2. Perform arbitrary reads and writes to BMC RAM
  3. Configure an in-band BMC console from the host
  4. “Brick” the BMC by disabling the CPU clock until the next AC power cycle

Using 1 we can obviously implant any malicious code we like, with the impact of BMC downtime while the flashing and reboot take place. This may take the form of minor, malicious modifications to the officially provisioned BMC image, as we can extract, modify, then repackage the image to be re-flashed on the BMC. As the BMC potentially has no secure boot facility it is likely difficult to detect such actions.

Abusing 3 may require valid login credentials, but combining 1 and 2 we can simply change the locks on the BMC by replacing all instances of the root shadow password hash in RAM with a chosen password hash – one instance of the hash is in the page cache, and from that point forward any login process will authenticate with the chosen password.

We obtain the current root password hash by using 1 to dump the current flash content, then using https://github.com/ReFirmLabs/binwalk to extract the rootfs, then simply loop-mounting the rootfs to access /etc/shadow. At least one BMC stack doesn’t require this, and instead offers “Press enter for console”.

IBM has internally developed a proof-of-concept application that we intend to open-source, likely as part of the OpenBMC project, that demonstrates how to use the interfaces and probes for their availability. The intent is that it be added to platform firmware test suites as a platform security test case. The application requires root user privilege on the host system for the LPC and PCIe bridges, or normal user privilege on a remote system to exploit the debug UART interface. Access from userspace demonstrates the vulnerability of systems in bare-metal cloud hosting lease arrangements where the BMC is likely in a separate security domain to the host.
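As a purely illustrative aside (this is not the IBM proof-of-concept, which had not been published at the time of writing), a very coarse host-side first check is simply to look for an ASPEED PCI function in sysfs before attempting any deeper probing. The sketch below assumes a Linux host and Java 11+, and uses ASPEED’s public PCI vendor ID (0x1a03); finding such a function only tells you the VGA P2A / X-DMA paths are physically reachable from the host, not whether the bridges are enabled or mitigated:

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Hypothetical sketch: list PCI functions whose vendor ID is ASPEED (0x1a03).
// Presence of such a function suggests the VGA P2A / X-DMA paths described
// above are at least reachable from this host; it says nothing about whether
// the bridges are currently enabled or have been mitigated in firmware.
public class AspeedPciScan {
    public static void main(String[] args) throws IOException {
        try (Stream<Path> devices = Files.list(Paths.get("/sys/bus/pci/devices"))) {
            devices.forEach(dev -> {
                try {
                    String vendor = Files.readString(dev.resolve("vendor")).trim();
                    String device = Files.readString(dev.resolve("device")).trim();
                    if ("0x1a03".equals(vendor)) {
                        System.out.println(dev.getFileName() + " vendor=" + vendor
                                + " device=" + device + " (ASPEED function present)");
                    }
                } catch (IOException e) {
                    // Some sysfs attributes may not be readable; skip them.
                }
            });
        }
    }
}

A real test case would go further and check the state of the relevant SCU and LPC controller bits described below, which is exactly the sort of platform-specific knowledge a firmware test suite needs to encode.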

OpenBMC Versions affected: Up to at least 2.6, all supported Aspeed-based platforms

It only affects systems using the ASPEED ast2400 and ast2500 SoCs. There has not been any investigation into other hardware.

The specific issues are listed below, along with some judgement calls on their risk.

iLPC2AHB bridge Pt I

State: Enabled at cold start
Description: A SuperIO device is exposed that provides access to the BMC’s address-space
Impact: Arbitrary reads and writes to the BMC address-space
Risk: High – known vulnerability and explicitly used as a feature in some platform designs
Mitigation: Can be disabled by configuring a bit in the BMC’s LPC controller, however see Pt II.

iLPC2AHB bridge Pt II

State: Enabled at cold start
Description: The bit disabling the iLPC2AHB bridge only removes write access – reads are still possible.
Impact: Arbitrary reads of the BMC address-space
Risk: High – we expect the capability and mitigation are not well known, and the mitigation has side-effects
Mitigation: Disable SuperIO decoding on the LPC bus (0x2E/0x4E decode). Decoding is controlled via hardware strapping and can be turned off at runtime, however disabling SuperIO decoding also removes the host’s ability to configure SUARTs, System wakeups, GPIOs and the BMC/Host mailbox

PCIe VGA P2A bridge

State: Enabled at cold start
Description: The VGA graphics device provides a host-controllable window mapping onto the BMC address-space
Impact: Arbitrary reads and writes to the BMC address-space
Risk: Medium – the capability is known to some platform integrators and may be disabled in some firmware stacks
Mitigation: Can be disabled or filter writes to coarse-grained regions of the AHB by configuring bits in the System Control Unit

DMA from/to arbitrary BMC memory via X-DMA

State: Enabled at cold start
Description: X-DMA available from VGA and BMC PCI devices
Impact: Misconfiguration can expose the entirety of the BMC’s RAM to the host
AST2400 Risk: High – SDK u-boot does not constrain X-DMA to VGA reserved memory
AST2500 Risk: Low – SDK u-boot restricts X-DMA to VGA reserved memory
Mitigation: X-DMA accesses are configured to remap into VGA reserved memory in u-boot

UART-based SoC Debug interface

State: Enabled at cold start
Description: Pasting a magic password over the configured UART exposes a hardware-provided debug shell. The capability is only exposed on one of UART1 or UART5, and interactions are only possible via the physical IO port (cannot be accessed from the host)
Impact: Misconfiguration can expose the BMC’s address-space to the network if the BMC console is made available via a serial concentrator.
Risk: Low
Mitigation: Can be disabled by configuring a bit in the System Control Unit

LPC2AHB bridge

State: Disabled at cold start
Description: Maps LPC Firmware cycles onto the BMC’s address-space
Impact: Misconfiguration can expose vulnerable parts of the BMC’s address-space to the host
Risk: Low – requires reasonable effort to configure and enable.
Mitigation: Don’t enable the feature if not required.
Note: As a counter-point, this feature is used legitimately on OpenPOWER systems to expose the boot flash device content to the host

PCIe BMC P2A bridge

State: Disabled at cold start
Description: PCI-to-BMC address-space bridge allowing memory and IO accesses
Impact: Enabling the device provides limited access to BMC address-space
Risk: Low – requires some effort to enable, constrained to specific parts of the BMC address space
Mitigation: Don’t enable the feature if not required.

Watchdog setup

State: Required system function, always available
Description: Misconfiguring the watchdog to use “System Reset” mode for BMC reboot will re-open all the “enabled at cold start” backdoors until the firmware reconfigures the hardware otherwise. Rebooting the BMC is generally possible from the host via IPMI “mc reset” command, and this may provide a window of opportunity for BMC compromise.
Impact: May allow arbitrary access to BMC address space via any of the above mechanisms
Risk: Low – “System Reset” mode is unlikely to be used for reboot due to obvious side-effects
Mitigation: Ensure BMC reboots always use “SOC Reset” mode

The CVSS score for these vulnerabilities is: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/E:F/RL:U/RC:C/CR:H/IR:H/AR:M/MAV:L/MAC:L/MPR:N/MUI:N/MS:U/MC:H/MI:H/MA:H

There is some debate about whether this is a local or remote vulnerability; it depends on whether you consider the connection between the BMC and the host processor to be a network.

The fix is platform dependent as it can involve patching both the BMC firmware and the host firmware.

For example, we have mitigated these vulnerabilities for OpenPOWER systems, both on the host and BMC side. OpenBMC has a u-boot patch that disables the features:

https://gerrit.openbmc-project.xyz/#/c/openbmc/meta-phosphor/+/13290/

Platforms can opt in to this in the following way:

https://gerrit.openbmc-project.xyz/#/c/openbmc/meta-ibm/+/17146/

The process is opt-in for OpenBMC platforms because platform maintainers know whether their platform uses the affected hardware features. This is important when disabling the iLPC2AHB bridge, as it can be a bit of a finicky process.

See also https://gerrit.openbmc-project.xyz/c/openbmc/docs/+/11164 for a WIP OpenBMC Security Architecture document which should eventually contain all these details.

For OpenPOWER systems, the host firmware patches are contained in op-build v2.0.11 and enabled for certain platforms. Again, this is not by default for all platforms as there is BMC work required as well as per-platform changes.

Credit for finding these problems: Andrew Jeffery, Benjamin Herrenschmidt, Jeremy Kerr, Russell Currey, Stewart Smith. There have been many more people who have helped with this issue, and they too deserve thanks.