Planet Russell


Worse Than Failure: Error'd: Text Should Go Here

"The fact that Microsoft values my PC's health over copyediting is why I thumbed up this window," Eric wrote.


"Great! Thanks Steam! How am I going to contact Paypal now?"


"Now serving order number Not a Number. A totally normal order number, don't question it," Pierre-Luc wrote.


"Sure, I had a pretty rough start getting my cheapo smart power outlet up and running, but hey, on the plus side it does look like 2 indeed got changed to 'two'" writes Bob.


Andy writes, "I'm not sure how much processing Dreamhost needs to do when making a password with the complexity of YXmztnS5vxA6, but this screenshot was taken after 45 seconds of loading."


"I depend heavily on Microsoft Null, but I can't really imagine what kind of updates it might require," Matthew F. wrote.



Planet Debian: Russ Allbery: Review: Lent

Review: Lent, by Jo Walton

Publisher: Tor
Copyright: May 2019
ISBN: 1-4668-6572-5
Format: Kindle
Pages: 381

It is April 3rd, 1492. Brother Girolamo is a Dominican and the First Brother of San Marco in Florence. He can see and banish demons, as we find out in the first chapter when he cleanses the convent of Santa Lucia. The demons appear to be drawn by a green stone hidden in a hollowed-out copy of Pliny, a donation to the convent library from the King of Hungary. That green stone will be central to the story, but neither we nor Girolamo find out why for some time. The only hint is that the dying Lorenzo de' Medici implies that it is the stone of Titurel.

Brother Girolamo is also a prophet. He has the ability to see the future, sometimes explicitly and sometimes in symbolic terms. Sometimes the events can be changed, and sometimes they have the weight of certainty. He believes the New Cyrus will come over the Alps, leading to the sack and fall of Rome, and hopes to save Florence from the same fate by transforming it into the City of God.

If your knowledge of Italian Renaissance history is good, you may have already guessed the relevant history. The introduction of characters named Marsilio and Count Pico provides a further clue before Walton mentions Brother Girolamo's last name: Savonarola.

If, like me, you haven't studied Italian history but still think this sounds vaguely familiar, that may be because Savonarola and his brief religious rule of Florence are a topic of Chapter VI of Niccolò Machiavelli's The Prince. Brother Girolamo in Walton's portrayal is not the reactionary religious fanatic he is more often shown as, but if you know this part of history, you'll find many events of the first part of the book familiar.

The rest of this book... that's where writing this review becomes difficult.

About 40% of the way through Lent, and well into spoiler territory, this becomes a very different book. Exactly how isn't something I can explain without ruining a substantial portion of the plot. That also makes it difficult to talk about what Walton is doing in this novel, and to some extent even to describe its genre. I'll try, but the result will be unsatisfyingly vague.

Lent is set in an alternate historical universe in which both theology and magic work roughly the way that 15th century Christianity thought that they worked. Demons are real, although most people can't see them. Prophecy is real in a sense, although that's a bit more complicated. When Savonarola says that Florence is besieged by demons, he means that demons are literally arrayed against the walls of the city and attempting to make their ways inside. Walton applies the concreteness of science with its discoverable rules and careful analysis to prophecy, spiritual warfare, and other aspects of theology that would be spoilers.

Using Savonarola as the sympathetic main character is a bold choice. The historical figure is normally portrayed as the sort of villain everyone, including Machiavelli, loves to hate. Walton's version of the character is still arguably a religious fanatic, but the layers behind why he is so deeply religious and what he is attempting to accomplish are deep and complex. He has a single-minded belief in a few core principles, and he's acting on the basis of prophecy that he believes completely (for more reasons than either he or the reader knows at first). But outside of those areas of uncompromising certainty, he's thoughtful and curious, befriends other thoughtful and curious people, supports philosophy, and has a deep sense of fairness and honesty. When he talks about reform of the church in Lent, he's both sincere and believable. (That sincerity would not survive a bonfire of the vanities that was a literal book burning, but Walton argues forcefully in an afterword that this popular belief contradicts accounts from primary sources.)

Lent starts as an engrossing piece of historical fiction, pulling me into the fictional thoughts of a figure I would not have expected to like nearly as much as I did. I was not at all bored by the relatively straightforward retelling of Italian history and would have happily read more of it. The shifting of gears partway through adds additional intriguing depth, and it's fun to play what-if with medieval theology and explore the implications of all of it being literally true.

The ending, unfortunately, I thought was less successful, mostly due to pacing. Story progress slows in a way that has an important effect on Savonarola, but starts to feel a touch tedious. Then Walton pivots a bit too quickly between despair and success, without giving me quite enough emotional foundation for the resolution. She also dropped me off the end of the book more abruptly than I wanted. I'm not sure how she could have possibly continued beyond the ending, to be fair, but still, I wanted to know what would happen in the next chapter (and the theology would have been delightfully subversive). But this is also the sort of book that's exceedingly hard to end.

I would call Lent more intriguing than fully successful, but I enjoyed reading it despite not having much inherent interest in Florence, Renaissance theology, or this part of Italian history. If any of those topics attracts you more than it does me, I suspect you will find this book worth reading.

Rating: 7 out of 10


Planet Debian: Steve Kemp: Exporting github repositories to myrepos

myrepos is an excellent tool for applying git operations to multiple repositories, and I use it extensively.

Given a configuration file like this:

..

[github.com/skx/asql]
checkout = git clone git@github.com:skx/asql.git

[github.com/skx/bookmarks.public]
checkout = git clone git@github.com:skx/bookmarks.public.git

[github.com/skx/Buffalo-220-NAS]
checkout = git clone git@github.com:skx/Buffalo-220-NAS.git

[github.com/skx/calibre-plugins]
checkout = git clone git@github.com:skx/calibre-plugins.git

...

You can clone all the repositories with one command:

mr -j5 --config .mrconfig.github checkout

Then pull/update them easily:

mr -j5 --config .mrconfig.github update

It works with git repositories, mercurial, and more. (The -j5 argument means to run five jobs in parallel. Much speed, many fast. Big wow.)

I wrote a simple golang utility to use the github API to generate a suitable configuration including:

  • All your personal repositories.
  • All the repositories which belong to organizations you're a member of.
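
For illustration, a minimal sketch of the core idea might look something like this. This is not the actual utility: the GITHUB_TOKEN environment variable and the struct fields are assumptions, only the first page of personal repositories is fetched (a real tool would follow the Link headers for pagination), and organization repositories are left out.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

// Only the two fields we need from GitHub's repository objects.
type repo struct {
    FullName string `json:"full_name"`
    SSHURL   string `json:"ssh_url"`
}

func main() {
    // List the authenticated user's repositories (first page only).
    req, err := http.NewRequest("GET", "https://api.github.com/user/repos?per_page=100", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var repos []repo
    if err := json.NewDecoder(resp.Body).Decode(&repos); err != nil {
        panic(err)
    }
    // Emit one myrepos stanza per repository, matching the format above.
    for _, r := range repos {
        fmt.Printf("[github.com/%s]\ncheckout = git clone %s\n\n", r.FullName, r.SSHURL)
    }
}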

Currently it only supports github, but I'll update it to include self-hosted and API-compatible services such as gitbucket. Is there any interest in such a tool? Or have you all written your own already?

(I have the feeling I've written this tool in Perl, Ruby, and even using curl a time or two already. This time I'll do it properly and publish it to save effort next time!)

Cryptogram: Securing Tiffany's Move

Story of how Tiffany & Company moved all of its inventory from one store to another. Short summary: careful auditing and a lot of police.

Cryptogram: 5G Security

The security risks inherent in Chinese-made 5G networking equipment are easy to understand. Because the companies that make the equipment are subservient to the Chinese government, they could be forced to include backdoors in the hardware or software to give Beijing remote access. Eavesdropping is also a risk, although efforts to listen in would almost certainly be detectable. More insidious is the possibility that Beijing could use its access to degrade or disrupt communications services in the event of a larger geopolitical conflict. Since the internet, especially the "internet of things," is expected to rely heavily on 5G infrastructure, potential Chinese infiltration is a serious national security threat.

But keeping untrusted companies like Huawei out of Western infrastructure isn't enough to secure 5G. Neither is banning Chinese microchips, software, or programmers. Security vulnerabilities in the standards (the protocols and software for 5G) ensure that vulnerabilities will remain, regardless of who provides the hardware and software. These insecurities are a result of market forces that prioritize costs over security and of governments, including the United States, that want to preserve the option of surveillance in 5G networks. If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security.

To be sure, there are significant security improvements in 5G over 4G, in encryption, authentication, integrity protection, privacy, and network availability. But the enhancements aren't enough.

The 5G security problems are threefold. First, the standards are simply too complex to implement securely. This is true for all software, but the 5G protocols offer particular difficulties. Because of how it is designed, the system blurs the wireless portion of the network connecting phones with base stations and the core portion that routes data around the world. Additionally, much of the network is virtualized, meaning that it will rely on software running on dynamically configurable hardware. This design dramatically increases the points vulnerable to attack, as does the expected massive increase in both things connected to the network and the data flying about it.

Second, there's so much backward compatibility built into the 5G network that older vulnerabilities remain. 5G is an evolution of the decade-old 4G network, and most networks will mix generations. Without the ability to do a clean break from 4G to 5G, it will simply be impossible to improve security in some areas. Attackers may be able to force 5G systems to use more vulnerable 4G protocols, for example, and 5G networks will inherit many existing problems.

Third, the 5G standards committees missed many opportunities to improve security. Many of the new security features in 5G are optional, and network operators can choose not to implement them. The same happened with 4G; operators even ignored security features defined as mandatory in the standard because implementing them was expensive. But even worse, for 5G, development, performance, cost, and time to market were all prioritized over security, which was treated as an afterthought.

Already problems are being discovered. In November 2019, researchers published vulnerabilities that allow 5G users to be tracked in real time, be sent fake emergency alerts, or be disconnected from the 5G network altogether. And this wasn't the first reporting to find issues in 5G protocols and implementations.

Chinese, Iranians, North Koreans, and Russians have been breaking into U.S. networks for years without having any control over the hardware, the software, or the companies that produce the devices. (And the U.S. National Security Agency, or NSA, has been breaking into foreign networks for years without having to coerce companies into deliberately adding backdoors.) Nothing in 5G prevents these activities from continuing, even increasing, in the future.

Solutions are few and far between and not very satisfying. It's really too late to secure 5G networks. Susan Gordon, then-U.S. principal deputy director of national intelligence, had it right when she said last March: "You have to presume a dirty network." Indeed, the United States needs to accept 5G's insecurities and build secure systems on top of it. In some cases, doing so isn't hard: Adding encryption to an iPhone or a messaging system like WhatsApp provides security from eavesdropping, and distributed protocols provide security from disruption, regardless of how insecure the network they operate on is. In other cases, it's impossible. If your smartphone is vulnerable to a downloaded exploit, it doesn't matter how secure the networking protocols are. Often, the task will be somewhere in between these two extremes.

5G security is just one of the many areas in which near-term corporate profits prevailed against broader social good. In a capitalist free market economy, the only solution is to regulate companies, and the United States has not shown any serious appetite for that.

What's more, U.S. intelligence agencies like the NSA rely on inadvertent insecurities for their worldwide data collection efforts, and law enforcement agencies like the FBI have even tried to introduce new ones to make their own data collection efforts easier. Again, near-term self-interest has so far triumphed over society's long-term best interests.

In turn, rather than mustering a major effort to fix 5G, what's most likely to happen is that the United States will muddle along with the problems the network has, as it has done for decades. Maybe things will be different with 6G, which is starting to be discussed in technical standards committees. The U.S. House of Representatives just passed a bill directing the State Department to participate in the international standards-setting process so that it is not just run by telecommunications operators and more interested countries, but there is no chance of that measure becoming law.

The geopolitics of 5G are complicated, involving a lot more than security. China is subsidizing the purchase of its companies' networking equipment in countries around the world. The technology will quickly become critical national infrastructure, and security problems will become life-threatening. Both criminal attacks and government cyber-operations will become more common and more damaging. Eventually, Washington will have to do something. That something will be difficult and expensive; let's hope it won't also be too late.

This essay previously appeared in Foreign Policy.

EDITED TO ADD (1/16): Slashdot thread.

Planet Debian: Dirk Eddelbuettel: RcppRedis 0.1.10: Switch to tinytest

Another minor release of RcppRedis just arrived on CRAN, following a fairly long break since the last release in October 2018.

RcppRedis is one of several packages connecting R to the fabulous Redis in-memory data structure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads for several years now.

This release switches to the fabulous tinytest package, allowing for very flexible testing during development and deployment—three cheers for easily testing installed packages too.

Changes in version 0.1.10 (2020-01-16)

  • The package now uses tinytest for unit tests (Dirk in #41).

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: Critical Windows Vulnerability Discovered by NSA

Yesterday's Microsoft Windows patches included a fix for a critical vulnerability in the system's crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That's really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA's Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and -- to my shock -- took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation's cyberweapons stash -- my personal favorite theory -- she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And -- seriously -- patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability -- why wouldn't it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

Worse Than Failure: CodeSOD: Switch Off

There are certain things which you see in code that, at first glance, if you haven’t already learned better, look like they might almost be clever. One of those is any construct that starts with:

switch(true) {…}

It seems tempting at various points. Your cases can be boolean conditions now, but you can also collapse cases together, getting tricky with breaks to build complex logic. It’s more compact than a chain of ifs. It’s also almost always the wrong thing to do.

Kasha stumbled across this while tracking down a bug:

    // The variable names in this code have been anonymized to protect the guilty. In the original they were actually ok.
    private function foo($a, $b)
    {
        switch (true){
            case ($a&&$b): return 'b';
                break;
            case (!$a&&!$b): return 'c';
                break;
            case $a: return 'a';
                break;
            casedefault: return 'unknown';
                break;
        }
    }

As Kasha’s comment tells us, we won’t judge by the variable names in use here. Even so, the awkward switch also contains awkward logic, and seems designed for unreadability. It’s not the most challenging logic to trace. Even Kasha writes “I comprehended the piece just fine at first while looking at it, and only later it hit me how awkward it is.” And that’s the real issue: it’s awkward. It’s not eye-bleedingly bad. It’s not cringe-worthy. It’s just the sort of thing that you see in your codebase and grumble about each time you see it.

And that’s the real WTF in this case. The developer responsible for the code produces a lot of this kind of code. They never use an if if they can compact it into a switch. They favor while(true) with breaks over sensible while loops.

And in this case, they left a crunch-induced typo which created the bug: casedefault is not part of the PHP switch syntax. Like most languages with a switch, the last condition should simply be default:.


Planet Debian: François Marier: Sharing your WiFi connection with a NetworkManager hotspot

In-flight and hotel WiFi can be quite expensive and often insist on charging users extra to connect multiple devices. In order to avoid that, it's possible to easily create a WiFi hotspot using NetworkManager and an external USB WiFi adapter.

Creating the hotspot

The main trick is to right-click on the NetworkManager icon in the status bar and select "Edit Connections..." (not "Create New WiFi Network..." despite the promising name).

From there click the "+" button in the lower right then "WiFi" as the Connection Type. I like to use the computer name as the "Connection name".

In the WiFi tab, set the following:

  • SSID: machinename_nomap
  • Mode: hotspot
  • Device: (the device name of the USB WiFi adapter)

The _nomap suffix is there to opt out of the Google and Mozilla location services which could allow anybody to look up sightings of your device around the world.

In the WiFi Security tab:

  • Security: WPA & WPA2 Personal
  • Password: (a 63-character random password generated using pwgen -s 63)

While you may think that such a long password is inconvenient, it's now possible to add the network automatically by simply scanning a QR code on your phone.
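
For reference, such QR codes use the de facto WIFI: payload format, which a tool like qrencode can render straight into a terminal. The values below are placeholders; note that semicolons or colons in the password would need backslash-escaping:

qrencode -t ansiutf8 'WIFI:T:WPA;S:machinename_nomap;P:your-63-character-password;;'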

In the IPv4 Settings tab:

  • Method: Shared to other computers

Finally, in the IPv6 Settings tab:

  • Method: Ignore

I ended up with the following config in /etc/NetworkManager/system-connections/machinename:

[connection]
id=machinename
uuid=<long UUID string>
type=wifi
interface-name=wl...
permissions=
timestamp=1578533792

[wifi]
mac-address=<MAC>
mac-address-blacklist=
mode=ap
seen-bssids=<BSSID>
ssid=machinename_nomap

[wifi-security]
key-mgmt=wpa-psk
psk=<63-character password>

[ipv4]
dns-search=
method=shared

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
ip6-privacy=0
method=ignore

Firewall rules

In order for the packets to flow correctly, I opened up the following ports on my machine's local firewall:

-A INPUT -s 10.42.0.0/24 -j ACCEPT
-A FORWARD -d 10.42.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.42.0.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.42.0.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.42.0.1 -j ACCEPT
-A INPUT -d 10.42.0.255 -s 10.42.0.1 -j ACCEPT
-A INPUT -d 10.42.0.1 -s 10.42.0.0/24 -j ACCEPT


Planet Debian: Enrico Zini: Himblick one day later

This is part of a series of posts on the design and technical steps of creating Himblick, a digital signage box based on the Raspberry Pi 4.

One day after the first deploy, we went to check how the system was doing, and noticed some fine tuning to do, some pretty much urgent.


Inspecting

Since the system runs on a readonly rootfs with a writable tmpfs overlay, one can inspect the contents of /live/cow and see exactly what files were written since the last boot. ncdu -x /live/cow is a wonderful, wonderful thing.

In this way, we can quickly identify disk/memory usage leaks, and other possible unexpected surprises, like an unexpectedly updated apt package database.

An unexpectedly updated apt package database, with apt sources that may publish broken software, raised very loud alarm bells.

Disable apt timers

It looks like Raspbian ships with the automatic apt update/upgrade timer services enabled. In our case, that would give us a system that works when turned on, then upgrades overnight, and the next day won't play videos, until rebooted, when the tmpfs overlay will be reset and it will work again, until the next nightly upgrade, and so on.

In other words, a flaky system, that would thankfully fix itself at boot but break one day after booting. A system that would be very hard to debug. A system that would soon lose the trust of its users.

The first hotfix after deployment of Himblick was then to update the provisioning procedure to disable automatic package updates:

systemctl disable apt-daily.timer
systemctl mask apt-daily.timer
systemctl disable apt-daily-upgrade.timer
systemctl mask apt-daily-upgrade.timer
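
To confirm nothing is left scheduled, one can then list the matching timers (a quick sanity check, not part of the original provisioning procedure):

systemctl list-timers 'apt-daily*'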

Of course, the first system to be patched was on top of a very tall ladder close to a museum ceiling.

journald disk usage

Logging takes an increasing amount of space. In theory, using a systemd.volatile setup, journald does the right thing by default. Since we need to use dracut's hack instead of systemd.volatile, we need to take manual steps to bound the amount of disk space used.

Thankfully, it looks easy to fine-tune journald's disk usage.
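
For example, a hard cap along these lines in /etc/systemd/journald.conf would bound it (the values here are illustrative guesses, not what Himblick actually ships):

[Journal]
SystemMaxUse=64M
RuntimeMaxUse=64M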

Limit the growth of .xsession-errors

The .xsession-errors file grows indefinitely during the X session, and it cannot be rotated without restarting X. Deleting it won't help, as the X session still has the file open and keeps it allocated and growing on disk. At most, it can be occasionally truncated.

The file is created by /etc/X11/Xsession before sourcing other configuration files, so one cannot override its location with, say, /dev/null, or a pipe to some command, without editing the Xsession script itself.

Still, .xsession-errors is extremely useful for finding unexpected error output from X programs when something goes wrong.

In our case, himblick-player is the only program run in the X session. We can greatly limit the growth of .xsession-errors by making it log to a file instead of stderr and using one of Python's rotating logging handlers to limit the amount of stored logging, or by sending himblick's log directly to journald and letting journald take care of disk allocation.

Once that is sorted, we can change Himblick to capture the players' stdout and stderr, and log it, to avoid it going to .xsession-errors.

Planet Debian: Dmitry Shachnev: Qt packages built with OpenGL ES support are now available

Some time ago, there was a thread on debian-devel where we discussed how to make Qt packages work on hardware that supports OpenGL ES, but not desktop OpenGL.

My first proposal was to switch to OpenGL ES by default on ARM64, as that is the main affected architecture. After a lengthy discussion, it was decided to ship two versions of Qt packages instead, to support more (OpenGL variant, architecture) configurations.

So now I am announcing that we finally have the versions of Qt GUI and Qt Quick libraries that are built against OpenGL ES, and the release team helped us to rebuild the archive for compatibility with them. These packages are not co-installable together with the regular (desktop OpenGL) Qt packages, as they provide the same set of shared libraries. So most packages now have an alternative dependency like libqt5gui5 (>= 5.x) | libqt5gui5-gles (>= 5.x). Packages get such a dependency automatically if they are using ${shlibs:Depends}.

These Qt packages will be mostly needed by ARM64 users, however they may also be useful on other architectures. Note that armel and armhf are not affected, because there Qt was built against OpenGL ES from the very beginning. So far there are no plans to make two versions of Qt on these architectures, however we are open to bug reports.

To try that on your system (running Bullseye or Sid), just run this command:

# apt install libqt5gui5-gles libqt5quick5-gles

The other Qt submodule packages do not need a second variant, because they do not use any OpenGL API directly. Most of the Qt applications are installable with these packages. At the moment, Plasma is not installable because plasma-desktop FTBFS, but that will be fixed sooner or later.

One major missing thing is PyQt5. It is linking against some Qt helper functions that only exist in the desktop OpenGL build, so we will probably need to build a special version of PyQt5 for OpenGL ES.

If you want to use any OpenGL ES specific API in your package, build it against the qtbase5-gles-dev package instead of qtbase5-dev. There is no qtdeclarative5-gles-dev so far, however if you need it, please let us know.

In case you have any questions, please feel free to file a bug against one of the new packages, or contact us at the pkg-kde-talk mailing list.

Planet Debian: Dirk Eddelbuettel: RQuantLib 0.4.11: More polish

New year, new RQuantLib! A new release 0.4.11 of RQuantLib arrived overnight on CRAN; and a Debian upload will follow shortly.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does three new things. First, we fixed an oversight on our end and now allow a null calendar (as the C++ API does). Second, the package switched to tinytest as a few of my other packages have done, allowing for very flexible testing during development and deployment—three cheers for easily testing installed packages too. Third, per a kind nag from Kurt Hornik, I updated a few calls which the current QuantLib 1.17 marks as deprecated. That led to a compile issue with 1.16, so the change is conditional in one part. The complete set of changes is listed below:

Changes in RQuantLib version 0.4.11 (2020-01-15)

  • Changes in RQuantLib code:

    • The 'Null' calendar without weekends or holidays is now recognized.

    • The package now uses tinytest for unit tests (Dirk in #140).

    • Calls deprecated-in-QuantLib 1.17 were updated (Dirk in #144).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Wouter Verhelst: Running SReview in minikube

I spent the last week or so building Docker images and a set of YAML files that allows one to run SReview, my 99%-automated video review and transcode system, inside minikube, a program that sets up a mini Kubernetes cluster inside a VM for development purposes.

I wish the above paragraph would say "inside Kubernetes", but alas, unless your Kubernetes implementation has a ReadWriteMany volume that can be used from multiple nodes, this is not quite the case yet. In order to fix that, I am working on adding an abstraction layer that will transparently download files from an S3-compatible object store; but until that is ready, this work is not yet useful for large installations.

But that's fine! If you're wanting to run SReview for a small conference, you can do so with minikube. It won't have the redundancy and reliability things that proper Kubernetes provides you, but then you don't really need that for a conference of a few days.

Here's what you do:

  • Download minikube (see the link above)
  • Run minikube start, and wait for it to finish
  • Run minikube addons enable ingress
  • Clone the SReview git repository
  • From the toplevel of that repository, run perl -I lib scripts/sreview-config -a dump|sensible-pager to see an overview of the available configuration options.
  • Edit the file dockerfiles/kube/master.yaml to add your configuration variables, following the instructions near the top
  • Once the file is configured to your liking, run kubectl apply -f master.yaml -f storage-minikube.yaml
  • Add sreview.example.com to /etc/hosts, and have it point to the output of minikube ip.
  • Create preroll and postroll templates, and download them to minikube in the location that the example config file suggests. Hint: minikube ssh has wget.
  • Store your raw recorded assets under /mnt/vda1/inputdata, using the format you specified for the $inputglob and $parse_re configuration values.
  • Profit!

This doesn't explain how to add a schedule to the database. My next big project (which probably won't happen until after the next FOSDEM) is to add a more advanced administrator's interface, so that you can just log in and add things from there. For now though, you have to run kubectl port-forward svc/sreview-database 5432, and then use psql to localhost to issue SQL commands. Yes, that sucks.
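
For example (sreview as user and database name is a placeholder guess; use whatever your master.yaml configures):

kubectl port-forward svc/sreview-database 5432 &
psql -h localhost -p 5432 -U sreview sreview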

Having said that, if you're interested in trying this out, give it a go. Feedback welcome!

(many thanks to the people on the #debian-devel IRC channel for helping me understand how Kubernetes is supposed to work -- wouldn't have worked nearly as nicely without them)

Worse Than Failure: Y2K15

We’re still in the early part of the year, and as little glitches show up from “sliding window” fixes to the Y2K bug, we’re seeing more and more little stories of other date rollover weirdness in our inbox.

Like, for example, the Y2K15 bug, which Encore got surprised by. It feels like date issues are turning into a sports game franchise: new releases of the same thing every year.

A long, long time ago, Encore’s company released a piece of industrial machinery with an embedded controller. It was so long ago and so embedded that things like floating point operations were a little too newfangled and expensive to execute, and memory was at an extreme premium.

The engineer who originally designed the device had a clever solution to storing dates. One byte of EEPROM could be dedicated to storing the last two digits of the year. In RAM, a nibble (4 bits) would then store an offset relative to that base year.

Yes, this had Y2K issues, but that wasn’t really a concern at the time. It also had a rollover issue every 16 years. That also wasn’t really a concern, because it was attached to a giant machine which needed annual service to keep functioning properly. Every few years, the service tech could bring an EEPROM progammer device and flash the base year value in the EEPROM. And if someone missed 16 years worth of service calls, they probably had other problems.
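
The arithmetic of the scheme is easy to sketch. This is a reconstruction in Go for illustration, not the original embedded code:

package main

import "fmt"

func main() {
    // Two EEPROM digits give a base year; a 4-bit nibble in RAM stores
    // an offset from it, so the representable window is 16 years wide.
    baseYear := 2000                 // a never-flashed EEPROM reads as 0x00, i.e. the year 2000
    offset := (2016 - baseYear) % 16 // the nibble wraps: 16 % 16 == 0
    fmt.Println(baseYear + offset)   // prints 2000: the rollover in action
}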

Time passed. Some customers did miss 16 years of service calls. Over time, new features got added. The control interface got an improved LCD. Bluetooth got attached. The networking stack changed. A reporting database got bundled with the product, so all the data being produced by the device could get aggregated and reported on. The way the software interacted with the hardware changed, and it meant that the hardware ran at a lower temperature and could go longer between service calls. But at its core, the chip and the software didn’t change all that much.

In that time, there were also changeovers in the engineering team. People left the company, new engineers joined, documentation languished, never getting updated. Years might pass without anybody touching the software, then suddenly a flurry of customer requests that needed to be patched RIGHT NOW would come through, and anybody who vaguely understood the software got roped in to do the work, then shunted back off to other projects.

On New Year’s Day, 2016, a deluge of tickets started coming in. Encore, as the last person to have touched the software, started picking them up. They all expressed the same problem: the date had rolled over to 2000. The reporting database was confused, the users were confused, and even if they tried to set the clock to 2016 manually, it would roll back from 2015 to 2000.

Now, no one at the company, including Encore, actually knew about the date system in use at this point. The support manual did say that rollovers meant the device had gone 16 years without being properly serviced, but some of these customers had brand new devices, less than a year old. And customers with devices older than 16 years weren’t seeing this problem.

Encore investigated, and picked apart how the date handling worked. That, itself, wasn’t the problem. It took a lot more investigation to track down the problem, including going back to the board schematics to trace how various hardware components were connected. After a few hair-on-fire weeks of crisis management, Encore pieced together the series of events as best they could.

Sometime after the year 2000, Bluetooth was added to the device. Something about how the Bluetooth module connected to the other components had broken the flasher software that could update the base year. This meant that the devices had never had their base year set, and simply had a 0 value: 0x00, or the year 2000.

Which meant, for the next 16 years, everything was fine. Techs went out, tried to flash the EEPROM, reset the clock to the correct date, and went about their business, never aware that they hadn’t actually done anything. But come 2016, all of these devices rolled back over to the year 2000.

Encore was able to figure out a script to trick the system into adjusting the output to correct the base year issue, but it also meant many customers had databases crammed with bad data that needed to be adjusted to correct the erroneous year.

After this, Encore’s company released an upgraded version of the system which contained a GPS receiver, so that it could set its date based on that, but a large number of their customers weren’t interested in the upgrade. Encore has already blocked off the first few weeks of 2032 in preparation.


Cryptogram: Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I'm speaking at Indiana University Bloomington on January 30, 2020.
  • I'll be at RSA Conference 2020 in San Francisco. On Wednesday, February 26, at 2:50 PM, I'll be part of a panel on "How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei." On Thursday, February 27, at 9:20 AM, I'm giving a keynote on "Hacking Society."
  • I'm speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.

Krebs on Security: Patch Tuesday, January 2020 Edition

Microsoft today released updates to plug 50 security holes in various flavors of Windows and related software. The patch batch includes a fix for a flaw in Windows 10 and server equivalents of this operating system that prompted an unprecedented public warning from the U.S. National Security Agency. This month also marks the end of mainstream support for Windows 7, a still broadly-used operating system that will no longer be supplied with security updates.

As first reported Monday by KrebsOnSecurity, Microsoft addressed a severe bug (CVE-2020-0601) in Windows 10 and Windows Server 2016/19 reported by the NSA that allows an attacker to spoof the digital signature tied to a specific piece of software. Such a weakness could be abused by attackers to make malware appear to be a benign program that was produced and signed by a legitimate software company.

An advisory (PDF) released today by the NSA says the flaw may have far more wide-ranging security implications, noting that the “exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities.”

“NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable,” the advisory continues. “The consequences of not patching the vulnerability are severe and widespread.”

Matthew Green, an associate professor in the computer science department at Johns Hopkins University, said the flaw involves an apparent implementation weakness in a component of recent Windows versions responsible for validating the legitimacy of authentication requests for a panoply of security functions in the operating system.

Green said attackers can use this weakness to impersonate everything from trusted Web sites to the source of software updates for Windows and other programs.

“Imagine if I wanted to pick the lock in your front door,” Green analogized. “It might be hard for me to come up with a key that will open your door, but what if I could tamper with or present both the key and the lock at the same time?”

Kenneth White, security principal at the software company MongoDB, equated the vulnerability to a phone call that gets routed to a party you didn’t intend to reach.

“You pick up the phone, dial a number and assume you’re talking to your bank or Microsoft or whomever, but the part of the software that confirms who you’re talking to is flawed,” White said. “That’s pretty bad, especially when your system is saying download this piece of software or patch automatically and it’s being done in the background.”

Both Green and White said it likely will be a matter of hours or days before security researchers and/or bad guys work out ways to exploit this bug, given the stakes involved. Indeed, already this evening KrebsOnSecurity has seen indications that people are teasing out such methods, which will likely be posted publicly online soon.

According to security vendor Qualys, only eight of the 50 flaws fixed in today’s patch roundup from Microsoft earned the company’s most dire “critical” rating, a designation reserved for bugs that can be exploited remotely by malware or miscreants to seize complete control over the target computer without any help from users.

Once again, some of those critical flaws include security weaknesses in the way Windows implements Remote Desktop connections, a feature that allows systems to be accessed, viewed and controlled as if the user was seated directly in front of the remote computer. Other critical patches include updates for the Web browsers and Web scripting engines built into Windows, as well as fixes for ASP.NET and the .NET Framework.

The security fix for the CVE-2020-0601 bug and others detailed in this post will be offered to Windows users as part of a bundle of patches released today by Microsoft. To see whether any updates are available for your Windows computer, go to the Start menu and type “Windows Update,” then let the system scan for any available patches.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re not losing your mind when the odd buggy patch causes problems booting the system. So do yourself a favor and backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Today also marks the last month in which Microsoft will ship security updates for Windows 7 home/personal users. I count myself among some 30 percent of Windows users who still like and (ab)use this operating system in one form or another, and am sad that this day has come to pass. But if you rely on this OS for day-to-day use, it’s probably time to think about upgrading to something newer.

That might be a computer with Windows 10. Or maybe you have always wanted that shiny MacOS computer. If cost is a primary motivator and the user you have in mind doesn’t do much with the system other than browsing the Web, perhaps a Chromebook or an older machine with a recent version of Linux is the answer. Whichever system you choose, it’s important to pick one that fits the owner’s needs and provides security updates on an ongoing basis.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.


Planet Debian: Enrico Zini: Raspberry Pi 4: force video mode at boot

Testing himblick automatic media replication

This is part of a series of posts on the design and technical steps of creating Himblick, a digital signage box based on the Raspberry Pi 4.

Another surprise hits us at the last moment: if the system boots without an HDMI monitor plugged in, no framebuffer device is ever created, X will not start, and lightdm will give up after some tries; even if one plugs in a monitor afterwards, it will stay blank until a reboot or some kind of manual intervention.

As a workaround, one can configure the bootloader to force a specific HDMI configuration. This post documents how we did it.


Find out what video mode one needs

We plugged the target monitor into a laptop and ran xrandr to see the selection of video modes:

…
   1920x1080     60.00*+  50.00    59.94    30.00    25.00    24.00    29.97    23.98
   1920x1080i    60.00    50.00    59.94
…

Then we looked up the video mode in the hdmi_mode table, for DMT video modes, in the Video options in config.txt documentation.

Then, since the Raspberry Py 4 has two HDMI outputs, one can append :0 or :1 to each video option to select the output for which it applies.

The resulting bit of config.txt that did the trick for us was this:

# Pretend that a monitor is attached on HDMI0
hdmi_force_hotplug=1:0
# Pretend that the monitor is a monitor and not a TV
hdmi_group=2:0
# Pretend that the monitor has resolution 1920x1080
hdmi_mode=82:0

With that X started, but for some reason it started with a different (lower) monitor resolution. Thankfully, a call to xrandr on startup fixed that too, and now everything works as expected whether the system boots with a monitor attached or not.
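
The startup fix amounts to forcing the mode again once X is up, something along these lines (the output name HDMI-1 is an assumption; use whatever xrandr reports, and our exact invocation may differ):

xrandr --output HDMI-1 --mode 1920x1080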

Cryptogram: USB Cable Kill Switch for Laptops

BusKill is designed to wipe your laptop (Linux only) if it is snatched from you in a public place:

The idea is to connect the BusKill cable to your Linux laptop on one end, and to your belt, on the other end. When someone yanks your laptop from your lap or table, the USB cable disconnects from the laptop and triggers a udev script [1, 2, 3] that executes a series of preset operations.

These can be something as simple as activating your screensaver or shutting down your device (forcing the thief to bypass your laptop's authentication mechanism before accessing any data), but the script can also be configured to wipe the device or delete certain folders (to prevent thieves from retrieving any sensitive data or accessing secure business backends).
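
Conceptually, the udev side can be as small as a single rule. A purely illustrative sketch (not BusKill's actual rule; the vendor and model IDs are placeholders for your own cable's, and locking sessions is just one conservative choice of action):

ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="ffff", ENV{ID_MODEL_ID}=="0001", RUN+="/usr/bin/loginctl lock-sessions"

Dropped into /etc/udev/rules.d/, a rule like this locks every session the moment the matching USB device disappears; wiping or shutting down would just be a different RUN command.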

Clever idea, but I -- and my guess is most people -- would be much more likely to stand up from the table, forgetting that the cable was attached, and yanking it out. My problem with pretty much all systems like this is the likelihood of false alarms.

Slashdot article.

EDITED TO ADD (1/14): There are Bluetooth devices that will automatically encrypt a laptop when the device isn't in proximity. That's a much better interface than a cable.

Planet Debian: Jonathan Dowland: data-types for representing stream-processing programs

This year I want to write much more about my PhD work on my blog, and here's my first effort. Most of this material has been languishing as a draft for over a year, so it's past time to get it out!


1 + 2

As part of my PhD work, I've been looking at data structures for representing stream-processing programs. The intention for our system is to take a user-supplied stream-processing program, rewrite it in order to alter its behaviour and partition it up into sub-programs which could be deployed and executed on different computers, connected together via TCP/IP.


1 * 2

To help familiarise myself with the existing system, when I started working on this I began to explore different ways of representing both a stream-processing program and a set of interconnected, partitioned programs. Graph data structures seem like a natural fit for these, with stream-processing programs as a graph and interconnected programs as a graph-of-graphs1.


1 * (2 + 3)

There are a number of different graph libraries for Haskell. The most common approach they use for representation is "tabular": lists of edges as pairs of vertices, or similar. This isn't the only approach. One of the older, more established libraries — fgl — uses inductive types. But the one I have initially settled on is Algebra.Graph, which defines an algebra of graphs with which you can construct your instances2.

The USP for Algebra.Graph is that the four provided constructors are all total functions, so certain types of invalid graph are impossible to represent with the library (such as those where an edge does not point to a vertex).

The four basic constructors are3:

  • Vertex x, a single vertex, containing x
  • Overlay x y, which overlays one graph upon another
  • Connect x y, which connects all the vertices from Graph x to all of the vertices in Graph y.
  • Empty, for an empty graph

The Graph type implements the Num type-class, so Overlay can be abbreviated to + and Connect to *. I've included some example graph definitions, encoded using + and * for brevity, and images of their corresponding renderings within this blog post.

I didn't perform an exhaustive search — nor evaluation — of all the available graph libraries. There's no definitive "right" answer to the question of which to choose: the graphs I will be dealing with are relatively small, so raw performance is not a major consideration.

So, what does a stream-processing program look like, encoded in this way? Here's a real example of a simple 5-node path graph (from here), simplified a little for clarity:

λ> foldg Empty (Vertex . vertexId) Overlay Connect graph
Overlay (Connect (Vertex 1) (Vertex 2)) (Overlay (Connect (Vertex 2)
(Vertex 3)) (Overlay (Connect (Vertex 3) (Vertex 4)) (Connect (Vertex 4)
(Vertex 5))))

Rendering it graphically is more clear:

simple 5-node stream graph


  1. Graphs are not the only data-type that could be used, of course. I've started out using a graph representation in order to bootstrap the implementation and get further along with a proof-of-concept, but there are shortcomings that might be addressed by other approaches. I'll write more about those in another blog post.
  2. By coincidence, Andrey Mokhov, the author of Algebra.Graph was a Senior Lecturer at Newcastle University, where I am a student, and was also co-author of a draft paper that was responsible for me getting interested in pursuing this work in the first place. Later, Andrey briefly became my second supervisor, but has now moved on to work for Jane Street. He remains a visiting fellow at Newcastle.
  3. Different variants of the grammar can vary these constructors to achieve different results. For example, you can forbid empty graphs by removing the Empty constructor. An adjustment to the types is made to support edge-labelling.

Worse Than Failure: Representative Line: Gormless and Gone

There’s always a hope that in the future, our code will be better. Eventually, we won’t be dealing with piles of crufty legacy code and unprepared programmers and business users who don’t understand how clicking works. It’s 2020: we officially live in the future. Things aren’t better.

Duane works in Go, and has a piping hot “Representative Line” written in 2020. If, like me, you don’t quite know Go, it still looks pretty terrible at first glance:

db.Raw("update backlog set id = ? where id = ?;", job_id, job_id).Row()

The SQL query is… a head scratcher, for sure. id is the primary key in this case, and as a general rule, updating the primary key in a relational database is not a good idea: it is the identity of the record, and if it’s used in relationships you can have weird cascading failures. But it’s okay in this case, since we’re setting the id equal to job_id where id already equals job_id, which gives us a nice NOP.

Theoretically, this might cause some triggers to fire, but that’s its own WTF.

There are other problems here, if you know a little bit about Go. First, db is a “GORM” object- Go’s ORM layer. If you just want to update a single object, using the ORM layer directly is probably cleaner and more readable. But if you do want to execute raw SQL that returns no results, like an update, the correct method to use is Exec, not Raw. Raw + Row is used when you intend to capture the results.

Duane adds: “The return result from Row() isn’t assigned to a variable. So this line ignores any output that Row() might have had, including any errors.”
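
For what it's worth, a corrected call might look something like this. This is a sketch assuming the pre-v2 GORM API ("github.com/jinzhu/gorm") and assuming the no-op update really is wanted:

import "github.com/jinzhu/gorm" // assumed: the pre-v2 GORM import path

// touchBacklogRow is what the representative line probably wanted: Exec is
// the method for statements that return no rows, and the returned *gorm.DB
// carries any error instead of silently dropping it.
func touchBacklogRow(db *gorm.DB, jobID int) error {
    res := db.Exec("update backlog set id = ? where id = ?;", jobID, jobID)
    return res.Error
}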

Duane also adds: “This particular programmer is no longer working for us for some reason.”


Planet Debian: Russ Allbery: New year's haul

Accumulated book purchases for the past couple of months. A rather eclectic mix of stuff.

Becky Albertalli — Simon vs. the Homo Sapiens Agenda (young adult)
Ted Chiang — Exhalation (sff collection)
Tressie McMillan Cottom — Thick (nonfiction)
Julie E. Czerneda — This Gulf of Time and Stars (sff)
Katharine Duckett — Miranda in Milan (sff)
Sarah Gailey — Magic for Liars (sff)
Carolyn Ives Gilman — Halfway Human (sff)
Rachel Hartman — Seraphina (sff)
Isuna Hasekura — Spice and Wolf, Volume 1 (sff)
Elizabeth Lim — Spin the Dawn (sff)
Sam J. Miller — Blackfish City (sff)
Tamsyn Muir — Gideon the Ninth (sff)
Sylvain Neuvel — The Test (sff)
K.J. Parker — Sixteen Ways to Defend a Walled City (sff)
Caroline Criado Perez — Invisible Women (nonfiction)
Delia Sherman — The Porcelain Dove (sff)
Connie Willis — All About Emily (sff)

Several sales on books that I wanted to read for various reasons, several recommendations, one book in an ongoing series, and one earlier book in a series that I want to read.

We'll see if, in 2020, I can come closer to reading all the books that I buy in roughly the same year in which I buy them.

Cory Doctorow: Inaction is a form of action

In my latest podcast (MP3), I read my latest Locus column, Inaction is a Form of Action, where I discuss how the US government’s unwillingness to enforce its own anti-monopoly laws has resulted in the dominance of a handful of giant tech companies who get to decide what kind of speech is and isn’t allowed — that is, how the USG’s complicity in the creation of monopolies allows for a kind of government censorship that somehow does not violate the First Amendment.

We’re often told that “it’s not censorship when a private actor tells you to shut up on their own private platform” — but when the government decides not to create any public spaces (say, by declining to create publicly owned internet infrastructure) and then allows a handful of private companies to dominate the privately owned world of online communications, then those companies’ decisions about who may speak and what they may say become a form of government speech regulation — albeit one at arm’s length.

I don’t think that the solution to this is regulating the tech platforms so they have better speech rules — I think it’s breaking them up and forcing them to allow interoperability, so that their speech rules no longer dictate what kind of discourse we’re allowed to have.

Imagine two different restaurants: one prohibits any discussion of any subject the management deems “political” and the other has no such restriction. It’s easy to see that we’d say that you have more right to freely express yourself in the Anything Goes Bistro than in the No Politics at the Table Diner across the street.

Now, the house rules at the No Politics at the Table Diner have implications for free speech, but these are softened by the fact that you can always eat at the Anything Goes Bistro, and, of course, you can always talk politics when you’re not at a restaurant at all: on the public sidewalk (where the First Amendment shields you from bans on political talk), in your own home, or even in the No Politics Diner, assuming you can text covertly under the tablecloth when the management isn’t looking.

Depending on your town and its dining trends, the house rules at The No Politics Diner might matter more or less. If No Politics has the best food in town and everywhere else has a C rating from the health department, then the No Politics Diner’s rules matter a lot more than if No Politics is a greasy spoon that no one eats in if they can get a table elsewhere.

What happens if some deep-pocketed private-equity types hit on a strategy to turn The No Politics Diner into a citywide phenomenon? They merge The No Politics Diner with all the other restaurants in town, spending like drunken sailors. Once that’s accomplished, the NPD cartel goes after the remaining competition: any holdouts, and anyone who tries to open a rival, are given the chance to sell out cheap, or be driven out of business. NPD has lots of ways to do this: for example, they’ll open a rival on the same block and sell food below cost to drive the refuseniks out of business (they’re not above sending spies to steal their recipes, either!). Even though some people resent NPD and want to talk politics, there aren’t enough people willing to pay a premium for their dinner to keep the Anything Goes Bistro in business.

MP3

,

Krebs on SecurityCryptic Rumblings Ahead of First 2020 Patch Tuesday

Sources tell KrebsOnSecurity that Microsoft Corp. is slated to release a software update on Tuesday to fix an extraordinarily serious security vulnerability in a core cryptographic component present in all versions of Windows. Those sources say Microsoft has quietly shipped a patch for the bug to branches of the U.S. military and to other high-value customers/targets that manage key Internet infrastructure, and that those organizations have been asked to sign agreements preventing them from disclosing details of the flaw prior to Jan. 14, the first Patch Tuesday of 2020.

According to sources, the vulnerability in question resides in a Windows component known as crypt32.dll, a Windows module that Microsoft says handles “certificate and cryptographic messaging functions in the CryptoAPI.” The Microsoft CryptoAPI provides services that enable developers to secure Windows-based applications using cryptography, and includes functionality for encrypting and decrypting data using digital certificates.

A critical vulnerability in this Windows component could have wide-ranging security implications for a number of important Windows functions, including authentication on Windows desktops and servers, the protection of sensitive data handled by Microsoft’s Internet Explorer/Edge browsers, as well as a number of third-party applications and tools.

Equally concerning, a flaw in crypt32.dll might also be abused to spoof the digital signature tied to a specific piece of software. Such a weakness could be exploited by attackers to make malware appear to be a benign program that was produced and signed by a legitimate software company.

This component was introduced into Windows more than 20 years ago — back in Windows NT 4.0. Consequently, all versions of Windows are likely affected (including Windows XP, which is no longer being supported with patches from Microsoft).

Microsoft has not yet responded to requests for comment. However, KrebsOnSecurity has heard rumblings from several sources over the past 48 hours that this Patch Tuesday (tomorrow) will include a doozy of an update that will need to be addressed immediately by all organizations running Windows.

Update 7:49 p.m. ET: Microsoft responded, saying that it does not discuss the details of reported vulnerabilities before an update is available. The company also said it does “not release production-ready updates ahead of regular Update Tuesday schedule.” “Through our Security Update Validation Program (SUVP), we release advance versions of our updates for the purpose of validation and interoperability testing in lab environments,” Microsoft said in a written statement. “Participants in this program are contractually disallowed from applying the fix to any system outside of this purpose and may not apply it to production infrastructure.”

Original story:

Will Dormann, a security researcher who authors many of the vulnerability reports for the CERT Coordination Center (CERT-CC), tweeted today that “people should perhaps pay very close attention to installing tomorrow’s Microsoft Patch Tuesday updates in a timely manner. Even more so than others. I don’t know…just call it a hunch?” Dormann declined to elaborate on that teaser.

It could be that the timing and topic here (cryptography) are nothing more than a coincidence, but KrebsOnSecurity today received a heads up from the U.S. National Security Agency (NSA) stating that NSA’s Director of Cybersecurity Anne Neuberger is slated to host a call on Jan. 14 with the news media that “will provide advanced notification of a current NSA cybersecurity issue.”

The NSA’s public affairs folks did not respond to requests for more information on the nature or purpose of the discussion. The invitation from the agency said only that the call “reflects NSA’s efforts to enhance dialogue with industry partners regarding its work in the cybersecurity domain.”

Stay tuned for tomorrow’s coverage of Patch Tuesday and possibly more information on this particular vulnerability.

Update, Jan. 14, 9:20 a.m. ET: The NSA’s Neuberger said in a media call this morning that the agency did indeed report this vulnerability to Microsoft, and that this was the first time Microsoft will have credited NSA for reporting a security flaw. Neuberger said NSA researchers discovered the bug in their own research, and that Microsoft’s advisory later today will state that Microsoft has seen no active exploitation of it yet.

According to the NSA, the problem exists in Windows 10 and Windows Server 2016. Asked why the NSA was focusing on this particular vulnerability, Neuberger said the concern was that it “makes trust vulnerable.” The agency declined to say when it discovered the flaw, and said it would wait until Microsoft releases a patch later today before discussing further details of the vulnerability.

Update, 1:47 p.m. ET: Microsoft has released updates for this flaw (CVE-2020-0601). Their advisory is here. The NSA’s writeup (PDF) includes quite a bit more detail, as does the advisory from CERT.

Planet DebianEnrico Zini: Creating a Raspberry PI SD from tar files

Pile of Raspberry Pi 4 boxes

This is part of a series of posts on the design and technical steps of creating Himblick, a digital signage box based on the Raspberry Pi 4.

Provisioning an SD card starting from the official raspbian-lite image is getting quite slow, since there are a lot of packages to install.

It would be significantly faster if we could take an SD card, partition it from scratch, and then untar the boot and rootfs contents into the new partitions.

Here's how.


Partitioning an SD card from scratch

We can do almost everything with pyparted.

See this LinuxVoice article for a detailed introduction to pyparted, and the C parted documentation for some low-level reference.

Here is the pyparted recipe for the SD card, plus a media partition at the end:

def partition_reset(self, dev: Dict[str, Any]):
    """
    Repartition the SD card from scratch
    """
    try:
        import parted
    except ModuleNotFoundError:
        raise Fail("please install python3-parted")

    device = parted.getDevice(dev["path"])

    device.clobber()
    disk = parted.freshDisk(device, "msdos")

    # Add 256M fat boot
    optimal = device.optimumAlignment
    constraint = parted.Constraint(
        startAlign=optimal,
        endAlign=optimal,
        startRange=parted.Geometry(
            device=device,
            start=parted.sizeToSectors(4, "MiB", device.sectorSize),
            end=parted.sizeToSectors(16, "MiB", device.sectorSize)),
        endRange=parted.Geometry(
            device=device,
            start=parted.sizeToSectors(256, "MiB", device.sectorSize),
            end=parted.sizeToSectors(512, "MiB", device.sectorSize)),
        minSize=parted.sizeToSectors(256, "MiB", device.sectorSize),
        maxSize=parted.sizeToSectors(260, "MiB", device.sectorSize))
    geometry = parted.Geometry(
        device,
        start=0,
        length=parted.sizeToSectors(256, "MiB", device.sectorSize),
    )
    geometry = constraint.solveNearest(geometry)
    boot = parted.Partition(
            disk=disk, type=parted.PARTITION_NORMAL, fs=parted.FileSystem(type='fat32', geometry=geometry),
            geometry=geometry)
    boot.setFlag(parted.PARTITION_LBA)
    disk.addPartition(partition=boot, constraint=constraint)

    # Add 4G ext4 rootfs
    constraint = parted.Constraint(
        startAlign=optimal,
        endAlign=optimal,
        startRange=parted.Geometry(
            device=device,
            start=geometry.end,
            end=geometry.end + parted.sizeToSectors(16, "MiB", device.sectorSize)),
        endRange=parted.Geometry(
            device=device,
            start=geometry.end + parted.sizeToSectors(4, "GiB", device.sectorSize),
            end=geometry.end + parted.sizeToSectors(4.2, "GiB", device.sectorSize)),
        minSize=parted.sizeToSectors(4, "GiB", device.sectorSize),
        maxSize=parted.sizeToSectors(4.2, "GiB", device.sectorSize))
    geometry = parted.Geometry(
        device,
        start=geometry.start,
        length=parted.sizeToSectors(4, "GiB", device.sectorSize),
    )
    geometry = constraint.solveNearest(geometry)
    rootfs = parted.Partition(
            disk=disk, type=parted.PARTITION_NORMAL, fs=parted.FileSystem(type='ext4', geometry=geometry),
            geometry=geometry)
    disk.addPartition(partition=rootfs, constraint=constraint)

    # Add media partition on the rest of the disk
    constraint = parted.Constraint(
        startAlign=optimal,
        endAlign=optimal,
        startRange=parted.Geometry(
            device=device,
            start=geometry.end,
            end=geometry.end + parted.sizeToSectors(16, "MiB", device.sectorSize)),
        endRange=parted.Geometry(
            device=device,
            start=geometry.end + parted.sizeToSectors(16, "MiB", device.sectorSize),
            end=disk.maxPartitionLength),
        minSize=parted.sizeToSectors(4, "GiB", device.sectorSize),
        maxSize=disk.maxPartitionLength)
    geometry = constraint.solveMax()
    # Create media partition
    media = parted.Partition(
            disk=disk, type=parted.PARTITION_NORMAL,
            geometry=geometry)
    disk.addPartition(partition=media, constraint=constraint)

    disk.commit()

Setting MBR disk identifier

So far so good, but /boot/cmdline.txt has root=PARTUUID=6c586e13-02, and we need to change the MBR disk identifier to match:

# Fix disk identifier to match what is in cmdline.txt
with open(dev["path"], "r+b") as fd:
    buf = bytearray(512)
    fd.readinto(buf)
    # The 32-bit MBR disk identifier lives at offset 0x1B8 and is
    # stored little-endian, so 6c586e13 becomes 13 6e 58 6c on disk
    buf[0x1B8] = 0x13
    buf[0x1B9] = 0x6e
    buf[0x1BA] = 0x58
    buf[0x1BB] = 0x6c
    fd.seek(0)
    fd.write(buf)
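
To double-check the write, here is a small verification sketch; it is my addition, not part of the original himblick code:

import struct

with open(dev["path"], "rb") as fd:
    fd.seek(0x1B8)
    # The identifier is stored little-endian, hence the "<I" format
    (disk_id,) = struct.unpack("<I", fd.read(4))

# Formatted as hex, it should match the PARTUUID in cmdline.txt
assert "%08x" % disk_id == "6c586e13"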

Formatting the partitions

Formatting is reasonably straightforward, and although we've tried to match the way raspbian formats partitions, it may be that not all of these options are needed:

# Format boot partition with 'boot' label
run(["mkfs.fat", "-F", "32", "-n", "boot", disk.partitions[0].path])

# Format rootfs partition with 'rootfs' label
run(["mkfs.ext4", "-F", "-L", "rootfs", "-O", "^64bit,^huge_file,^metadata_csum", disk.partitions[1].path])

# Format exfatfs partition with 'media' label
run(["mkexfatfs", "-n", "media", disk.partitions[2].path])

Now the SD card is ready for a simple untarring of the boot and rootfs partition contents.
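
The untarring itself is not shown in this post. As a hedged sketch (the tar file names and mount points are illustrative assumptions, and run() is a minimal stand-in for the helper used above), it could look like this:

import subprocess

def run(cmd):
    # Minimal equivalent of the helper used in the mkfs commands above
    subprocess.run(cmd, check=True)

# Mount the freshly formatted boot and rootfs partitions
run(["mount", disk.partitions[0].path, "/mnt/boot"])
run(["mount", disk.partitions[1].path, "/mnt/rootfs"])

# --numeric-owner preserves the uid/gid stored in the tarballs,
# which matters when they were created on a different machine
run(["tar", "--numeric-owner", "-C", "/mnt/boot", "-xf", "boot.tar"])
run(["tar", "--numeric-owner", "-C", "/mnt/rootfs", "-xf", "rootfs.tar"])

run(["umount", "/mnt/boot"])
run(["umount", "/mnt/rootfs"])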

Useful commands

These commands were useful in finding out differences between how the original Raspbian image partitions were formatted, and how we were formatting them:

sudo minfo -i /dev/sdb1 ::
sudo tune2fs -l /dev/sdb2

Krebs on SecurityPhishing for Apples, Bobbing for Links

Anyone searching for a primer on how to spot clever phishing links need look no further than those targeting customers of Apple, whose brand by many measures remains among the most-targeted. Past stories here have examined how scammers working with organized gangs try to phish iCloud credentials from Apple customers who have a mobile device that is lost or stolen. Today’s piece looks at the well-crafted links used in some of these lures.

KrebsOnSecurity heard from a reader in South Africa who recently received a text message stating his lost iPhone X had been found. The message addressed him by name and said he could view the location of his wayward device by visiting the link https://maps-icloud[.]com — which is most definitely not a legitimate Apple or iCloud link and is one of countless domains spoofing Apple’s “Find My” service for locating lost Apple devices.

While maps-icloud[.]com is not a particularly convincing phishing domain, a review of the Russian server where that domain is hosted reveals a slew of far more persuasive links spoofing Apple’s brand. Almost all of these include encryption certificates (they start with “https://”) and begin with the subdomains “apple.” or “icloud.” followed by a domain name starting with “com-”.

Here are just a few examples (the phishing links in this post have been hobbled with brackets to keep them from being clickable):

apple.com-support[.]id
apple.com-findlocation[.]id
apple.com-sign[.]in
apple.com-isupport[.]in
icloud.com-site-log[.]in

Savvy readers here no doubt already know this, but to find the true domain referenced in a link, look to the right of “http(s)://” until you encounter the first forward slash (/). The domain directly to the left of that first slash is the true destination; anything that precedes the second dot to the left of that first slash is a subdomain and should be ignored for the purposes of determining the true domain name.

For instance, in the case of the imaginary link below, example.com is the true destination, not apple.com:

https://www.apple.com.example.com/findmyphone/
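
To make the rule concrete, here is a small Python sketch of my own (not from the article). Note that taking the last two labels is a simplification that ignores multi-label public suffixes such as .co.uk, where a library like publicsuffix2 would be the better tool:

from urllib.parse import urlsplit

def true_domain(link):
    # Everything between "https://" and the first slash is the hostname;
    # the registered domain is (roughly) its last two labels
    host = urlsplit(link).hostname
    return ".".join(host.split(".")[-2:])

print(true_domain("https://www.apple.com.example.com/findmyphone/"))
# prints: example.com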

Of course, any domain can be used as a redirect to any other domain. Case in point: Targets of the phishing domains above who are undecided on whether the link refers to a legitimate Apple site might seek to load the base domain into a Web browser (minus the customization in the remainder of the link after the first forward slash). To assuage such concerns, the phishers in this case will forward anyone visiting those base domains to Apple’s legitimate iCloud login page (icloud.com).

The best advice to sidestep phishing scams is to avoid clicking on links that arrive unbidden in emails, text messages and other mediums. Most phishing scams invoke a temporal element that warns of dire consequences should you fail to respond or act quickly. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

CryptogramArtificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it's time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they're poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated "people" will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It's writing news stories, particularly in sports and finance. It's talking with customers on merchant websites. It's writing convincing op-eds on topics in the news (though there are limitations). And it's being used to bulk up "pink-slime journalism" -- websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There's a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them -- maybe half -- were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn't stand up to even cursory scrutiny.

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won't be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year's Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi "We all have trust in Mohammed bin Salman" tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people's sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos -- sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won't be so easily identified. They'll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they'll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there's commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it'll be domestic political groups. Maybe it'll be the candidates themselves. Most likely, it'll be everybody. The most important lesson from the 2016 election about misinformation isn't that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots -- a proposed California law would require bots to identify themselves -- but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don't have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that'll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we're talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it's becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don't know the effect of that noise on democracy, only that it'll be pernicious, and that it's inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

Worse Than FailureCodeSOD: An Unreal Json Parser

As we've discussed in the past, video game code probably shouldn't be held to the standards of your average WTF: they're operating under wildly different constraints. So, for example, when a popular indie game open sources itself, and people find all sorts of horrors in the codebase: hey, the game shipped and made money. This isn't life or death stuff.

It's a little different when you're building the engine. You're not just hacking together whatever you need to make your product work, but putting together a reusable platform to make other people's products work.

Rich D, who previously shared some horrors he found in the Unreal engine, recently discovered that UnrealScript has a useful sounding JsonObject. Since Rich is thinking in terms of mods, being able to read/write JSON to handle mod configuration is useful, but anyone designing a game might have many good reasons to want JSON documents.

The file starts promisingly with:

class JsonObject extends Object
    native;

/// COMMENT!!
…

It's good that someone put that comment there, because I assume it was meant as a reminder: comment this code. And the comments start giving us hints of some weird things:

/**
 * Looks up a value with the given key in the ObjectMap. If it was a number
 * in the Json string, this will be prepended with \# (see below helpers)
 *
 * @param Key The key to search for
 *
 * @return A string value
 */
native function string GetStringValue(const string Key);

The method GetStringValue returns a string from JSON, but if the string is a number, it… puts a \# in front of it? Why?

function int GetIntValue(const string Key)
{
    local string Value;

    // look up the key, and skip the \#
    Value = Mid(GetStringValue(Key), 2);

    return int(Value);
}

Oh… that's why. So that we can ignore it. There's a similar version of this method for GetFloatValue, and GetBoolValue.

So, how do those \#s get prepended? Well, as it turns out, there are also set methods:

function SetIntValue(const string Key, int Value)
{
    SetStringValue(Key, "\\#" $ Value);
}

In addition to these methods, there are also native methods (i.e., methods which bind to native code, and thus don't have an UnrealScript body) to encode/decode JSON:

/**
 * Encodes an object hierarchy to a string suitable for sending over the web
 *
 * @param Root The toplevel object in the hierarchy
 *
 * @return A well-formatted Json string
 */
static native function string EncodeJson(JsonObject Root);

/**
 * Decodes a Json string into an object hierarchy (all needed objects will be created)
 *
 * @param Str A Json string (probably received from the web)
 *
 * @return The root object of the resulting hierarchy
 */
static native function JsonObject DecodeJson(const string Str);

Guided by this code, Rich went on to do a few tests:

  • Calling SetStringValue with a string that happens to start with \# causes EncodeJson to produce malformed output.
  • Calling SetStringValue with any string that might require escape characters (newlines, backslashes, etc.) will not escape those characters, producing malformed output.
  • Which means that the output of EncodeJson cannot reliably be parsed by DecodeJson, as sometimes the output is invalid
  • Sometimes, when DecodeJson receives an invalid document, instead of throwing an error, it just crashes the entire game

Rich has wisely decided not to leverage this object, for now.
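
For contrast, the round-trip property that a JSON encoder is supposed to guarantee, and that EncodeJson/DecodeJson evidently do not, takes only a few lines to demonstrate in Python:

import json

# Strings containing backslashes, quotes and newlines must survive
# an encode/decode cycle intact
value = 'starts with \\# and has a "quote"\nand a newline'
encoded = json.dumps({"Key": value})
assert json.loads(encoded)["Key"] == value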

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

,

Planet DebianEnrico Zini: Perception links

You're not going to believe what I'm about to tell you - The Oatmeal
propaganda comics perception archive.org
This is a comic about the backfire effect.
The Barnum effect, also called the Forer effect, or less commonly the Barnum-Forer effect, is a common psychological phenomenon whereby individuals give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically to them, that are in fact vague and general enough to apply to a wide range of people. This effect can provide a partial explanation for the widespread acceptance of some paranormal beliefs and practices, such as astrology, fortune telling, aura reading, and some types of personality tests.
We only use 10% of our brain. We evolved from chimps. Dairy foods increase mucous. Pfffff! These and over 45 other myths & misconceptions debunked. Interactively.
Psychology's reproducibility problem
“Slim by Chocolate!” the headlines blared. A team of German researchers had found that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day. It made the front page of Bild, Europe’s largest daily newspaper, just beneath their update about the Germanwings crash. From there, it ricocheted around the internet and beyond, making news in more than 20 countries and half a dozen languages. It was discussed on television news shows. It appeared in glossy print, most recently in the June issue of Shape magazine (“Why You Must Eat Chocolate Daily,” page 128). Not only does chocolate accelerate weight loss, the study found, but it leads to healthier cholesterol levels and overall increased well-being. The Bild story quotes the study’s lead author, Johannes Bohannon, Ph.D., research director of the Institute of Diet and Health: “The best part is you can buy chocolate everywhere.”
For the next two weeks, a Tube station in South London will create a rip in the space time continuum. The Citizens Advertising Takeover Service has replaced 68 adverts in Clapham Common with pictures of cats.

Planet DebianEnrico Zini: Making a frozen raspbian repository

This is part of a series of posts on the design and technical steps of creating Himblick, a digital signage box based on the Raspberry Pi 4.

A month later, we tried building an himblick image and it stopped playing video with vlc, only showing a black screen.

Although we're working with Raspbian Buster, and Debian Buster is stable, it looks like Raspbian Buster is not stable at all.

Time to learn how to freeze a partial raspbian mirror.


Looking at at the output of dpkg -l in a himblick image of a month ago, we see that vlc changed in Raspbian from version 3.0.8-0+deb10u1+rpt1 to version 3.0.8-0+deb10u1+rpt7. The former works perfectly, the latter only shows black when running with --fullscreen on the Raspberry Pi 4.

Here is the relevant changelog, for reference:

vlc (3.0.8-0+deb10u1+rpt7) buster; urgency=medium

  * Apply MMAL patch 16

vlc (3.0.8-0+deb10u1+rpt6) buster; urgency=medium

  * Apply MMAL patch 15

vlc (3.0.8-0+deb10u1+rpt5) buster; urgency=medium

  * Disable vdpau, libva, aom
  * Enable dav1d

vlc (3.0.8-0+deb10u1+rpt4) buster; urgency=medium

  * Apply MMAL patch 14

vlc (3.0.8-0+deb10u1+rpt3) buster; urgency=medium

  * Apply MMAL patch 13

vlc (3.0.8-0+deb10u1+rpt2) buster; urgency=medium

  * Apply MMAL patch 12

vlc (3.0.8-0+deb10u1+rpt1) buster; urgency=medium

  * Apply MMAL patch 10

We can use aptly to set up a Debian mirror that has the parts of raspbian that we need, plus the working vlc.

aptly by default really wants to clutter the home directory with ~/.aptly and ~/.aptly.conf. So as a first step, we create a .aptly.conf in the himblick repo, pointing to a rootDir next to it:

{
  "rootDir": ".aptly",
  "downloadConcurrency": 4,
  "downloadSpeedLimit": 0,
  "architectures": ["armhf"],
  "dependencyFollowSuggests": false,
  "dependencyFollowRecommends": false,
  "dependencyFollowAllVariants": false,
  "dependencyFollowSource": false,
  "dependencyVerboseResolve": false,
  "gpgDisableSign": false,
  "gpgDisableVerify": false,
  "gpgProvider": "gpg",
  "downloadSourcePackages": false,
  "skipLegacyPool": true,
  "ppaDistributorID": "ubuntu",
  "ppaCodename": "",
  "skipContentsPublishing": false,
  "FileSystemPublishEndpoints": {},
  "S3PublishEndpoints": {},
  "SwiftPublishEndpoints": {}
}

Note that the Raspberry Pi 4 would be aarch64, but raspbian runs armhf binaries on it, to avoid maintaining packages for two different architectures.

Next to .aptly.conf, we put the /etc/apt/trusted.gpg from a raspbian image.

Then we setup a raspbian mirror of the bits that we need. APT sources of raspbian pull from two different repositories, so let's mirror them both:

aptly --config=.aptly.conf --keyring=`pwd`/trusted.gpg mirror create raspbian http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
aptly --config=.aptly.conf --keyring=`pwd`/trusted.gpg mirror create debian http://archive.raspberrypi.org/debian/ buster main

# Packages we need for himblick
NEEDED="Priority (required)|ansible|dracut|…"

# Packages that would normally come with raspbian. This can be generated
# by doing a dist-upgrade of a built himblick using Raspbian's repositories,
# followed by dpkg -l to build the package list
IMAGE_PACKAGES="adduser|alsa-utils|apt|apt-listchanges|…"

FILTER="$NEEDED|$IMAGE_PACKAGES"

aptly --config=.aptly.conf mirror edit --dep-verbose-resolve --filter="$FILTER" --filter-with-deps raspbian
aptly --config=.aptly.conf --keyring=`pwd`/trusted.gpg mirror update raspbian

aptly --config=.aptly.conf mirror edit --dep-verbose-resolve --filter="$FILTER" --filter-with-deps debian
aptly --config=.aptly.conf --keyring=`pwd`/trusted.gpg mirror update debian
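
The installed-package list can also be dumped programmatically. Here is a hedged sketch of building that pipe-separated filter string, assuming dpkg-query is run on the built image:

import subprocess

# One installed package name per line
out = subprocess.run(
    ["dpkg-query", "-W", "-f", "${Package}\n"],
    capture_output=True, text=True, check=True,
).stdout

# aptly filters join alternatives with "|"
IMAGE_PACKAGES = "|".join(sorted(out.split()))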

Building the FILTER expression took some iteration, trying provisioning and seeing where it stopped, as aptly's dependency resolver is more approximate than apt's.

Next, we create a local repository with the bits of vlc that we need:

aptly --config=.aptly.conf repo create himblick
aptly --config=.aptly.conf repo add himblick fixed-vlc/*

Then a merged snapshot of the three:

NAME=$(date +%Y%m%d)
aptly --config=.aptly.conf snapshot create raspbian-$NAME from mirror raspbian
aptly --config=.aptly.conf snapshot create debian-$NAME from mirror debian
aptly --config=.aptly.conf snapshot create himblick-$NAME from repo himblick
aptly --config=.aptly.conf snapshot merge $NAME debian-$NAME raspbian-$NAME himblick-$NAME
aptly --config=.aptly.conf publish snapshot $NAME
echo "Published snapshot $NAME"

Finally, aptly --config=.aptly.conf serve brings up a web server with the mirror, that we can now use to build himblick.

We have adapted the provisioning script to use the local mirror, and to restore the original APT sources at the end, so built himblick images can still access the whole of Raspbian if needed.

Until we are able to run Debian stable on the Raspberry Pi 4, at least we can use snapshots to compensate for Raspbian's volatility.

Update: make sure you also disable apt update/upgrade timers. See Himblick one day later for details.

Planet DebianRuss Allbery: Review: Guardians of the West

Review: Guardians of the West, by David Eddings

Series: The Malloreon #1
Publisher: Del Rey
Copyright: April 1987
Printing: October 1991
ISBN: 0-345-35266-1
Format: Mass market
Pages: 438

Technically speaking, many things in this review are mild spoilers for the outcome of The Belgariad, the previous series set in this world. I'm not going to try to avoid that because I think most fantasy readers will assume, and be unsurprised by, various obvious properties of the ending of that type of epic fantasy.

The world has been saved, Garion is learning to be king (and navigate his domestic life, but more on that in a moment), and Errand goes home with Belgarath and Polgara to live the idyllic country life of the child he never was. That lasts a surprisingly long way into the book, with only occasional foreshadowing, before the voice in Garion's head chimes in again, new cryptic prophecies are discovered, and the world is once again in peril.

I can hear some of you already wondering what I'm doing. Yes, after re-reading The Belgariad, I'm re-reading The Malloreon. Yes, this means I'm arguably reading the same series four times. I was going through the process of quitting my job and wrapping up projects and was stressed out of my mind and wanted something utterly predictable and unchallenging that I could just read and enjoy without thinking about. A re-read of Eddings felt perfect for that, and it was.

The Malloreon is somewhat notorious in the world of epic fantasy because the plot... well, I won't say it's the same plot as The Belgariad, although some would, but it has eerie similarities. The overarching plot of The Belgariad is the battle between the Child of Light and the Child of Dark, resolved at the end of Enchanters' End Game. The kickoff of the plot of The Malloreon near the middle of this book is essentially "whoops, there was another prophecy and you have to do this all again." The similarities don't stop there: There's a list of named figures who have to go on the plot journey that's only slightly different from the first series, a mysterious dark figure steals something important to kick off the plot, and of course there is the same "free peoples of the west" versus "dictatorial hordes of the east" basic political structure. (If you're not interested in more of that in your fantasy, I don't blame you a bit and Eddings is not the author to reach for.)

That said, I've always had a soft spot for this series. We've gotten past the introduction of characters and gotten to know an entertaining variety of caricatures, Eddings writes moderately amusing banter, and the characters can be fun if you treat them like talking animals built around specific character traits. Guardians of the West moves faster and is less frustrating than Pawn of Prophecy by far. It also has a great opening section where Errand, rather than Garion, is the viewpoint character.

Errand is possibly my favorite character in this series because he takes the plot about as seriously as I do. He's fearless and calm in the face of whatever is happening, which his adult guardians attribute to his lack of understanding of danger, but which I attribute to him being the only character in the book who realizes that the plot is absurd and pre-ordained and there's no reason to get so worked up about it. He also has a casual, off-hand way of revealing that he has untapped plot-destroying magical powers, which for some reason I find hilarious. I wish the whole book were told from Errand's point of view.

Sadly, two-thirds of it returns to Garion. That part isn't bad, exactly, but it features more of his incredibly awkward and stereotyped relationship with Ce'Nedra, some painful and obvious stupidity around their attempt to have a child, and possibly the stupidest childbirth scene I've ever seen. (Eddings is aiming for humorous in a way that didn't work for me at all.) That's followed by a small war (against conservative religious fanatics; Eddings's interactions with cultural politics are odd and complicated) that wasn't that interesting.

That said, the dry voice in Garion's head was one of my favorite characters in the first series and that's even more true here when he starts speaking again. I like some of what Eddings is doing with prophecy and how it interacts with the plot. I'm also endlessly amused when the plot is pushed forward by various forces telling the main characters what to do next. Normally this is a sign of lazy writing and poor plotting, but Eddings is so delightfully straightforward about it that it becomes oddly metafictional and, at least for me, kind of fun. And more of Errand is always enjoyable.

I can't recommend this series (or Eddings in general). I like it for idiosyncratic reasons and can't defend it as great writing. There is a lot of race-based characterization, sexism, and unconsidered geographic stereotyping (when you lay the world map over a map of Europe, the racism is, uh, kind of blatant, even though Eddings makes relatively even-handed fun of everyone), and while you could say the same for Tolkien, Eddings is not remotely at Tolkien levels of writing in compensation. But Guardians of the West did exactly what I wanted from it when I picked it up, and now part of me wants to finish my re-read, so you may be hearing about the rest of the series.

Followed by King of the Murgos.

Rating: 6 out of 10

,

Planet DebianRomain Perier: Add support for F2FS filesystem to GRUB and initramfs-tools

Hi there,

For those like me who want to change their root filesystem to F2FS, I have enabled support for adding the F2FS module to the signed EFI image of grub in Debian (commit). So the grub EFI image can load its configuration, kernel images and initrd from a /boot that is formatted as F2FS (upstream grub has supported the filesystem since 2.04).

Now that the kernel is loading, it must be able to mount the rootfs. In Debian, a lot of features, such as some filesystems and drivers, are built as modules; this allows the system to boot and work on many different machines without statically building everything into the Linux kernel image. This is why we use an initramfs: it offers a variety of cool features and magically detects some details for you, like loading the btrfs module or your favorite eMMC driver as a module. If you want to use F2FS as the main filesystem on your rootfs, F2FS needs to be added as a base module to initramfs-tools (which handles all the scripts and the magic for your initramfs). It has been done in this commit.


See you!

Planet DebianMarkus Koschany: My Free Software Activities in December 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I started the month by backporting the latest version of minetest to buster-backports.
  • New versions of Springlobby, the single and multiplayer lobby for the Spring RTS engine, and Freeciv (now at 2.6.1) were packaged.
  • I had to remove python-pygccxml as a build-dependency from spring because of the Python 2 removal and there was also another unrelated build failure that got fixed as well.
  • I also released a new version of the debian-games metapackages. A considerable number of games were removed from Debian in the past months, in part due to the ongoing Python 2 removal but also because of inactive maintainers or upstreams. There were also some new games though. Check out the 3.1 changelog for more information. As a consequence of our Python 2 goal, the development metapackage for Python 2 is gone now.

Debian Java

Misc

  • The imlib2 image library was updated to version 1.6.1 and now supports the webp image format.
  • I backported the Thunderbird addon dispmua to Buster and Stretch because the new Thunderbird ESR version had made it unusable.
  • I also updated binaryen, a compiler and library for WebAssembly and asked upstream if they could relax the build-dependency on Git which they did.

Debian LTS

This was my 46th month as a paid contributor and I have been paid to work 16.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

From 23.12.2019 until 05.01.2020 I was in charge of our LTS frontdesk. I investigated and triaged CVE in sudo, shiro, waitress, sa-exim, imagemagick, nss, apache-log4j1.2, sqlite3, lemonldap-ng, libsixel, graphicsmagick, debian-lan-config, xerces-c, libpodofo, vim, pure-ftpd, gthumb, opencv, jackson-databind, pillow, fontforge, collabtive, libhibernate-validator-java, lucene-solr and gpac.

  • DLA-2051-1. Issued a security update for intel-microcode fixing 2 CVE.
  • DLA-2058-1. Issued a security update for nss fixing 1 CVE.
  • DLA-2062-1. Issued a security update for sa-exim fixing 1 CVE.
  • I prepared a security update for tomcat7 by updating to the latest upstream release in the 7.x series. It is pending review by Mike Gabriel at the moment.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my nineteenth month and I have been assigned to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 23.12.2019 until 05.01.2020 and I triaged CVE in sqlite3, libxml2 and nss.
  • ELA-200-2. Issued a security update for intel-microcode.
  • Worked on tomcat7, CVE-2019-12418 and CVE-2019-17563, and finished the patches prepared by Mike Gabriel. We have discovered some unrelated test failures and are currently investigating the root cause of them.
  • Worked on nss, which is required to build OpenJDK 7 and also needed at runtime for the SunEC security provider. I am currently investigating CVE-2019-17023, which was assigned only a few days ago.
  • ELA-206-1. Issued a security update for apache-log4j1.2 fixing 1 CVE.

Thanks for reading and see you next time.

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.73

Laptop Mode Tools 1.73

I am pleased to announce the release of Laptop Mode Tools version 1.73

This release includes many bug fixes. For user convenience, 2 command options have been added.

rrs@priyasi:~$ laptop_mode -h
****************************
Following user commands are understood
status      :   Display a Laptop Mode Tools power savings status
power-stats  :  Display the power statistics on the machine
power-events :  Trap power related events on the machine
help        :   Display this help message (--help, -h)
version     :   Display program version (--version, -v)
****************************
15:22 ♒ ༐  ☺ 😄


rrs@priyasi:~$ sudo laptop_mode status
[sudo] password for rrs: 
Mounts:
   /dev/mapper/nvme0n1p4_crypt on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=5,subvol=/)
   /dev/nvme0n1p3 on /boot type ext4 (rw,relatime)
   /dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
   /dev/fuse on /run/user/1000/doc type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
 
Drive power status:
   Cannot read /dev/[hs]d[abcdefgh], permission denied - /usr/sbin/laptop_mode needs to be run as root
 
(NOTE: drive settings affected by Laptop Mode cannot be retrieved.)
 
Readahead states:
   /dev/mapper/nvme0n1p4_crypt: 128 kB
   /dev/nvme0n1p3: 128 kB
   /dev/nvme0n1p1: 128 kB
 
Laptop Mode Tools is allowed to run: /var/run/laptop-mode-tools/enabled exists.
 
/proc/sys/vm/laptop_mode:
   0
 
/proc/sys/vm/dirty_ratio:
   40
 
/proc/sys/fs/xfs/xfssyncd_centisecs:
   3000
 
/proc/sys/vm/dirty_background_ratio:
   10
 
/proc/sys/vm/dirty_expire_centisecs:
   3000
 
/proc/sys/vm/dirty_writeback_centisecs:
   500
 
......SNIPPED......

/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq:
   400000
 
/sys/devices/system/cpu/cpu5/cpufreq/cpuinfo_max_freq:
   2001000
 
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor:
   schedutil
 
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor:
   schedutil
 
/proc/acpi/button/lid/LID0/state:
   state:      open
 
/sys/class/power_supply/AC/online:
   1
 
/sys/class/power_supply/BAT0/status:
   Charging
 
15:22 ♒ ༐  ☺ 😄



rrs@priyasi:~$ laptop_mode power-stats
Power Supply details for /sys/class/power_supply/AC

P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
L: 0
E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
E: POWER_SUPPLY_NAME=AC
E: POWER_SUPPLY_ONLINE=1
E: SUBSYSTEM=power_supply

Power Supply details for /sys/class/power_supply/BAT0

P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
L: 0
E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
E: POWER_SUPPLY_NAME=BAT0
E: POWER_SUPPLY_STATUS=Charging
E: POWER_SUPPLY_PRESENT=1
E: POWER_SUPPLY_TECHNOLOGY=Li-poly
E: POWER_SUPPLY_CYCLE_COUNT=0
E: POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
E: POWER_SUPPLY_VOLTAGE_NOW=8760000
E: POWER_SUPPLY_CURRENT_NOW=545000
E: POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
E: POWER_SUPPLY_CHARGE_FULL=6592000
E: POWER_SUPPLY_CHARGE_NOW=6526000
E: POWER_SUPPLY_CAPACITY=98
E: POWER_SUPPLY_CAPACITY_LEVEL=Normal
E: POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
E: POWER_SUPPLY_MANUFACTURER=SMP
E: POWER_SUPPLY_SERIAL_NUMBER=1549
E: SUBSYSTEM=power_supply

15:23 ♒ ༐  ☺ 😄



rrs@priyasi:~$ laptop_mode power-events
Running Laptop Mode Tools in event tracing mode. Press ^C to interrupt
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[140321.536870] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=0
SEQNUM=5908

KERNEL[140321.569526] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Discharging
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8761000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5909

UDEV  [140321.577770] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=0
SEQNUM=5908
USEC_INITIALIZED=140321550931

UDEV  [140321.582123] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Discharging
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8761000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5909
USEC_INITIALIZED=140321580812

KERNEL[140324.857185] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=1
SEQNUM=5912

UDEV  [140324.916156] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=1
SEQNUM=5912
USEC_INITIALIZED=140324887055

KERNEL[140324.917955] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Unknown
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8622000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5913

UDEV  [140324.922916] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Unknown
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8622000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5913
USEC_INITIALIZED=140324922572

^C
15:24 ♒ ༐   ☚ 😟 => 130
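
The power-events mode is essentially a filtered udev monitor. As a rough illustration (my assumption about the mechanism, not how laptop_mode actually implements it), the same events can be watched from Python with pyudev:

import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
# Only power_supply events, like the AC and BAT0 changes above
monitor.filter_by(subsystem="power_supply")

for device in iter(monitor.poll, None):
    # Each event corresponds to one KERNEL/UDEV block in the output above
    print(device.action, device.sys_path)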

A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

1.73 - Sat Jan 11 14:52:11 IST 2020

* Respect black/white lists when disabling autosuspend
* Add newer power supply names
* Fix crash due external battery of mouse
* Honor configuration setting for battery level polling
* cpufreq: intel_pstate should use performance governors
* runtime-pm: Speed up by avoiding fork in echo_to_file
* runtime-pm: Inline echo_to_file_do
* runtime-pm: Fix echo_to_file* indentation
* runtime-pm: Speed up by avoiding fork in listed_by_{id,type}
* runtime-pm: Simplify vendor/product match
* Add help and version user commands
* Add a power-stats status command
* Separate power sysfs attributes and add sysfs status attribute
* Add device type 'sd' to default blacklist
* Fix rpm spec file for new installable files

Source tarball, Fedora/SUSE RPM Packages available at: https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki

Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

Planet DebianRuss Allbery: Review: True Porn Clerk Stories

Review: True Porn Clerk Stories, by Ali Davis

Publisher: Amazon.com
Copyright: August 2009
ASIN: B002MKOQUG
Format: Kindle
Pages: 160

The other day I realized, as a cold claw of pure fear squeezed my frantic heart, that I have been working as a video clerk for ten months.

This is a job that I took on a temporary basis for just a month or two until freelancing picked back up and I got my finances in order.

Ten months.

It has been a test of patience, humility, and character.

It has been a lesson in dealing with all humankind, including their personal bodily fluids.

It has been $6.50 an hour.

If you're wondering whether you'd heard of this before and you were on the Internet in the early 2000s, you probably have. This self-published book is a collection of blog posts from back when blogs were a new thing and went viral before Twitter existed. It used to be available on-line, but I don't believe it is any more. I ran across a mention of it about a year ago and felt like reading it again, and also belatedly tossing the author a tiny bit of money.

I'm happy to report that, unlike a lot of nostalgia trips, this one holds up. Davis's stories are still funny and the meanness fairy has not visited and made everything awful. (The same, alas, cannot be said for Acts of Gord, which is of a similar vintage but hasn't aged well.)

It's been long enough since Davis wrote her journal that I feel like I have to explain the background. Back in the days when the Internet was slow and not many people had access to it, people went to a local store to rent movies on video tapes (which had to be rewound after watching, something that customers were irritatingly bad at doing). Most of those only carried normal movies (Blockbuster was the ubiquitous chain store, now almost all closed), but a few ventured into the far more lucrative, but more challenging, business of renting porn. Some of those were dedicated adult stores; others, like the one that Davis worked at, carried a mix of regular movies and, in a separate part of the store, porn. Prior to the days of ubiquitous fast Internet, getting access to video porn required going into one of those stores and handing lurid video tape covers and money to a human being who would give you your rented videos. That was a video clerk.

There is now a genre of web sites devoted to stories about working in retail and the bizarre, creepy, abusive, or just strange things that customers do (Not Always Right is probably the best known). Davis's journal predated all of that, but is in the same genre. I find most of those sites briefly interesting and then get bored with them, but I had no trouble reading this (short) book cover to cover even though I'd read the entries on the Internet years ago.

One reason for that is that Davis is a good story-teller. She was (and I believe still is) an improv comedian, and it shows. Many of the entries are stories about specific customers, who Davis gives memorable code names (Mr. Gentle, Mr. Cheekbones, Mr. Creaky) and describes quickly and efficiently. She has a good sense of timing and keeps the tone at "people are amazingly strange and yet somehow fascinating" rather than slipping too far into the angry ranting that, while justified, makes a lot of stories of retail work draining to read.

That said, I think a deeper reason why this collection works is that a porn store does odd things to the normal balance of power between a retail employee and their customers. Most retail stories are from stores deeply embedded in the "customer is always right" mentality, where the employee is essentially powerless and has to take everything the customer dishes out with a smile. The stories told by retail employees are a sort of revenge, re-asserting the employee's humanity by making fun of the customer. But renting porn is not like a typical retail transaction.

A video clerk learns things about a customer that perhaps no one else in their life knows, shifting some of the vulnerability back to the customer. The store Davis worked at was one of the most comprehensive in the area, and in a relatively rare business, so the store management knew they were going to get business anyway and were not obsessed with keeping every customer happy. They had regular trouble with customers (the 5% of retail customers who get weird in a porn store often get weird in disgusting and illegal ways) and therefore empowered the store clerks to be more aggressive about getting rid of unwanted business. That meant the power balance between the video clerks and the customers, while still not exactly equal, was more complicated and balanced in ways that make for better (and less monotonously depressing) stories.

There are, of course, stories of very creepy customers here, as well as frank thoughts on porn and people's consumption habits from a self-described first-amendment feminist who tries to take the over-the-top degrading subject matter of most porn with equanimity but sometimes fails. But those are mixed with stories of nicer customers, which gain something that's hard to describe from the odd intimacy of knowing little about them except part of their sex life. There are also some more-typical stories of retail work that benefit from the incongruity between their normality and the strangeness of the product and customers. Davis's account of opening the store by playing Aqua mix tapes is glorious. (Someone else who likes Aqua for much the same reason that I do!)

Content warning for public masturbation, sex-creep customers, and lots of descriptions of the sorts of degrading (and sexist and racist) sex acts portrayed on porn video boxes, of course. But if that doesn't drive you away, these are still-charming and still-fascinating slice-of-life stories about retail work in a highly unusual business that thrived for one brief moment in time and effectively no longer exists. Recommended, particularly if you want the nostalgia factor of re-reading something you vaguely remember from twenty years ago.

Rating: 7 out of 10

Krebs on SecurityAlleged Member of Neo-Nazi Swatting Group Charged

Federal investigators on Friday arrested a Virginia man accused of being part of a neo-Nazi group that targeted hundreds of people in “swatting” attacks, wherein fake bomb threats, hostage situations and other violent scenarios were phoned in to police as part of a scheme to trick them into visiting potentially deadly force on a target’s address.

In July 2019, KrebsOnSecurity published the story Neo-Nazi Swatters Target Dozens of Journalists, which detailed the activities of a loose-knit group of individuals who had targeted hundreds of people for swatting attacks, including federal judges, corporate executives and almost three-dozen journalists (myself included).

A portion of the Doxbin, as it existed in late 2019.

An FBI affidavit unsealed this week identifies one member of the group as John William Kirby Kelley. According to the affidavit, Kelley was instrumental in setting up and maintaining the Internet Relay Chat (IRC) channel called “Deadnet” that was used by him and other co-conspirators to plan, carry out and document their swatting attacks.

Prior to his recent expulsion on drug charges, Kelley was a student studying cybersecurity at Old Dominion University in Norfolk, Va. Interestingly, investigators allege it was Kelley’s decision to swat his own school in late November 2018 that got him caught. Using the handle “Carl,” Kelley allegedly explained to fellow Deadnet members he hoped the swatting would get him out of having to go to class.

The FBI says Kelley used virtual private networking (VPN) services to hide his true Internet location and various voice-over-IP (VoIP) services to conduct the swatting calls. In the ODU incident, investigators say Kelley told ODU police that someone was armed with an AR-15 rifle and had placed multiple pipe bombs within the campus buildings.

Later that day, Kelley allegedly called ODU police again but forgot to obscure his real phone number on campus, and quickly apologized for making an accidental phone call. When authorities determined that the voice on the second call matched that from the bomb threat earlier in the day, they visited and interviewed the young man.

Investigators say Kelley admitted to participating in swatting calls previously, and consented to a search of his dorm room, wherein they found two phones, a laptop and various electronic storage devices.

The affidavit says one of the thumb drives included multiple documents that logged statements made on the Deadnet IRC channel, which chronicled “countless examples of swatting activity over an extended period of time.” Those included videos Kelley allegedly recorded of his computer screen which showed live news footage of police responding to swatting attacks while he and other Deadnet members discussed the incidents in real-time on their IRC forum.

The FBI believes Kelley also was linked to a bomb threat in November 2018 at the predominantly African American Alfred Baptist Church in Old Town Alexandria, an incident that led to the church being evacuated during evening worship services while authorities swept the building for explosives.

The FBI affidavit was based in part on interviews with an unnamed co-conspirator, who told investigators that he and the others on Deadnet IRC are white supremacists and sympathetic to the neo-Nazi movement.

“The group’s neo-Nazi ideology is apparent in the racial tones throughout the conversation logs,” the affidavit reads. “Kelley and other co-conspirators are affiliated with or have expressed sympathy for Atomwaffen Division,” an extremist group whose members are suspected of having committed multiple murders in the U.S. since 2017.

Investigators say on one of Kelley’s phones they found a photo of him and others in tactical gear holding automatic weapons next to pictures of Atomwaffen recruitment material and the neo-Nazi publication Siege.

As I reported last summer, several Deadnet members maintained a site on the Dark Web called the “Doxbin,” which listed the names, addresses, phone numbers and often known IP addresses, Social Security numbers, dates of birth and other sensitive information on hundreds of people — and in some cases the personal information of the target’s friends and family. After those indexed on the Doxbin were successfully swatted, a blue gun icon would be added next to the person’s name.

One of the core members of the group on Deadnet — an individual who used the nickname “Chanz,” among others — stated that he was responsible for maintaining SiegeCulture, a white supremacist Web site that glorifies the writings of neo-Nazi James Mason (whose various books call on followers to start a violent race war in the United States).

Deadnet chat logs obtained by KrebsOnSecurity show that another key swatting suspect on Deadnet who used the handle “Zheme” told other IRC members in March 2019 that one of his friends had recently been raided by federal investigators for allegedly having connections to the person responsible for the mass shooting in October 2018 at the Tree of Life Jewish synagogue in Pittsburgh.

At one point last year, Zheme also reminded denizens of Deadnet about a court hearing in the murder trial of Sam Woodward, an alleged Atomwaffen member who’s been charged with killing a 19-year-old gay Jewish college student.

As reported by this author last year, Deadnet members targeted dozens of journalists whose writings they considered threatening to their worldviews. Indeed, one of the targets successfully swatted by Deadnet members was Pulitzer Prize-winning columnist Leonard G. Pitts Jr., whose personal information as listed on the Doxbin was annotated with a blue gun icon and the label “anti-white race/politics writer.”

In another Deadnet chat log seen by this author, Chanz admits to calling in a bomb threat at the UCLA campus following a speech by Milo Yiannopoulos. Chanz bragged that he did it to frame feminists at the school for acts of terrorism.

On a personal note, I sincerely hope this arrest is just the first of many to come for those involved in swatting attacks related to Deadnet and the Doxbin. KrebsOnSecurity has obtained information indicating that several members of my family also have been targeted for harassment and swatting by this group.

Finally, it’s important to note that while many people may assume that murders and mass shootings targeting people because of their race, gender, sexual preference or religion are carried out by so-called “lone wolf” assailants, the swatting videos created and shared by Deadnet members are essentially propaganda that hate groups can use to recruit new members to their cause.

The Washington Post reports that Kelley had his first appearance in federal court in Alexandria, Va. on Friday.

“His public defender did not comment on the allegations but said his client has ‘very limited funds,'” The Post’s courts reporter Rachel Weiner wrote.

The charge against Kelley of conspiracy to make threats carries up to five years in prison. The affidavit in Kelley’s arrest is available here (PDF).

,

CryptogramFriday Squid Blogging: Stuffed Squid with Vegetables and Pancetta

A Croatian recipe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianAnisa Kuci: Outreachy post 3 - Midterm report

Time passes by quickly when you do the things that you like, and so the first six weeks of Outreachy have passed by very quickly. The first half of the internship has been an amazing experience for me. I have worked on and learned so many new things. I became more closely familiar with the Debian project, which I had already been contributing to in the past, though less intensively. I am very happy to get to know more people from the community, to feel so welcomed, and to find such a warm environment.

Since the first weeks of the internship I have been working on fundraising materials for DebConf20 as part of my tasks, using LaTeX, which is an amazing tool for creating different types of documents. My LaTeX skills have improved, and the more I use it, the more I discover how powerful a tool it is and the variety of things you can do with it. Lately I worked on the flyer and brochure that will be sent to potential sponsors.

DebConf20 sponsorship flyer

On the flyer I removed the translation elements, since this year the materials will be only in English. I updated the content to make it relevant for this year, and also updated the logo to the winning entry of a contest the local team ran. Matching the dominant color of the DebConf20 logo, I created a color scale that we are using for headlines and decorative elements within the fundraising material and the conference web page.

DebConf20 color scale

As for the fundraising brochure, I took the content from a Google doc, which was carefully created by my mentor Karina, and converted it into LaTeX. I adapted it with the new logo, colors and monetary values in the local currency. For this I needed to create a TeX \newcommand, as the ILS currency symbol (₪) is not supported natively. This also restricted the choice of available fonts, because the ILS symbol needs to be part of the font. With support from the wider DebConf team we settled on Liberation Sans. As we work on the visual identity of DebConf20, we are close to finalizing the fundraising materials for this edition.
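
To illustrate the workaround, here is a minimal preamble sketch under XeLaTeX or LuaLaTeX with fontspec; the command name \ILS is my own invention, not necessarily what the DebConf templates use:

% Needs a font that actually contains U+20AA, hence Liberation Sans.
\usepackage{fontspec}
\setmainfont{Liberation Sans}
% \symbol{"20AA} prints the new shekel sign from the current font.
\newcommand{\ILS}{\symbol{"20AA}}
% Usage: Gold sponsorship at \ILS 20,000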

I have also worked on the draft email templates that I have proposed for the next phases of contacting sponsors, hoping to receive good feedback from the team. They are available on a private DebConf git repo. The basic idea is to highlight new aspects of the benefits of sponsoring a DebConf with each successive contact we make with sponsors.

Initial commit of the DebConf20 sponsorship brochure

Besides practicing LaTeX, I have also worked a lot with git, and the practice has been very helpful for me. There is so much to learn and so much you can do with git. I am trying to get beyond the common level of understanding git:

xkcd on git

Another of my tasks is documentation, so I have worked on this too, in parallel. As each DebConf is organized in a different country every year, you can imagine that not everything is familiar to the local team, even if they are part of Debian, and much also depends on their experience organizing events or doing fundraising specifically. Working on fundraising now, I have encountered many things I was not completely familiar with, so I have started documenting the workflow, hoping to make the process more convenient and smooth for future DebConf local organizing teams.

As mentioned in my last blog post, I have already joined the main communication channels that the Debian community uses. I try to be as available as I can and to stay up to date with all the information that might be relevant to my internship. I participate in all the biweekly team meetings for DebConf20, giving updates about my progress and staying in the loop on organizational topics related to the conference.

Updating the DebConf20 sponsorship flyer in git

I stay in contact with my mentors Daniel and Karina via IRC and email. I would like to take a moment and thank them for all their encouragement, support and feedback, which has helped me improve and has motivated me a lot to continue working on this awesome project. I also keep in touch with the wider community via IRC and Planet Debian, and by constantly following the mailing lists.

Last but not least, I also participate in the Outreachy webchats, where I had the chance to learn a little about the backgrounds of other Outreachy interns and meet the people who run the Outreachy program. I am so glad to see what a safe, easygoing and inclusive environment they have created for everyone.

My experience so far has been a blast!

LongNowA Prescient Prediction from a Reader of Seventeen Magazine

In 02000, Seventeen magazine asked its readers to send in predictions about the year 02020. Over a decade before the founding of Revive & Restore, Tiffany Ann Ruter of Jacksonville, Florida had this to say:

I predict that the technology that already lets us clone live species, such as mice and sheep, and is now being used to try to clone the extinct woolly mammoth, we’ll be able to clone endangered species and prevent extinction in the future.

CryptogramPolice Surveillance Tools from Special Services Group

Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

"The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries," one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and "the Tombstone Cam is fully portable and can be easily moved from location to location as necessary," the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for "hotel room stings," and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The "Shop-Vac Covert DVR Recording System" is essentially a camera and 1TB harddrive hidden inside a vacuum cleaner. "An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also," the brochure reads. The description doesn't say whether the vacuum cleaner itself works.

[...]

One of the company's "Rapid Vehicle Deployment Kits" includes a camera hidden inside a baby car seat. "The system is fully portable, so you are not restricted to the same drop car for each mission," the description adds.

[...]

The so-called "K-MIC In-mouth Microphone & Speaker Set" is a tiny Bluetooth device that sits on a user's teeth and allows them to "communicate hands-free in crowded, noisy surroundings" with "near-zero visual indications," the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The "Phantom RFID Exploitation Toolkit" lets a user clone an access card or fob, and the so-called "Shadow" product can "covertly provide the user with PIN code to an alarm panel," the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group, when asked for comment. Of course, Motherboard published the information anyway.

Worse Than FailureError'd: Are You Old Enough to Know Better?

"I guess my kid cousins won't be putting these together for a while," Travis writes.

 

Noah writes, "Who doesn't want a little error in their coffee first thing in the morning?"

 

"It would appear that Walmart and I have very different ideas about what constitutes 'Hard Candy'," wrote Todd P.

 

"Looks like the push to build roads in Poland paid off! Like, really REALLY paid off," Krzysztof wrote.

 

"Google nailed it - it caught all the cities that I visited in the um...Pacific region?" Francois wrote.

 

Darnell S. writes, "I beg to differ with WizzAir Cars' opinion of what is a 'good choice'...I want that free one!"

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianDirk Eddelbuettel: rfoaas 2.1.0: New upstream so new access point!

rfoaas greed example

FOAAS, having been resting upstream for some time, released version 2.1.0 of its wonderful service this week! So without too much further ado we went to work and added support for it. And now we are in fact thrilled to announce that release 2.1.0 of rfoaas is now on CRAN as of this afternoon (with a slight delay as yours truly managed to state the package release date as 2019-01-09 which was of course flagged as ‘too old’).

The new 2.1.0 release of FOAAS brings a full eleven new REST access points, namely even(), fewer(), ftfty(), holygrail(), idea(), jinglebells(), legend(), logs(), ratsarse(), rockstar(), and waste(). On our end, documentation and tests were updated.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianLisandro Damián Nicanor Pérez Meyer: Qt 4 removed from Debian bullseye (current testing)

Today Qt 4 (aka src:qt4-x11) has been removed from Debian bullseye, which as of today we know as "testing". We plan to remove it from unstable pretty soon.



Krebs on SecurityLawmakers Prod FCC to Act on SIM Swapping

Crooks have stolen tens of millions of dollars and other valuable commodities from thousands of consumers via “SIM swapping,” a particularly invasive form of fraud that involves tricking a target’s mobile carrier into transferring someone’s wireless service to a device they control. But the U.S. Federal Communications Commission (FCC), the entity responsible for overseeing wireless industry practices, has so far remained largely silent on the matter. Now, a cadre of lawmakers is demanding to know what, if anything, the agency might be doing to track and combat SIM swapping.

On Thursday, a half-dozen Democrats in the House and Senate sent a letter to FCC Chairman Ajit Pai, asking the agency to require the carriers to offer more protections for consumers against unauthorized SIM swaps.

“Consumers have no choice but to rely on phone companies to protect them against SIM swaps — and they need to be able to count on the FCC to hold mobile carriers accountable when they fail to secure their systems and thus harm consumers,” reads the letter, signed by Sens. Ron Wyden (OR), Sherrod Brown (OH) and Edward Markey (MA), and Reps. Ted Lieu (CA), Anna Eshoo (CA) and Yvette Clarke (NY).

SIM swapping is an insidious form of mobile phone fraud that is often used to steal large amounts of cryptocurrencies and other items of value from victims. All too frequently, the scam involves bribing or tricking employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Once in control of the stolen phone number, the attacker can then reset the password for any online account that allows password resets and/or two-factor verification requests via text messages or automated phone calls (i.e. most online services, including many of the mobile carrier Web sites).

From there, the scammers can pivot in a variety of directions, including: Plundering the victim’s financial accounts; hacking their identities on social media platforms;  viewing the victim’s email and call history; and abusing that access to harass and scam their friends and family.

The lawmakers asked the FCC to divulge whether it tracks consumer complaints about fraudulent SIM swapping and number “port-outs,” which involve moving the victim’s phone number to another carrier. The legislators demanded to know whether the commission offers any guidance for consumers or carriers on this important issue, and if the FCC has initiated any investigations or taken enforcement actions against carriers that failed to secure customer accounts.

The letter also asks the FCC to respond as to whether there is anything in federal regulations that prevents mobile carriers from sharing with banks information about the most recent SIM swap date of a customer as a way to flag potentially suspicious login attempts — a method already used by financial institutions in other countries, including Australia, the United Kingdom and several nations in Africa.

“Some carriers, both in the U.S. and abroad, have adopted policies that better protect consumers from SIM swaps, such as allowing customers to add optional security protections to their account that prevent SIM swaps unless the customer visits a store and shows ID,” the letter continues. “Unfortunately, implementation of these additional security measures by wireless carriers in the U.S. is still spotty and consumers are not likely to find out about the availability of these obscure, optional security features until it is too late.”

The FCC did not immediately respond to requests for comment.

SIM SWAP (CRIM)INNOVATIONS

Legitimate SIM swaps are a common request for all carriers, and they usually happen when a customer has lost their mobile phone or when they need to upgrade to a newer model that requires a different-sized SIM card (the small, removable smart chip that ties the customer’s device to their phone number).

But unauthorized SIM swaps enable even low-skilled thieves to quickly turn a victim’s life upside down and wrest control over a great deal of their online identities and finances. What’s more, the security options available to wireless customers concerned about SIM swapping — such as personal identification number (PIN) codes — are largely ineffective against crooked or clueless mobile phone store employees.

A successful SIM swap may allow tormentors to access a victim’s email inbox even after the target has changed his or her password. For example, some email services allow customers to reset their passwords just by providing a piece of information that would likely only be known to the legitimate account holder, such as the month and year the account was created, or the name of a custom folder or label in the account previously created by the user.

One technique used by SIM swappers to regain access to hacked inboxes is to jot down this information once a SIM swap affords them the ability to reset the account’s password. Alternatively, SIM swappers have been known to create their own folders or labels in the hacked account to facilitate backdoor access later on.

A number of young men have recently been criminally charged with using SIM swapping to steal accounts and cryptocurrencies like Bitcoin from victims. This week, a court in New York unsealed a grand jury indictment against 22-year-old alleged serial SIM swapper Nicholas Truglia, who stands accused of using the technique to siphon $24 million worth of cryptocurrencies from blockchain investor Michael Terpin.

But experts say the few arrests that have been made in conjunction with SIM swapping attacks have pushed many involved in this crime to enlist help from co-conspirators who are minors and thus largely outside the reach of federal prosecutors.

For his part, Terpin sent an open letter to FCC commissioners in October 2019, urging them to mandate that wireless carriers provide a way for customers to truly lock down their accounts against SIM swapping, even if that means requiring an in-person visit to a store or conversation with the carrier’s fraud department.

In an interview with KrebsOnSecurity, Terpin said the FCC has so far abdicated its responsibility over the carriers on this matter.

“It took them a long time to get around to taking robocalls seriously, but those scams rarely cost people millions of dollars,” Terpin said. “Imagine going into a bank and you don’t remember your PIN and the teller says, ‘Oh, that’s okay I can look it up for you.’ The fact that a $9-an-hour mobile store employee can see your high security password or PIN is shocking.”

“The carriers should also have to inform every single current and future customer that there is this high security option available,” Terpin continued. “That would stop a lot of this fraud and would take away the ability of these ne’er-do-well 19-year-old store employees who get bribed into helping out with the scam.”

Want to read more about SIM swapping? Check out Busting SIM Swappers and SIM Swap Myths, or view the entire catalog of stories on the topic here.

Worse Than FailureThe Compliance Ropeway

"So, let me get this straight," Derrick said. He closed his eyes and took a deep breath while massaging his temples before letting out an exasperated sigh. "Not a single person... in this entire organization... is taking ANY responsibility for Ropeway? No one is even willing to admit that they know anything about this application...?"

The Operations team had grown accustomed to their new director's mannerisms and learned it's just better to stay silent and let Derrick think out loud. After all, no one envied his job or his idealistic quest for actual compliance. If he had been at the bank as long as his team had, Derrick would have learned that there's compliance... and then there's "compliance."

"But we figured out that Ropeway somehow automatically transfers underwriting overrides from ISAC to AppPortal?" Derrick paused to collect his thoughts before a lightbulb went off. "Wait, wait. Those systems are both covered under our IBM Master Service Agreement, right? What did they say? Chris... did you reach out to our IBM liaison?"

"Well," Chris silently thanked everything good that Ropeway wasn't his problem. "IBM says that they have no idea. They said it's not in the scope of the MSA or any SOW, but they'd be happy to come out and—"

"Ab-so-lute-ly not," Derrick interrupted. He wasn't IBM's biggest fan, to put it mildly. "I've already eaten into next year's budget on this SSL initiative, and there's no way I'm gonna pay them just to tell me I have to pay them even more to fix what shouldn't even by my problem!"

Derrick let out another sigh, rubbing his temples again. "All I want," he grimaced, "is for Ropeway to use HTTPS instead of HTTP. That's all! Fine... fine! Chris, let's just move the whole damn Ropeway server behind the proxy."

"Roger that," Chris nodded, "We'll start prepping things for next week's maintenance window."

There was a lot of risk to moving Ropeway. The Operations team knew how to keep it running – it was just a Windows Service application – but they had no way of knowing if the slightest change in the environment would break things. Moving the server behind the http-to-https proxy meant a new IP and a new subnet, and they had seen far too many "if (IP==10.10.22.30) production_env = true" traps to know they can't just move things without a test plan.

But since no one on the business or Development side was willing to help, they were on their own and it'd be Derrick's head if ISAC or AppPortal stopped working once that maintenance window was over. But for the sake of actual compliance – not "compliance" – these were the risks Derrick was willing to take: SSL was simply non-negotiable.

##

"You'll never believe what I found on that Ropeway server," Chris said while popping into to Derrick's office. Actually, he knew that wasn't true; Derrick had come to expect the unbelievable, but Chris liked to prep Derrick nonetheless. Derrick took a deep breath and moved his hand towards his forehead.

"I found this." Chris plopped down a thick, tattered manila envelope that was covered in yellowed tape. "It was... um... taped to the server's chassis."

Derrick was legitimately surprised and started riffling through the contents as Chris explained things. "So apparently Ropeway was built by this guy, Jody Dorchester, at Roman, uh, wait. Ronin Software or something."

"And yeah," Chris continued as Derrick's eyes widened while he flipped through page-after-page-after page of documentation, "Jody apparently wrote all sorts of documentation... installation instructions, configuration instructions – and all the source code is on that enclosed CD-ROM."

Derrick was speechless. "This," he stuttered, "is dated... March ...of 2000."

"Yup," Chris jumped in nonchalantly, "but I took a shot in the dark here and sent Jody an email."

"And...," Chris said, smiling. He handed Derrick another document and said, "here's his reply."

Ropeway! Wow... that takes me back. I can't believe that's still around, and if you're contacting me about it... you must be desperate ;)

I built that app in a past life... I don't know how much this will help, but I'll tell you what I remember.

There was a fellow at the bank, Eric (or maybe Edward?), and he was a Director of Technology and a VP of something. I met him at a local networking event, and he told me about some project he was working on that was going to reduce all sorts of mindless paperwork.

One department over there was apparently printing out a bunch of records from their system and then faxing all that paper to a different department, who would then manually re-enter all those records into a different system. They had a bunch of data entry people, but the bigger problem was that it was slow, and there were a lot of mistakes, and it was costing a lot of money.

It sounded like a huge, complicated automation project – but actually, your guy had it mostly figured out. The one system could spit out CSV files to some network share, and the other could import XML files via some web interface. He asked if I could help with that, and I said sure... why not? It just seemed like a really simple Windows service application.

It took a bit longer to wrangle those two formats together than I hoped, but I showed it off and he seemed absolutely thrilled. However, I'll never forget the look on his face when I told him the cost. It was something like 16 hours at $50/hr (my rate back then). I thought he was upset that I took so long, and billed $800 for something that seemed so simple.

Not even close. He said that IBM quoted over $100k to do the exact same thing – but there was just no way he could sell that he got this massive integration project accomplished for only $800. He said I needed to bill more... a lot more.

So, I made that little importer app as robust and configurable as what, VB4 would allow? Then I spent, like, a week writing that documentation you guys found, taped to the server. You know, I actually bought that server and did all the installation myself? Well after all that, he was satisfied with my new bill.

Anyways, I can't remember anything about the application, but there's probably a whole section of the docs dedicated to configuring it. Hopefully you guys can figure it out... good luck!

Jody was right. On page 16, there was, in fact, a section dedicated to the Web API configuration. To use SSL, one would just have to open the service control tool, check off the "use secure connection" box, and then Ropeway would construct the Service URL using HTTPS instead of HTTP. That was it, apparently.

"I just... I can't believe it," Derrick paused, and shook his head before massaging his forehead – this time more in surprise that his usual temple massage. "No way. It can't be that simple. Nothing here is that simple!"

Chris shrugged his shoulders and said, "Checking that box does seem a bit simpler than moving—"

"Look," Derrick interrupted, "even if we change the setting, and that setting works, someone needs to own Ropeway, and take responsibility for it going forward. This is like rule number one of actual compliance!"

Chris nodded, and just let Derrick build up to what would certainly be yet another compliance rant.

"Come on," Derrick said enthusiastically, "Ropeway is so absurdly documented... look at this! Surely someone in Core Dev, Biz Apps, Vendor Apps, or heck, even Apps Integration will adopt this thing!? I mean... you realize what will happen if AppPortal craps out because the Ropeway integration breaks some day?"

Obviously, Chris knew exactly how bad it would be, but he let Derrick tell him anyways.

"I can't even imaginae it," Derrick took a breath. "The crap storm that all of those groups would face... this might even make the shareholder meeting! We gotta do something. How about we go to the ISAC team lead and say... here's Ropeway. It's your baby now. Congrats! Here's the docs, it's really easy to use, just--"

"Look," Chris cut in, soberly. "If you want me to go deliver that message -- in the middle of their whole Agile ScrumOps whatever war with Biz Apps – I will. But I'm pretty sure they're gonna go straight to the execs and push to expand the MSA with IBM to cover the App Portal integration..."

Chris paused for a few moments to let Derrick realize exactly whose budget an MSA expansion might come from. Derrick's eyes widened as he took another deep breath.

"Oooor", Chris continued, "this envelope still, umm, has a quite a bit of tape attached to it. Maybe... just maaaaybe...we never found the envelope in the first place? And perhaps, I don't know, Ropeway just... uhhh... happens to check that box itself one day? I don't know? It's an old app... old apps do weird things. Who knows? I don't... do you? Wait... I've already forgotten, what's Ropeway?"

Derrick slowly shook his head and started massaging the bridge of his nose with his index fingers. He let out a completely different sigh, a defeated sigh, then uncharacteristically mumbled, "...better get some more tape..."

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianRuss Allbery: DocKnot 3.02

DocKnot is my set of tools for generating package documentation and releases. The long-term goal is for it to subsume the various tools and ad hoc scripts that I use to manage my free software releases and web site.

This release includes various improvements to docknot dist for generating a new distribution tarball: xz-compressed tarballs are created automatically if necessary, docknot dist now checks that the distribution tarball contains all of the expected files, and it correctly handles cleaning the staging directory when regenerating distribution tarballs. This release also removes make warnings when testing C++ builds, since my current Autoconf machinery in rra-c-util doesn't properly exclude options that aren't supported by C++.

This release also adds support for the No Maintenance Intended badge for orphaned software in the Markdown README file, and properly skips a test on Windows that requires tar.

With this release, the check-dist script on my scripts page is now obsolete, since its functionality has been incorporated into DocKnot. That script will remain available from my page, but I won't be updating it further.

You can get the latest release from CPAN or the DocKnot distribution page. I've also uploaded Debian packages to my personal repository. (I'm still not ready to upload this to Debian proper since I want to make another major backwards-incompatible change first.)

,

Planet DebianDirk Eddelbuettel: BH 1.72.0-3 on CRAN

Boost

The 1.72.0-1 release of BH required one update, 1.72.0-2, after I botched a hand-edited path (to comply with the old-school path-length-inside-tar limit).

Turns out another issue needed a fix. This release improved on prior ones by starting from a pristine directory. But as a side effect, Boost Accumulators ended up incomplete, with only the depended-upon-by-others files included (by virtue of the bcp tool). So now we declare Boost Accumulators a full-fledged part of BH, ensuring that bcp copies it "whole". If you encounter issues with another incomplete part, please file an issue ticket at the GitHub repo.

No other changes were made.

Also, this fix was done initially while CRAN took a well-deserved winter break, and I had tweeted on Dec 31 about availability via drat and may use this more often for pre-releases. CRAN is now back, and this (large !!) package is now processed as part of the wave of packages that were in waiting (and Henrik got that right yesterday…).

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteve Kemp: I won't write another email client

Once upon a time I wrote an email client, in a combination of C++ and Lua.

Later I realized it was flawed and, because I hadn't yet realized that writing email clients is hard, I decided to write it anew (again in C++ and Lua).

Nowadays I do realize how hard writing email clients is, so I'm not going to do that again. But still .. but still ..

I was doing some mail-searching recently and realized I wanted to write something that processed all the messages in a Maildir folder. Imagine I wanted to run:

 message-dump ~/Maildir/people-foo/ ~/Maildir/people-bar/  \
     --format '${flags} ${filename} ${subject}'

As this required access to (arbitrary) headers, I had to read, parse, and process each message. It was slow, but it wasn't that slow. The second time I ran it, even after adjusting the format-string, it was nice and fast because buffer-caches rock.
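
For flavor, here is roughly what that read-parse-format loop looks like with Python's mailbox module. This is only a sketch of the idea (the helper names are invented, and the real tool is described below):

# Sketch: print one formatted line per message in some Maildir folders.
import email.header
import mailbox
import os
import string
import sys


def decode(value):
    # Decode an RFC 2047 encoded header into a plain string.
    parts = email.header.decode_header(value or "")
    return "".join(
        p.decode(enc or "utf-8", "replace") if isinstance(p, bytes) else p
        for p, enc in parts)


def message_dump(paths, fmt):
    tmpl = string.Template(fmt)
    for path in paths:
        box = mailbox.Maildir(os.path.expanduser(path), create=False)
        for key, msg in box.iteritems():
            print(tmpl.safe_substitute(
                flags="".join(sorted(msg.get_flags())),
                filename=key,
                subject=decode(msg["Subject"])))


message_dump(sys.argv[1:], "${flags} ${filename} ${subject}")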

Anyway after that I wanted to write a script to dump the list of folders (because I store them recursively so ls -1 ~/Maildir wasn't enough):

 maildir-dump --format '${unread}/${total} ${path}'

I guess you can see where this is going now! If you have the following three primitives, you have a mail-client (albeit read-only)

  • List "folders"
  • List "messages"
  • List a single message.

So I hacked up a simple client that would have a sub-command for each one of these tasks. I figured somebody else could actually use that, be a little retro, be a little cool, pretend they were using MH. Of course I'd have to write something horrid as a bash-script to prove it worked - probably using dialog to drive it.

And then I got interested. The end result is a single golang binary that will either:

  • List maildirs, with a cute format string.
  • List messages, with a cute format string.
  • List a single message, decoding the RFC2047 headers, showing text/plain, etc.
  • AND ALSO USE ITSELF TO PROVIDE A GUI

And now I wonder, am I crazy? Is writing an email client hard? I can't remember.

Probably best to forget the GUI exists. Probably best to keep it a couple of standalone sub-commands for "scripting email stuff".

But still .. but still ..

Cory DoctorowRadicalized makes the CBC’s annual Canada Reads longlist

The Canadian Broadcasting Corporation’s annual Canada Reads prize is one of Canada’s top literary prizes, ranking with the Governor General’s prize for prestige and reach; it begins early in January with the announcement of a longlist of 15 recommended books, and then these are whittled down to a shortlist of five books later in the month. Over the months that follow, each of the shortlisted books is championed by a Canadian celebrity in a series of events across the country, with the grand prize winner being announced in late March after a televised debate among the five books’ “champions.”

The CBC has just announced its longlist, and I’m delighted to announce that Radicalized, my 2019 book of four novellas, is among them.

The entire list is incredibly impressive — really humbling company to be in! — and the five finalists will be announced on Jan 22.

I discussed Radicalized on the CBC radio programme Day 6 last year, and the CBC reviewed the book in glowing terms, later naming it one of their summer reads and best books of the year.

I’m sincerely honoured to be included.

Radicalized is a collection of four novellas that explore the quandaries — social, economic and technological — of contemporary America. Cory Doctorow’s characters deal with issues around immigration, corrupt police forces, dark web uprisings and more.

Here is the Canada Reads 2020 longlist [CBC Books]

Planet DebianEnrico Zini: Checking sphinx code blocks

I'm too lazy to manually check code blocks in autogenerated sphinx documentation to see if they are valid and reasonably up to date. Doing it automatically feels much more interesting to me: here's how I did it.


This is a simple sphinx extension to extract code blocks into a JSON file.

If the documentation is written well enough, I even get annotation on what programming language each snippet is made of:

## Extract code blocks from sphinx

from docutils.nodes import literal_block, Text
import json

found = []


def find_code(app, doctree, fromdocname):
    for node in doctree.traverse(literal_block):
        # if "dballe.DB.connect" in str(node):
        lang = node.attributes.get("language", "default")
        for subnode in node.traverse(Text):
            found.append({
                "src": fromdocname,
                "lang": lang,
                "code": subnode,
                "source": node.source,
                "line": node.line,
            })


def output(app, exception):
    if exception is not None:
        return

    dest = app.config.test_code_output
    if dest is None:
        return

    with open(dest, "wt") as fd:
        json.dump(found, fd)


def setup(app):
    app.add_config_value('test_code_output', None, '')

    app.connect('doctree-resolved', find_code)
    app.connect('build-finished', output)

    return {
        "version": '0.1',
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }

And this is an early prototype of Python code that runs each code block in a subprocess to see if it works; a stripped-down sketch of the idea follows the list below.

It does interesting things, such as:

  • Walk the AST to see if the code expects some well known variables to have been set, and prepare the test environment accordingly
  • Collect DeprecationWarnings to spot old snippets using deprecated functions
  • Provide some unittest-like assert* functions that snippets can then use if they want
  • Run every snippet in a subprocess, which then runs in a temporary directory, deleted after execution
  • Colorful output, including highlighting of code lines that threw exceptions
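
A stripped-down sketch of that runner, assuming the JSON file written by the extension above (the AST environment checks, the assert helpers and the colors are all omitted here):

# Sketch: run each extracted snippet in a subprocess, inside a throwaway
# directory, turning DeprecationWarning into an error so old API use fails.
import json
import subprocess
import sys
import tempfile

PRELUDE = "import warnings; warnings.simplefilter('error', DeprecationWarning)\n"


def run_snippets(path):
    # path is whatever test_code_output was set to in conf.py
    with open(path) as fd:
        snippets = json.load(fd)
    for snip in snippets:
        if snip["lang"] not in ("python", "python3", "default"):
            continue
        with tempfile.TemporaryDirectory() as workdir:
            res = subprocess.run(
                [sys.executable, "-c", PRELUDE + snip["code"]],
                cwd=workdir, capture_output=True, text=True)
        status = "ok" if res.returncode == 0 else "FAIL"
        print(f"{status} {snip['src']}:{snip['line']}")
        if res.returncode:
            print(res.stderr, end="")


run_snippets("code_blocks.json")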

CryptogramNew SHA-1 Attack

There's a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other's identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice's ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It's still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world's most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Planet DebianThomas Lange: 20 years of FAI and a new release

20 years ago, on December 20, 1999 FAI 1.0 was released. Many things have happened since then. Some milestones:

  • 1999: version 1.0
  • 2000: first official Debian package
  • 2001: first detailed user report ("No complete installs. Teething problems.")
  • 2005: Wiki page and IRC
  • 2005: FAI CD
  • 2006: fai dirinstall
  • 2007: new partitioning tool setup-storage
  • 2009: new web design
  • 2014: btrfs support
  • 2016: autodiscover function, profiles menu
  • 2016: fai-diskimage, cloud images
  • 2017: cross architecture builds
  • 2017: FAI.me web service
  • 2020: UEFI support

Besides that, a lot of other things happened in the FAI project. Apart from the first report, we got more than 300 detailed reports containing positive feedback. We had several FAI developers meetings and I gave more than 40 talks about FAI all over the world. We had a discussion about an alleged GPL violation of FAI in the past, I made several attempts to get a logo for FAI, but we still do not have one. We moved from subversion to git, which was very demanding for me. The FAI.me service for customized installation and cloud images was used more than 5000 times. The Debian Cloud team now uses FAI to build the official Debian cloud images.

I'm very happy with the outcome of this project and I like to thank all people who contributed to FAI in the past 20 years!

This week, I've released the new FAI version 5.9. It supports UEFI boot from CD/DVD and USB stick. Also two new tools were added:

  • fai-sed - call sed on a file but check for changes before writing
  • fai-link - create symlink idempotent

UEFI support in fai-cd uses only GRUB; no syslinux or isolinux is needed. New FAI installation images are also available from

https://fai-project.org/fai-cd

The FAI.me build service is also using the newest FAI version, and the customized ISO images can now be booted in a UEFI environment.

https://fai-project.org/FAIme

Worse Than FailureCodeSOD: Sharing the Power

"For my sins," John writes, "I'm working on a SharePoint 2010 migration."

This tells us that John has committed a lot of sins. But not as many as one of his coworkers.

Since they were running a farm of SharePoint servers, they needed to know what was actually running, which was quite different from the documentation which told them what was supposed to be running. John's coworker did some googling, some copy-and-pasting, some minor revisions of their own, and produced this wad of PowerShell scripting which does produce the correct output.

# there could be more than one search service, so this code is wrapped
# in a for loop with $tempCnt as the loop variable
$cmdstr = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $searchServiceAppIds[$tempCnt] | Select Id | ft -HideTableHeaders | Out-String -Width 1000
$cmdstr = $cmdstr.Trim().Split("`n")
for($i = 0; $i -lt $cmdstr.Length ; $i++)
{
    $cmdstr2 = $cmdstr[$i].Trim()
    $searchServiceAppID = $searchServiceAppIds[$tempCnt]
    $tempXML = [xml] (Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $searchServiceAppIds[$tempCnt] | select Name, Type, DeleteCount, ErrorCount, SuccessCount, WarningCount, StartAddresses, Id, CrawlStatus, CrawlStarted, CrawlCompleted, CrawlState | where {$_.Id -eq $cmdstr2 } | ConvertTo-Xml -NoTypeInformation)
    $tempstr = [System.String] $tempXML.Objects.Object.InnerXML
    $searchServiceAppID = $searchServiceAppID + "|" + $cmdstr2
    $global:SearchConfigContentSources.Add($searchServiceAppID, $tempstr)
}

This is particularly ugly, even ignoring the SharePoint-ness that's going on. This block's final output is a hashtable where the keys are the IDs of various SharePoint "content sources", and the value is a description in XML.

If we trace through it, we can see the basic flow of logic:

Use Get-SPEnterpriseSearchCrawlContentSource to get a list of all of the content source IDs, using Select and ft to filter the output. Format the output with Out-String -Width 1000, making each row in the result set 1000 characters long.

We can then split on newlines, and start iterating across the list. Since everything is 1000 characters long, let's trim them, and then let's… invoke Get-SPEnterpriseSearchCrawlContentSource to fetch the full set of content sources again, but this time we'll grab different fields and use where to filter down to the one we're currently looking for. This has the nice effect of forcing us to fetch all the records once for every content source.

Then, of course, we have $cmdstr and $cmdstr2, neither of which are command strings, but instead store content source IDs.

This script will be invoked many times during their migration, and with a large number of content sources, it's not exactly going to perform well. This tortured logic is pretty typical of John's coworker's approach to problems, but as this script is only needed during the migration and will eventually be thrown away, John adds:

The output's basically fine, so I'm not even gonna fix it.

Which is likely the right answer. The real problem to fix is how this code got written in the first place.
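
For the record, the same hashtable can be built with a single fetch per search service application. A hypothetical rewrite, untested against a real SharePoint farm, might look like this:

# Fetch each application's content sources once, then build the table.
$sources = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $searchServiceAppIds[$tempCnt]
foreach ($source in $sources)
{
    $tempXML = [xml] ($source |
        Select Name, Type, DeleteCount, ErrorCount, SuccessCount, WarningCount,
               StartAddresses, Id, CrawlStatus, CrawlStarted, CrawlCompleted, CrawlState |
        ConvertTo-Xml -NoTypeInformation)
    $key = "$($searchServiceAppIds[$tempCnt])|$($source.Id)"
    $global:SearchConfigContentSources.Add($key, [string]$tempXML.Objects.Object.InnerXml)
}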

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianRuss Allbery: C TAP Harness 4.6

C TAP Harness is my test framework for C software packages.

This release is mostly a release for my own convenience to pick up the reformatting of the code using clang-format, as mentioned in my previous release of rra-c-util. There are no other user-visible changes in this release.

I did do one more bit of housekeeping, namely added proper valgrind testing support to the test infrastructure. I now run the test suite under valgrind as part of the release process to look for any memory leaks or other errors in the harness or in the C TAP library.

The test suite for this package is written entirely in shell (with some C helpers), and I'm now regretting that. The original goal was to make this package maximally portable, but I ended up adding Perl tests anyway to test the POD source for the manual pages, and then to test a few other things, and now the test suite effectively depends on Perl and could have from the start. At some point, I'll probably rewrite the test machinery in Perl, which will make it far more maintainable and easier to read.

I think I've now finally learned my lesson for new packages: Trying to do things in shell for portability isn't worth it. As soon as any bit of code becomes non-trivial, and possibly before then, switch to a more maintainable programming language with better primitives and library support.

You can get the latest release from the C TAP Harness distribution page.

,

Krebs on SecurityTricky Phish Angles for Persistence, Not Passwords

Late last year saw the re-emergence of a nasty phishing tactic that allows the attacker to gain full access to a user’s data stored in the cloud without actually stealing the account password. The phishing lure starts with a link that leads to the real login page for a cloud email and/or file storage service. Anyone who takes the bait will inadvertently forward a digital token to the attackers that gives them indefinite access to the victim’s email, files and contacts — even after the victim has changed their password.

Before delving into the details, it’s important to note two things. First, while the most recent versions of this stealthy phish targeted corporate users of Microsoft’s Office 365 service, the same approach could be leveraged to ensnare users of many other cloud providers. Second, this attack is not exactly new: In 2017, for instance, phishers used a similar technique to plunder accounts at Google’s Gmail service.

Still, this phishing tactic is worth highlighting because recent examples of it received relatively little press coverage. Also, the resulting compromise is quite persistent and sidesteps two-factor authentication, and it seems likely we will see this approach exploited more frequently in the future.

In early December, security experts at PhishLabs detailed a sophisticated phishing scheme targeting Office 365 users that used a malicious link which took people who clicked to an official Office 365 login page — login.microsoftonline.com. Anyone suspicious about the link would have seen nothing immediately amiss in their browser’s address bar, and could quite easily verify that the link indeed took them to Microsoft’s real login page:

This phishing link asks users to log in at Microsoft’s real Office 365 portal (login.microsoftonline.com).

Only by copying and pasting the link or by scrolling far to the right in the URL bar can we detect that something isn’t quite right:

Notice this section of the URL (obscured off-page and visible only by scrolling to the right quite a bit) attempts to grant a malicious app hosted at officesuited.com full access to read the victim’s email and files stored at Microsoft’s Office 365 service.

As we can see from the URL in the image directly above, the link tells Microsoft to forward the authorization token produced by a successful login to the domain officesuited[.]com. From there, the user will be presented with a prompt that says an app is requesting permissions to read your email, contacts, OneNote notebooks, access your files, read/write to your mailbox settings, sign you in, read your profile, and maintain access to that data.

Image: PhishLabs
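
To make the mechanics concrete: a link like this is simply Microsoft's legitimate OAuth2 authorization endpoint with attacker-chosen parameters appended. Here is a rough sketch of how such a URL is assembled; the client ID, redirect URI and exact scopes below are hypothetical placeholders, not the values used in this campaign.

from urllib.parse import urlencode

params = {
    # Hypothetical app ID the attacker registered under a stolen identity
    "client_id": "00000000-0000-0000-0000-000000000000",
    "response_type": "code",
    # Attacker-controlled endpoint that receives the authorization grant
    "redirect_uri": "https://officesuited.example/callback",
    # Broad delegated permissions; offline_access yields a refresh token,
    # which is what lets the compromise survive a password change
    "scope": "Mail.Read Contacts.Read Files.Read.All MailboxSettings.ReadWrite offline_access",
}

print("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + urlencode(params))

The hostname in the resulting link really is Microsoft's own, which is why the address bar looks legitimate; only the query string betrays where the grant will be sent.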

According to PhishLabs, the app that generates this request was created using information apparently stolen from a legitimate organization. The domain hosting the malicious app pictured above — officemtr[.]com — is different from the one I saw in late December, but it was hosted at the same Internet address as officesuited[.]com and likely signed using the same legitimate company’s credentials.

PhishLabs says the attackers are exploiting a feature of Outlook known as “add-ins,” which are applications built by third-party developers that can be installed either from a file or URL from the Office store.

“By default, any user can apply add-ins to their outlook application,” wrote PhishLabs’ Michael Tyler. “Additionally, Microsoft allows Office 365 add-ins and apps to be installed via side loading without going through the Office Store, and thereby avoiding any review process.”

In an interview with KrebsOnSecurity, Tyler said he views this attack method more like malware than traditional phishing, which tries to trick someone into giving their password to the scammers.

“The difference here is instead of handing off credentials to someone, they are allowing an outside application to start interacting with their Office 365 environment directly,” he said.

Many readers at this point may be thinking that they would hesitate before approving such powerful permissions as those requested by this malicious application. But Tyler said this assumes the user somehow understands that there is a malicious third-party involved in the transaction.

“We can look at the reason phishing is still around, and it’s because people are making decisions they shouldn’t be making or shouldn’t be able to make,” he said. “Even employees who are trained on security are trained to make sure it’s a legitimate site before entering their credentials. Well, in this attack the site is legitimate, and at that point their guard is down. I look at this and think, would I be more likely to type my password into a box or more likely to click a button that says ‘okay’?”

The scary part about this attack is that once a user grants the malicious app permissions to read their files and emails, the attackers can maintain access to the account even after the user changes his password. What’s more, Tyler said the malicious app they tested was not visible as an add-in at the individual user level; only system administrators responsible for managing user accounts could see that the app had been approved.

Furthermore, even if an organization requires multi-factor authentication at sign-in, recall that this phish’s login process takes place on Microsoft’s own Web site. That means having two-factor enabled for an account would do nothing to prevent a malicious app that has already been approved by the user from accessing their emails or files.

Once given permission to access the user’s email and files, the app will retain that access until one of two things happens: Microsoft discovers and disables the malicious app, or an administrator on the victim user’s domain removes the program from the user’s account.

Expecting swift action from Microsoft may be optimistic: from my testing, Microsoft appears to have disabled the malicious app being served from officesuited[.]com sometime around Dec. 19, roughly one week after it went live.

In a statement provided to KrebsOnSecurity, Microsoft Senior Director Jeff Jones said the company continues to monitor for potential new variations of this malicious activity and will take action to disable applications as they are identified.

“The technique described relies on a sophisticated phishing campaign that invites users to permit a malicious Azure Active Directory Application,” Jones said. “We’ve notified impacted customers and worked with them to help remediate their environments.”

Microsoft’s instructions for detecting and removing illicit consent grants in Office 365 are here. Microsoft says administrators can enable a setting that blocks users from installing third-party apps into Office 365, but it calls this a “drastic step” that “isn’t strongly recommended as it severely impairs your users’ ability to be productive with third-party applications.”

PhishLabs’ Tyler said he disagrees with Microsoft here, and encourages Office 365 administrators to block users from installing apps altogether — or at the very least restrict them to apps from the official Microsoft store.

Apart from that, he said, it’s important for Office 365 administrators to periodically look for suspicious apps installed on their Office 365 environment.

“If an organization were to fall prey to this, your traditional methods of eradicating things involve activating two-factor authentication, clearing the user’s sessions, and so on, but that won’t do anything here,” he said. “It’s important that response teams know about this tactic so they can look for problems. If you can’t or don’t want to do that, at least make sure you have security logging turned on so it’s generating an alert when people are introducing new software into your infrastructure.”
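
As a hedged illustration of what that kind of periodic review can look like, an administrator holding a Microsoft Graph access token with sufficient directory read rights can enumerate the delegated consent grants in a tenant and eyeball anything unfamiliar. Token acquisition is elided here, and the field names should be checked against the Graph documentation:

import requests

# Sketch: list delegated OAuth2 permission grants in a tenant.
token = "<access-token>"  # needs directory read permissions; acquisition elided

resp = requests.get(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
    headers={"Authorization": "Bearer " + token},
)
resp.raise_for_status()
for grant in resp.json()["value"]:
    # clientId identifies the app's service principal; scope lists what it may do
    print(grant["clientId"], grant["scope"])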

Planet Debian: Ingo Juergensmann: XMPP - Prosody & Ejabberd

In my day job I'm responsible for maintaining the VoIP and XMPP infrastructure. That's approximately 40,000 phones and several thousand users on enterprise XMPP software, namely Cisco CUCM and IM&P on the server side and Cisco Jabber on the client side. There is also Cisco Webex and Cisco Telepresence infrastructure to maintain.

On the other hand, I'm running an XMPP server myself for a few users. It all started with ejabberd more than a decade ago. Then I moved to Openfire, because it was more modern and had a nice web GUI for administration. At some point Prosody appeared as the new shiny star, and it has been running for many users ever since, mostly without problems, but also without much love and attention.

It all started as "Let's see what this Jabber stuff is..." on a subdomain like jabber.domain.com. Only later did I discover the benefits of SRV records and the possibility of having the same address for mail, XMPP and SIP. So I began to provide XMPP accounts for some of my mail domains as well.
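
For reference, these are plain DNS SRV records (per RFC 6120); a sketch for a hypothetical example.com zone, with the usual client and server-to-server ports:

_xmpp-client._tcp.example.com. 3600 IN SRV 0 5 5222 xmpp.example.com.
_xmpp-server._tcp.example.com. 3600 IN SRV 0 5 5269 xmpp.example.com.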

A year ago I enabled XMPP for my Friendica node on Nerdica.net, the second largest Friendica node according to the-federation.info. Although there are hundreds of monthly active users on Friendica, only a handful are using XMPP. XMPP has had a hard time of it since Google and Facebook moved from open federation to walling in their user bases.

My personal impression is that there has been a lot of XMPP development in recent years, thanks to the Conversations client on Android and its Compliance Tester. With that tool it is quite easy to establish a common ground for the features today's users expect in a mobile world. There is also some news about XMPP clients on Apple iOS, but that's for another article.

This is about the server side, namely Prosody and Ejabberd. Of course there are already several excellent comparisons between these two servers, so this is just my personal opinion and the impressions I've gathered over the past two weeks.

Prosody:
As I have the most experience with Prosody, I'll start with it. Prosody has the advantage of being actively maintained and having lots of community modules to extend its functionality. This is a big win, but there is another side to that truth: you'll need to install and configure many contrib modules to pass 100% in the Compliance Tester, and some of those modules are not that well maintained. Another obstacle I faced with Prosody is the configuration style. Usually you have the main config file, where you configure common settings, modules for all virtual hosts, and components like PubSub, MUC and HTTP Upload, and then there are the config files for the virtual hosts, which feature the same kind of configuration. Important to know: order (apparently) matters! This can get confusing: components are similar to loading modules, using both for the same purpose can be, well, interesting, and configuring modules versus components can be challenging as well. When trying to get mod_http_upload working in the last few days, I found that a config that worked on one virtual host did not work on another host. This was when I thought I might give Ejabberd a chance...
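
To illustrate the module-versus-component split, here is a minimal sketch of a Prosody config (hostnames are placeholders, and exact option names depend on the module versions in use):

-- global section: modules loaded on every VirtualHost
modules_enabled = {
    "roster"; "saslauth"; "tls"; "carbons"; "mam";
}

VirtualHost "example.com"

-- mod_http_upload usually runs as its own component on a subdomain
Component "upload.example.com" "http_upload"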

Ejabberd:
Contrary to Prosody, there is a company behind Ejabberd, which is often perceived as a good thing that brings some stability to the project. However, when I joined the Ejabberd chat room, I learned within the first minutes, just from reading the chat log, that the main developer had left that company and the company itself seemed to have lost interest in Ejabberd. The people in the chat room were relaxed about it: it's not the end of the world, and there are other developers working on the code. So, no issue in the end, but it's not something you expect to read when you join a chat room for the first time. ;)
Contrary to Prosody, Ejabberd seems to be well prepared to pass the Compliance Tester without installing (too many) extra modules. Large sites such as conversations.im run on Ejabberd. It is also said that Ejabberd doesn't need server restarts for certain config changes, as Prosody does. The config file itself appears more straightforward and doesn't differentiate between modules and components, which makes it a little easier to understand.
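
For comparison, a similarly minimal sketch of Ejabberd's single YAML config; the option names here are from memory and worth double-checking against the documentation:

hosts:
  - "example.com"

modules:
  mod_carbons: {}
  mod_mam: {}
  mod_http_upload:
    put_url: "https://@HOST@:5443/upload"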

So far I haven't been able to spend much time with Ejabberd, but one other difference: there is a Debian repository on Prosody.im, while for Ejabberd there is no such repository. You'll have to use backports.debian.org for a newer version of Ejabberd on Debian Buster. It's up to you to decide which is better for you.
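
For reference, the usual backports procedure on Buster looks like this (a sketch; adjust the mirror to taste):

echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t buster-backports ejabberd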

I'm still somewhat undecided whether to proceed with Ejabberd and migrate from Prosody. The developer of Prosody is very helpful and responsive, and I like that. On the other hand, the folks in the Ejabberd chat rooms are very supportive as well. I like the flexibility and sheer number of contrib modules for Prosody, but then again it's hard to find the correct or best one to load and configure for a given task and to satisfy the Compliance Tester. Both servers do feature a web GUI for some basic tasks, but I like Ejabberd's more.

So, in the end, I'm also open to suggestions about either one. Some people will of course state that neither is the best way and that I should consider Matrix, Briar or some other solution, but that's maybe a topic for another article comparing XMPP with other options. This one is about XMPP server options: Prosody or Ejabberd. What do you prefer, and why?

 


TED: Planet Protectors: Notes from Session 3 of TEDWomen 2019

Singer-songwriter Shawnee brings her undeniable stage presence to TEDWomen 2019: Bold + Brilliant (Photo: Marla Aufmuth / TED)

The world is experiencing the consequences of climate change and the urgency couldn’t be more clear. In Session 3 of TEDWomen 2019, we dug deep into some of the most pressing environmental issues of our time — exploring solutions and the many ways people across the globe are fighting for change.

The event: TEDWomen 2019, Session 3: Planet Protectors, hosted by Whitney Pennington Rodgers and Chee Pearlman

When and where: Thursday, December 5, 2019, 11am PT, at La Quinta Resort & Club in La Quinta, California

Speakers: Hindou Oumarou Ibrahim, Kelsey Leonard, Shawnee, Colette Pichon Battle, Renee Lertzman, Jane Fonda

Music: Singer-songwriter Shawnee brings their undeniable stage presence and music of empowerment to the stage, performing two songs: “Way Home” and “Warrior Heart.”

The talks in brief:

Hindou Oumarou Ibrahim, environmental activist

Big idea: To combat climate change, we must combine our current efforts with those of indigenous people. Their rich, extensive knowledge base and long-standing relationship with the earth are the keys to our collective survival.

Why? Modern science and technology date back only a few hundred years, but indigenous knowledge spans thousands, says Hindou Oumarou Ibrahim. As she puts it: “For us, nature is our supermarket … our pharmacy … our school.” But climate change threatens indigenous people’s — and all of humanity’s — way of life; in her nomadic community, the social fabric is unraveling under the strain of its effects. To ensure resilience in the face of these developments, she suggests a marriage of new and old learnings to map and share crucial information for global survival. “We have 10 years to change it. 10 years is nothing,” she says. “So we need to act all together and we need to act right now.”

Quote of the talk: “I think if we put together all the knowledge systems that we have — science, technology, traditional knowledge — we can give the best of us to protect our peoples, to protect the planet, to restore the ecosystems that we are losing.”


“We need to fundamentally transform the way in which we value water,” says Kelsey Leonard. She speaks at TEDWomen 2019: Bold + Brilliant on December 5, 2019 in Palm Springs, California. (Photo: Marla Aufmuth / TED)

Kelsey Leonard, indigenous legal scholar and scientist

Big idea: Granting bodies of water legal personhood is the first step to addressing both our water crises and injustices —  especially those endured by indigenous people. 

Why? Water is essential to life. Yet in the eyes of the law, it remains largely unprotected, and our most vulnerable communities lack access to it, says Kelsey Leonard. As a representative of the Shinnecock Indian Nation, she shares the wisdom of her nokomis, or grandmother, on how we should honor this precious resource. We can start by asking who water is, in the same way that we might ask who our mother is. This perspective shift transforms the way we fundamentally think about water, she says, prompting us to grant water the same legal rights held by corporations. In this way, and by looking to indigenous laws, we can reconnect with the lakes, oceans and seas around us.

Quote of the talk: “We are facing a global water crisis. And if we want to address these crises in our lifetime, we need to change. We need to fundamentally transform the way in which we value water.”


Colette Pichon Battle, attorney and climate equity advocate

Big idea: Climate migration — the mass displacement of communities due to climate change — will escalate rapidly in coming years. We need to prepare by radically shifting both policies and mindsets.

Why? Scientists predict climate change will displace more than 180 million people by 2100. Colette Pichon Battle believes the world is not prepared for these population shifts. As a generational native of southern Louisiana and an attorney who has worked on post-Hurricane Katrina disaster recovery, Pichon Battle urges us to plan before it’s too late. How? By first acknowledging that climate change is a symptom of exploitative economic systems that privilege the few over the many and then working to transform them. We need to develop collective resilience by preparing communities to receive climate migrants, allocating resources and changing social attitudes. Lastly, she says, we must re-indigenize ourselves — committing to ecological equity and human rights as foundational tenets of a new climate-resilient society.

Quote of the talk: “All of this requires us to recognize a power greater than ourselves and a life longer than the one we will live. We must transform from a disposable, short-sighted reality of the individual to one that values the long-term life cycle of our collective humanity. Even the best of us are entangled in an unjust system. To survive, we will have to find our way to a shared liberation.”


Renee Lertzman, climate psychologist 

Big idea: We need to make our emotional well-being a fundamental part of the fight against climate change.

How? What’s happening to our planet seems overwhelming. And while we have tons of information on the science of climate change, we know much less about its emotional impact. Renee Lertzman has interviewed hundreds of people about how climate change makes them feel, and she wants to equip us with a toolkit to handle our climate grief and still be able to take action. Patience, compassion and kindness are qualities we need to deploy more often in our conversations about the crisis, she says. As climate events push us outside our “window of tolerance” — the stresses we can withstand without becoming overwhelmed — numbness and apathy are natural responses. Many people tell her: “I don’t know where to start.” She recommends practicing attunement: listening to our own feelings and those of others, accepting them without judgement and meeting our experiences with curiosity. Whether we’re with a few friends or at a larger climate action gathering, remembering that we are human is a key ingredient in the fight for our world.

Quote of the talk: “These are hard issues. This is a hard moment to be a human being. We’re waking up.”


Civil disobedience is becoming a new normal, says actor and activist Jane Fonda. She speaks with host Pat Mitchell about Fire Drill Fridays, her weekly climate demonstrations, at TEDWomen 2019: Bold + Brilliant on December 5, 2019 in Palm Springs, California. (Photo: Marla Aufmuth / TED)

Jane Fonda, actor, author and activist

Big idea: In the wake of climate change, protest is becoming a new normal — at least until we see the changes we need.

Why? In a video interview with TEDWomen curator Pat Mitchell, Fonda discussed Fire Drill Fridays, the weekly demonstrations on Capitol Hill she leads in partnership with Greenpeace. Since moving to Washington D.C. in September, Fonda has staged a sit-in at the Hart Senate Office Building on Capitol Hill every Friday to protest the extraction of fossil fuels. At age 81, she has been arrested multiple times and spent a night in jail — and her actions are inspiring people around the world to host their own Fire Drill Fridays. But, she says, we don’t need to get arrested to raise awareness; there are many other ways to put pressure on lawmakers and hold governments accountable. Read a full recap of her interview here.

Quote of the talk: “I’m not leading. It’s the young people, it’s the students, that are leading. It’s always the young people that step up with the courage.”

Planet Debian: Enrico Zini: Staticsite for blogging

Build this blog in under one minute

I just released staticsite version 1.4, dedicated to creating a blog.


After reorganising the documentation, I decided to write a simple tutorial showing how to get a new blog started.

The goal of this release was to make it so that the tutorial would be as simple as possible: the result is "A new blog in under one minute".

Once staticsite is installed [1], one can start a new blog by copypasting a short text file, then just adding markdown files anywhere in its directory. Staticsite can then serve a live preview of the site, automatically updated as pages are saved, and build an HTML version ready to be served.
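
In outline, the workflow looks roughly like this; this is a sketch rather than the authoritative steps, which are in the tutorial, and the exact subcommand names may differ:

mkdir myblog && cd myblog
# copy the short settings file from the tutorial here
ssite serve   # live preview, rebuilt as pages are saved
ssite build   # write the HTML version, ready to be served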

I enjoyed picking a use case to drive a release. Next up is going to be "use staticsite to view a git repository, and preview its documentation". I already use it for that; let's see what comes out after polishing it for that use case.


[1] I just uploaded staticsite 1.4.1 to Debian Unstable.

Worse Than Failure: Y-Ok

Twenty years out, people have a hard time remembering that Y2K was an actual thing, an actual problem, and it was only solved because people recognized the danger well ahead of time, and invested time and effort into mitigating the worst of it. Disaster didn’t come to pass because people worked their butts off to avoid it.

Gerald E was one of those people. He worked for a cellular provider as a customer service rep, providing technical support and designing the call-center scripts for providing that support. As 1999 cranked on, Gerald was pulled in to the Y2K team to start making support plans for the worst case scenarios.

The first scenario? Handling calls when “all phone communication stopped working”. Gerald didn’t see much point in building a script for that scenario, but he gamely did his best to pad “we can’t answer the phones if they don’t ring” into a “script”.

There were many other scenarios, though, and Gerald was plenty busy. Since he was in every meeting with the rest of the Y2K team, he got to watch their preparedness increase in real time, as different teams did their tests and went from red-to-green in the test results. A few weeks before the New Year, most everything was green.

Y2K fell on a Saturday. As a final preparation, the Y2K team decided to do a final dry-run test, end-to-end, on Wednesday night. They already ran their own internal NTP server, which every device on the network pulled from in one way or another, so it was easy to set the clock forward. They planned to set the clock so that on December 29th at 22:30 wall-clock time, the time server would report January 1st, 00:00.
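
The arithmetic behind that offset is simple enough; a quick sketch in Python, using the dates from the plan:

from datetime import datetime

wall_clock = datetime(1999, 12, 29, 22, 30)  # actual time of the dry run
reported = datetime(2000, 1, 1, 0, 0)        # what the NTP server should claim
print(reported - wall_clock)                 # 2 days, 1:30:00 ahead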

The Y2K team gathered to watch their clock count down, and had plans to watch the changeover happen and then go party like it was 1999 while they still had time.

At 22:29, all systems were green. At 22:30, when the time server triggered Y2K, the entire building went dark. There was no power. The backup generator didn’t kick on. The UPSes didn’t kick over. Elevator, phones, HVAC: everything was down.

No one had expected this catastrophic a failure. The incident room was on the 7th floor of the building. The server room was in the basement. Gerald, as the young and spry CSR, was handed a flashlight and ended up spending the next few hours as the runner, relaying information between the incident room and the server room.

In the wee hours of the morning, and after Gerald got his cardio for the next year, the underlying problem became clear. The IT team had a list of IT assets. They had triaged them all, prioritized their testing, and tested everything.

What no one had thought to do was inventory the assets managed by the building services team. Those assets included a bunch of industrial control systems which managed little things, like the building’s power system. Nothing from building services had ended up in the test plan. The backup generator detected the absence of power and kicked on, but the building’s failure meant that the breakers tripped and refused to let that power get where it was needed. Similar issues foiled their large-scale UPS: they could only get the servers powered up by plugging them directly into battery backups.

It was well into the morning on December 30th when they started scrambling to solve the problem. Folks were called back from vacation, electricians were called in and paid exorbitant overtime. It was an all-hands push to get the building wired up in such a way that it wouldn’t just shut down.

It was a straight crunch all the way until New Year’s Eve, but when the clock hit midnight, nothing happened.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Cory Doctorow: Science fiction and the unforeseeable future: In the 2020s, let’s imagine better things

In my latest podcast (MP3), I read my Globe and Mail editorial, Science fiction and the unforeseeable future: In the 2020s, let’s imagine better things, where I reflect on what science fiction can tell us about the 2020s for the Globe‘s end-of-the-decade package; I wrote about how science fiction can’t predict the future, but might inspire it, and how the dystopian malaise of science fiction can be turned into an inspiring tale of “adversity met and overcome – hard work and commitment wrenching a limping victory from the jaws of defeat.”

I describe a scenario for a “Canadian miracle”: “As the vast majority of Canadians come to realize the scale of the crisis, they are finally successful in their demand that their government address it unilaterally, without waiting for other countries to agree.”

Canada goes on a war footing: Full employment is guaranteed to anyone who will work on the energy transition – building wind, tide and solar facilities; power storage systems; electrified transit systems; high-speed rail; and retrofits to existing housing stock for an order-of-magnitude increase in energy and thermal efficiency. All of these are entirely precedented – retrofitting the housing stock is not so different from the job we undertook to purge our homes of lead paint and asbestos, and the cause every bit as urgent.

How will we pay for it? The same way we paid for the Second World War: spending the money into existence (much easier now that we can do so with a keyboard rather than a printing press), then running a massive campaign to sequester all that money in war bonds so it doesn’t cause inflation.

The justification for taking such extreme measures is obvious: a 1000 Year Reich is a horror too ghastly to countenance, but rendering our planet incapable of sustaining human life is even worse.

MP3

Cory Doctorow: Permitting the growth of monopolies is a form of government censorship

In my latest Locus column, Inaction is a Form of Action, I discuss how the US government’s unwillingness to enforce its own anti-monopoly laws has resulted in the dominance of a handful of giant tech companies who get to decide what kind of speech is and isn’t allowed — that is, how the USG’s complicity in the creation of monopolies allows for a kind of government censorship that somehow does not violate the First Amendment.

We’re often told that “it’s not censorship when a private actor tells you to shut up on their own private platform” — but when the government decides not to create any public spaces (say, by declining to create publicly owned internet infrastructure) and then allows a handful of private companies to dominate the privately owned world of online communications, then those companies’ decisions about who may speak and what they may say become a form of government speech regulation — albeit one at arm’s length.

I don’t think that the solution to this is regulating the tech platforms so they have better speech rules — I think it’s breaking them up and forcing them to allow interoperability, so that their speech rules no longer dictate what kind of discourse we’re allowed to have.

Imagine two different restaurants: one prohibits any discussion of any subject the management deems “political” and the other has no such restriction. It’s easy to see that we’d say that you have more right to freely express yourself in the Anything Goes Bistro than in the No Politics at the Table Diner across the street.

Now, the house rules at the No Politics at the Table Diner have implications for free speech, but these are softened by the fact that you can always eat at the Anything Goes Bistro, and, of course, you can always talk politics when you’re not at a restaurant at all: on the public sidewalk (where the First Amendment shields you from bans on political talk), in your own home, or even in the No Politics Diner, assuming you can text covertly under the tablecloth when the management isn’t looking.

Depending on your town and its dining trends, the house rules at The No Politics Diner might matter more or less. If No Politics has the best food in town and everywhere else has a C rating from the health department, then the No Politics Diner’s rules matter a lot more than if No Politics is a greasy spoon that no one eats in if they can get a table elsewhere.

What happens if some deep-pocketed private-equity types hit on a strategy to turn The No Politics Diner into a citywide phenomenon? They merge The No Politics Diner with all the other restaurants in town, spending like drunken sailors. Once that’s accomplished, the NPD cartel goes after the remaining competition: any holdouts, and anyone who tries to open a rival is given the chance to sell out cheap, or be driven out of business. NPD has lots of ways to do this: for example, they’ll open a rival on the same block and sell food below cost to drive the refuseniks out of business (they’re not above sending spies to steal their recipes, either!). Even though some people resent NPD and want to talk politics, there’s not enough people willing to pay a premium for their dinner to keep the Anything Goes Bistro in business.

Inaction is a Form of Action [Cory Doctorow/Locus]

Krebs on Security: The Hidden Cost of Ransomware: Wholesale Password Theft

Organizations in the throes of cleaning up after a ransomware outbreak typically will change passwords for all user accounts that have access to any email systems, servers and desktop workstations within their network. But all too often, ransomware victims fail to grasp that the crooks behind these attacks can and frequently do siphon every single password stored on each infected endpoint. The result of this oversight may offer attackers a way back into the affected organization, access to financial and healthcare accounts, or — worse yet — key tools for attacking the victim’s various business partners and clients.

In mid-November 2019, Wisconsin-based Virtual Care Provider Inc. (VCPI) was hit by the Ryuk ransomware strain. VCPI manages the IT systems for some 110 clients that serve approximately 2,400 nursing homes in 45 U.S. states. VCPI declined to pay the multi-million dollar ransom demanded by their extortionists, and the attack cut off many of those elder care facilities from their patient records, email and telephone service for days or weeks while VCPI rebuilt its network.

Just hours after that story was published, VCPI chief executive and owner Karen Christianson reached out to say she hoped I would write a follow-up piece about how they recovered from the incident. My reply was that I’d consider doing so if there was something in their experience that I thought others could learn from their handling of the incident.

I had no inkling at the time of how much I would learn in the days ahead.

EERIE EMAILS

On December 3, I contacted Christianson to schedule a follow-up interview for the next day. On the morning of Dec. 4 (less than two hours before my scheduled call with VCPI and more than two weeks after the start of their ransomware attack) I heard via email from someone claiming to be part of the criminal group that launched the Ryuk ransomware inside VCPI.

That email was unsettling because its timing suggested that whoever sent it somehow knew I was going to speak with VCPI later that day. This person said they wanted me to reiterate a message they’d just sent to the owner of VCPI stating that their offer of a greatly reduced price for a digital key needed to unlock servers and workstations seized by the malware would expire soon if the company continued to ignore them.

“Maybe you chat to them lets see if that works,” the email suggested.

The anonymous individual behind that communication declined to provide proof that they were part of the group that held VCPI’s network for ransom, and after an increasingly combative and personally threatening exchange of messages soon stopped responding to requests for more information.

“We were bitten with releasing evidence before hence we have stopped this even in our ransoms,” the anonymous person wrote. “If you want proof we have hacked T-Systems as well. You may confirm this with them. We havent [sic] seen any Media articles on this and as such you should be the first to report it, we are sure they are just keeping it under wraps.” Security news site Bleeping Computer reported on the T-Systems Ryuk ransomware attack on Dec. 3.

In our Dec. 4 interview, VCPI’s acting chief information security officer — Mark Schafer, CISO at Wisconsin-based SVA Consulting — confirmed that the company received a nearly identical message that same morning, and that the wording seemed “very similar” to the original extortion demand the company received.

However, Schafer assured me that VCPI had indeed rebuilt its email network following the intrusion and strictly used a third-party service to discuss remediation efforts and other sensitive topics.

‘LIKE A COMPANY BATTLING A COUNTRY’

Christianson said several factors stopped the painful Ryuk ransomware attack from morphing into a company-ending event. For starters, she said, an employee spotted suspicious activity on their network in the early morning hours of Saturday, Nov. 16. She said that employee then immediately alerted higher-ups within VCPI, who ordered a complete and immediate shutdown of the entire network.

“The bottom line is at 2 a.m. on a Saturday, it was still a human being who saw a bunch of lights and had enough presence of mind to say someone else might want to take a look at this,” she said. “The other guy he called said he didn’t like it either and called the [chief information officer] at 2:30 a.m., who picked up his cell phone and said shut it off from the Internet.”

Schafer said another mitigating factor was that VCPI had contracted with a third-party roughly six months prior to the attack to establish off-site data backups that were not directly connected to the company’s infrastructure.

“The authentication for that was entirely separate, so the lateral movement [of the intruders] didn’t allow them to touch that,” Schafer said.

Schafer said the move to third-party data backups coincided with a comprehensive internal review that identified multiple areas where VCPI could harden its security, but that the attack hit before the company could complete work on some of those action items.

“We did a risk assessment which was pretty much spot-on, we just needed more time to work on it before we got hit,” he said. “We were doing the right things, just not fast enough. If we’d had more time to prepare, it would have gone better. I feel like we were a company battling a country. It’s not a fair fight, and once you’re targeted it’s pretty tough to defend.”

WHOLESALE PASSWORD THEFT

Just after receiving a tip from a reader about the ongoing Ryuk infestation at VCPI, KrebsOnSecurity contacted Milwaukee-based Hold Security to see if its owner Alex Holden had any more information about the attack. Holden and his team had previously intercepted online traffic between and among multiple ransomware gangs and their victims, and I was curious to know if that held true in the VCPI attack as well.

Sure enough, Holden quickly sent over several logs of data suggesting the attackers had breached VCPI’s network on multiple occasions over the previous 14 months.

“While it is clear that the initial breach occurred 14 months ago, the escalation of the compromise didn’t start until around November 15th of this year,” Holden said at the time. “When we looked at this in retrospect, during these three days the cybercriminals slowly compromised the entire network, disabling antivirus, running customized scripts, and deploying ransomware. They didn’t even succeed at first, but they kept trying.”

Holden said it appears the intruders laid the groundwork for the attack on VCPI using Emotet, a powerful malware tool typically disseminated via spam.

“Emotet continues to be among the most costly and destructive malware,” reads a July 2018 alert on the malware from the U.S. Department of Homeland Security. “Its worm-like features result in rapidly spreading network-wide infection, which are difficult to combat.”

According to Holden, after using Emotet to prime VCPI’s servers and endpoints for the ransomware attack, the intruders deployed Trickbot, a banking trojan often delivered by Emotet and used to download other malware and harvest passwords from infected systems.

Indeed, Holden shared records of communications from VCPI’s tormentors suggesting they’d unleashed Trickbot to steal passwords from infected VCPI endpoints that the company used to log in at more than 300 Web sites and services, including:

-Identity and password management platforms Auth0 and LastPass
-Multiple personal and business banking portals;
-Microsoft Office365 accounts
-Direct deposit and Medicaid billing portals
-Cloud-based health insurance management portals
-Numerous online payment processing services
-Cloud-based payroll management services
-Prescription management services
-Commercial phone, Internet and power services
-Medical supply services
-State and local government competitive bidding portals
-Online content distribution networks
-Shipping and postage accounts
-Amazon, Facebook, LinkedIn, Microsoft, Twitter accounts

Toward the end of my follow-up interview with Schafer and VCPI’s Christianson, I shared Holden’s list of sites for which the attackers had apparently stolen internal company credentials. At that point, Christianson abruptly ended the interview and got off the line, saying she had personal matters to attend to. Schafer thanked me for sharing the list, noting that it looked like VCPI probably now had a “few more notifications to do.”

Moral of the story: Companies that experience a ransomware attack — or for that matter any type of equally invasive malware infestation — should assume that all credentials stored anywhere on the local network (including those saved inside Web browsers and password managers) are compromised and need to be changed.

Out of an abundance of caution, this process should be done from a pristine (preferably non-Windows-based) system that does not reside within the network compromised by the attackers. In addition, the strongest available form of multi-factor authentication should be used to secure these accounts.

Cory Doctorow: Machine learning is innately conservative and wants you to either act like everyone else, or never change

Next month, I’m giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others!

Preparatory to that event, I wrote an op-ed for the LA Review of Books on AI and its intrinsic conservativism, building on Molly Sauter’s excellent 2017 piece for Real Life.

Sauter’s insight in that essay: machine learning is fundamentally conservative, and it hates change. If you start a text message to your partner with “Hey darling,” the next time you start typing a message to them, “Hey” will beget an autosuggestion of “darling” as the next word, even if this time you are announcing a break-up. If you type a word or phrase you’ve never typed before, autosuggest will prompt you with the statistically most common next phrase from all users (I made a small internet storm in July 2018 when I documented autocomplete’s suggestion in my message to the family babysitter, which paired “Can you sit” with “on my face and”).

This conservativeness permeates every system of algorithmic inference: search for a refrigerator or a pair of shoes and they will follow you around the web as machine learning systems “re-target” you while you move from place to place, even after you’ve bought the fridge or the shoes. Spend some time researching white nationalism or flat earth conspiracies and all your YouTube recommendations will try to reinforce your “interest.” Follow a person on Twitter and you will be inundated with similar people to follow. Machine learning can produce very good accounts of correlation (“this person has that person’s address in their address-book and most of the time that means these people are friends”) but not causation (which is why Facebook constantly suggests that survivors of stalking follow their tormentors who, naturally, have their targets’ addresses in their address books).

Our Conservative AI Overlords Want Everything to Stay the Same [Cory Doctorow/LA Review of Books]

(Image: Groundhog Day/Columbia Pictures)

LongNow: Kevin Kelly Appears on Jason Silva’s Flow Sessions Podcast

Long Now board member Kevin Kelly recently sat down with Jason Silva on his Flow Sessions podcast for a wide-ranging interview about the deeper sides of technology. 

I’m part of the Long Now Foundation, which is trying to take a long-term view of things. And I realized recently that that long view is really taking the view of systems; it’s a systems viewpoint. And if you look at things as systems, you automatically have a kind of overview. The big view. And I’ve been looking at technology with the big view. To get that big view you have to get out of yourself. You have to transcend. You have to have this distant perspective. And yet, for it to be valid, you have to remain honest and connected on the ground or else it doesn’t make sense. And so trying to stand with one foot in the real here-and-now and one foot with the cosmic billions—if you can do that, then I think you an be helpful in terms of looking at, “Hey, this is where we’re going. Hey, this is where we’ve been. Hey, this is where we are in the context of the cosmos.” 

Kevin Kelly

You can find the podcast on Spotify here, or watch it on YouTube here.

Planet Debian: Reproducible Builds: Reproducible Builds in December 2019

Welcome to the December 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
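
At its simplest, that verification amounts to rebuilding from the same source and comparing checksums; a minimal sketch (paths are placeholders):

import hashlib

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two independent rebuilds of the same source should be bit-for-bit identical.
assert sha256("build1/hello.deb") == sha256("build2/hello.deb")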

In this report for December, we cover:

  • Media coverage: A Google whitepaper, The Update Framework graduates within the Cloud Native Computing Foundation, etc.
  • Reproducible Builds Summit 2019: What happened at our recent meetup?
  • Distribution work: The latest reports from Arch, Debian and openSUSE, etc.
  • Software development: Patches, patches, patches…
  • Mailing list summary
  • Contact: How to contribute

If you are interested in contributing to our project, please visit the Contribute page on our website.


Media coverage

Google published Binary Authorization for Borg, a whitepaper on how they reduce exposure of user data to unauthorised code as well as methods for verifying code provenance using their Borg cluster manager. In particular, the paper notes how they attempt to limit their “insider risk”, i.e. the potential for internal personnel to use organisational credentials or knowledge to perform malicious activities.

The Linux Foundation announced that The Update Framework (TUF) has graduated within the Cloud Native Computing Foundation (CNCF) and thus becomes the first specification and first security-focused project to reach the highest maturity level in that group. TUF is a technology that secures software update systems initially developed by Justin Cappos at the NYU Tandon School of Engineering.

Andrew “bunnie” Huang published a blog post asking Can We Build Trustable Hardware? Whilst it concludes pessimistically that “open hardware is precisely as trustworthy as closed hardware” it does mention that reproducible builds can:

Enable any third-party auditor to download, build, and confirm that the program a user is downloading matches the intent of the developers.

At the 36th Chaos Communication Congress (36C3) in Leipzig, Hannes Mehnert from the MirageOS project gave a presentation called Leaving legacy behind, which talks generally about the MirageOS system offering a potential alternative and minimalist approach to security, and has a section on reproducible builds (at 38m41s).


Reproducible Builds Summit 2019

We held our fifth annual Reproducible Builds summit between the 1st and 8th December at Priscilla, Queen of the Medina in Marrakesh, Morocco.

The aim of the meeting was to spend time discussing and working on Reproducible Builds with a widely diverse agenda and the event was a huge success.

During our time together, we updated and exchanged the status of reproducible builds in our respective projects, improved collaboration between and within these efforts, expanded the scope and reach of reproducible builds to yet more interested parties, established and continued strategic long-term thinking in a way not typically possible via remote channels, and brainstormed designs for tools to enable end-users to get the most benefit from reproducible builds.

Outside of these achievements in the hacking sessions kpcyrd made a breakthrough in Alpine Linux by producing the first reproducible package — specifically, py3-uritemplate — in this operating system. After this, progress was accelerated and by the denouement of our meeting the reproducibility status in Alpine reached 94%. In addition, Jelle van der Waa, Mattia Rizzolo and Paul Spooren discussed and implemented substantial changes to the database that underpins the testing framework that powers tests.reproducible-builds.org in order to abstract the schema in a distribution agnostic way, for example to allow submitting the results of attempts to verify officially distributed Arch Linux packages.

Lastly, Jan Nieuwenhuizen, David Terry and Vagrant Cascadian used three entirely-separate distributions (GNU Guix, NixOS and Debian) to produce a bit-for-bit identical GNU Mes binary despite using three different major versions of GCC and other toolchain components to build an initial binary, which was then used to build a final, bit-for-bit identical, binary of Mes.

The event was held at Priscilla, Queen of the Medina in Marrakesh, a location sui generis that stands for gender equality, female empowerment and the engagement of vulnerable communities locally through cultural activism. The event was open to anybody interested in working on Reproducible Builds issues, with or without prior experience.

A number of reports and blog posts have already been written, including for:

… as well as a number of tweets including ones from Jan Nieuwenhuizen celebrating progress in GNU Guix [] and Hannes [].


Distribution work

Within Debian, Chris Lamb categorised a large number of packages and issues in the Reproducible Builds notes.git repository, including identifying and creating markdown_random_email_address_html_entities and nondeterministic_devhelp_documentation_generated_by_gtk_doc.

In openSUSE, Bernhard published his monthly Reproducible Builds status update and filed the following patches:

Bernhard also filed bugs against:

The Yocto Project announced that it is running continuous tests on the reproducibility of its output which can observed through the oe-selftest runs on their build server. This was previously limited to just the mini images but this has now been extended to the larger graphical images. The test framework is available for end users to use against their own builds. Of particular interest is the production of binary identical results — despite arbitrary build paths — to allow more efficient builds through reuse of previously built objects, a topic covered in more depth in a recent LWN article.

In Arch Linux, the database structure on tests.reproducible-builds.org was changed and the testing jobs updated to match and work has been started on a verification test job which rebuilds the officially released packages and verifies if they are reproducible or not. In the “hacking” time after our recent summit, several key packages were made reproducible, raising the amount of reproducible packages by approximately 1.5%. For example libxslt was patched with the patch originating from Debian and openSUSE.


Software development

diffoscope

diffoscope is our in-depth and content-aware diff-like utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
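
Typical usage is straightforward: point diffoscope at two artifacts and it recursively unpacks and compares them; the --html option writes a browsable report:

diffoscope old.deb new.deb
diffoscope --html report.html old.iso new.iso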

This month, diffoscope version 134 was uploaded to Debian unstable by Chris Lamb. He also made the following changes to diffoscope itself, including:

  • Always pass a filename with a .zip extension to zipnote, otherwise it will return with a UNIX exit code of 9 and we fall back to displaying a binary difference for the entire file. []
  • Include the libarchive file listing for ISO images to ensure that timestamps – and not just dates – are visible in any difference. (#81)
  • Ensure that our autopkgtests are run with our pyproject.toml present for the correct black source code formatter settings. (#945993)
  • Rename the text_option_with_stdiout test to text_option_with_stdout [] and tidy some unnecessary boolean logic in the ISO9660 tests [].

In addition, Eli Schwartz fixed an error in the handling of the progress bar [] and Vagrant Cascadian added external tool reference for the zstd compression format for GNU Guix [] as well as updated the version to 133 [] and 134 [] in that distribution.

Project website & documentation

There was more work performed on our website this month, including:

In addition, Paul Spooren added a new page overviewing our Continuous Tests overview [], Hervé Boutemy made a number of improvements to our Java and JVM documentation expanding and clarifying various definitions as well as adding external links [][][][] and Mariana Moreira added a .jekyll-cache entry to the .gitignore file [].

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:

    • Alpine:

      • Indicate where Alpine is being built on the node overview page. []
      • Turn off debugging output. []
      • Sleep longer if no packages are to be built. []
    • Misc:

      • Add some help text to our script to powercycle IONOS (neé Profitbricks) nodes. []
      • Install mosh everywhere. []
      • Only install ripgrep on Debian nodes. []
  • Mattia Rizzolo:

    • Arch Linux:

      • Normalise the suite names in the database. [][][][][]
      • Drop an unneeded line in the scheduler. []
    • Debian:

      • Fix a number of SQL errors. [][][][]
      • Use the debian.debian_support Python library over apt_pkg to perform version comparisons. []
    • Misc:

      • Permit other distributions to use our web-based package scheduling script. []
      • Reformat our power-cycling script using Black and use the Python logging module. []
      • Introduce a dsources database view to simplify some queries [] and add a build_type field to support both “doublerebuilds” and verification rebuilds [].
      • Move (almost) all the timestamps in the database schema from raw strings to “real” timestamp data types. []
      • Only block bots on jenkins.debian.net and tests.reproducible-builds.org, not any other sites. []

  • kpcyrd (for Alpine Linux):

    • Patch/install the abuild utility to one that is reproducible. [][][][]
    • Bump the number of build workers and collect garbage more frequently. [][][][]
    • Classify and display build results consistently. [][][]
    • Ensure that tmux and ripgrep is installed. [][]
    • Support building packages in the future. [][][]

Lastly, Paul Spooren removed the project overview from the bottom-left of the generated pages [] and the usual node maintenance was performed by Holger Levsen [] and Mattia Rizzolo [][].


Mailing list summary

There was considerable activity on our mailing list this month. Firstly, Bernhard M. Wiedemann posted a thread asking What is the goal of reproducible builds? in order to encourage refinements, extra questions and other contributions to what an end-user experience of reproducible builds should or even could look like.

Eli Schwartz then resurrected a previous thread titled Progress in rpm and openSUSE in 2019 to clarify some points around Arch Linux and Python package installation. Hans-Christoph Steiner followed-up to a separate thread originally started by Hervé Boutemy announcing the status of .buildinfo file support in the Java ecosystem, and Paul Spooren then informed the list that Google Summer of Code is now looking for projects for the latest cohort.

Lastly, Lars Wirzenius enquired about the status of Reproducible system images which resulted in a large number of responses.


Contact

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:



This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Hervé Boutemy, Holger Levsen, Jelle van der Waa, Lukas Puehringer and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Cryptogram: Mailbox Master Keys

Here's a physical-world example of why master keys are a bad idea. It's a video of two postal thieves using a master key to open apartment building mailboxes.

Changing the master key for physical mailboxes is a logistical nightmare, which is why this problem won't be fixed anytime soon.

Planet Debian: Julien Danjou: Atomic lock-free counters in Python


At Datadog, we're really into metrics. We love them, we store them, but we also generate them. To do that, you need to juggle integers that get incremented, also known as counters.

While having an integer that changes its value sounds dull, it might not be without some surprises in certain circumstances. Let's dive in.

The Straightforward Implementation

class SingleThreadCounter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

Pretty easy, right?

Well, not so fast, buddy. As the class name implies, this works fine with a single-threaded application. Let's take a look at the instructions in the increment method:

>>> import dis
>>> dis.dis("self.value += 1")
  1           0 LOAD_NAME                0 (self)
              2 DUP_TOP
              4 LOAD_ATTR                1 (value)
              6 LOAD_CONST               0 (1)
              8 INPLACE_ADD
             10 ROT_TWO
             12 STORE_ATTR               1 (value)
             14 LOAD_CONST               1 (None)
             16 RETURN_VALUE

The self.value += 1 line of code expands to several distinct bytecode operations, and Python can suspend the thread between any of them to switch to a different thread, one that could also be incrementing the counter.

Indeed, the += operation is not atomic: one needs a LOAD_ATTR to read the current value of the counter, then an INPLACE_ADD to add 1, and finally a STORE_ATTR to store the result in the value attribute.

If another thread executes the same code at the same time, you could end up with adding 1 to an old value:

Thread-1 reads the value as 23
Thread-1 adds 1 to 23 and get 24
Thread-2 reads the value as 23
Thread-1 stores 24 in value
Thread-2 adds 1 to 23
Thread-2 stores 24 in value

Boom. Your Counter class is not thread-safe. 😭
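
You can try to observe this with a small experiment, using the SingleThreadCounter class above (a sketch; whether increments are actually lost varies with the CPython version and its thread switch interval):

import threading

counter = SingleThreadCounter()

def worker():
    for _ in range(100000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With lost updates, this prints less than 400000.
print(counter.value)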

The Thread-Safe Implementation

To make this thread-safe, a lock is necessary. We need a lock each time we want to increment the value, so we are sure the increments are done serially.

import threading

class FastReadCounter(object):
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()
        
    def increment(self):
        with self._lock:
            self.value += 1

This implementation is thread-safe. There is no way for multiple threads to increment the value at the same time, so there's no way that an increment is lost.

The only downside of this counter implementation is that you need to lock the counter each time you need to increment. There might be much contention around this lock if you have many threads updating the counter.

On the other hand, if it's barely updated and often read, this is an excellent implementation of a thread-safe counter.

A Fast Write Implementation

There's a way to implement a thread-safe counter in Python that does not need to be locked on write. It's a trick that should only work on CPython because of the Global Interpreter Lock.

While everybody is unhappy with it, this time the GIL is going to help us. When a C function is executing and does not do any I/O, it cannot be interrupted by any other thread. It turns out the standard library has a counter-like class implemented in C: itertools.count.

We can use this count class as our advantage by avoiding the need to use a lock when incrementing the counter.

If you read the documentation for itertools.count, you'll notice that there's no way to read the current value of the counter. This is the tricky part, and it's where we'll need a lock to work around the limitation. Here's the code:

import itertools
import threading

class FastWriteCounter(object):
    def __init__(self):
        self._number_of_read = 0
        self._counter = itertools.count()
        self._read_lock = threading.Lock()

    def increment(self):
        next(self._counter)

    def value(self):
        with self._read_lock:
            value = next(self._counter) - self._number_of_read
            self._number_of_read += 1
        return value

The increment code is quite simple in this case: the counter is just incremented without any lock. The GIL protects concurrent access to the internal data structure in C, so there's no need for us to lock anything.

On the other hand, Python does not provide any way to read the value of an itertools.count object. We need to use a small trick to get the current value. The value method increments the counter and then gets the value while subtracting the number of times the counter has been read (and therefore incremented for nothing).
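
To make the bookkeeping concrete, here's a small worked example using the class above:

c = FastWriteCounter()
c.increment()  # consumes 0 from the underlying itertools.count
c.increment()  # consumes 1
c.increment()  # consumes 2

print(c.value())  # next() yields 3; 3 - 0 prior reads = 3
print(c.value())  # next() yields 4; 4 - 1 prior read  = 3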

This counter is, therefore, lock-free for writing, but not for reading: the opposite of our previous implementation.

Measuring Performance

After writing all of this code, I wanted to check how the different implementations compared in speed. Using the timeit module and my fancy laptop, I've measured the performance of reading and writing to each counter.

Operation   SingleThreadCounter   FastReadCounter   FastWriteCounter
increment   176 ns                390 ns            169 ns
value       26 ns                 26 ns             529 ns
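
(For the curious: the numbers can be collected along these lines. This is only a sketch using the classes defined above; absolute values will of course differ per machine.)

import timeit

for cls in (SingleThreadCounter, FastReadCounter, FastWriteCounter):
    c = cls()
    # timeit accepts a callable directly: one million calls, then
    # convert total seconds to nanoseconds per call.
    total = timeit.timeit(c.increment, number=1_000_000)
    print(f"{cls.__name__}.increment: {total * 1000:.0f} ns/call")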

I'm glad that the performance measurements in practice match the theory 😅. Both SingleThreadCounter and FastReadCounter have the same performance for reading: since both read a plain attribute, that makes perfect sense.

The same goes for SingleThreadCounter and FastWriteCounter, which have the same performance for incrementing the counter: both add 1 to an integer without taking a lock, which keeps the code fast.

Conclusion

It's pretty obvious, but if you're using a single-threaded application and do not have to care about concurrent access, you should stick to using a simple incremented integer.

For fun, I've published a Python package named fastcounter that provides those classes. The sources are available on GitHub. Enjoy!

Sam VargheseWith two-vote majority, Morrison still fears he will lose leadership

When Scott Morrison led the Liberal-National Coalition to victory in the last federal election in May, he was greeted as some kind of superman, mainly because all the polls had predicted a Labor win, and by a substantial margin too.

All the political pundits crowed that this win gave the Australian Prime Minister complete authority to govern as he wished, and the chance to implement policies of his liking.

Nobody pointed out that after the dust had settled, Morrison still only had a majority of two, just one more than his predecessor Malcolm Turnbull enjoyed for much of his tenure.

And that two-seat majority is the reason why Morrison has seemed to be deaf, dumb and blind to the horrific fires that have swept the country and that will continue to do so for a couple of months more.

He cannot even entertain the thought of making a neutral statement on the question of climate change as he fears that could well open the door to another leadership challenge and the loss of the prime ministership.

One is unsure why mainstream political writers have not noticed this simple fact. The group of MPs who are closely tied to the coal industry and who have been behind the Coalition’s leadership woes in 2018 are still very much there.

Tony Abbott may have gone but everyone else is around and they are more than ready to rise up and agitate again if Morrison even breathes a word about energy policy.

So Morrison has no choice but to pretend that nothing has changed from his earlier stance that there is no definite evidence that climate change is responsible for the intensity of the fires that have been seen.

As a marketing man, he has responded crudely, but then that is par for the course. When have you seen someone in marketing act in a manner that can be called empathetic?

Leadership hopefuls like Christian Porter and Peter Dutton may well be taking the temperature of the party and one should not be surprised if another leadership challenge is seen once the politicians return to Canberra.

Everyone wants to be prime minister. Whether anyone can actually lead is an entirely different question.

Worse Than FailureCodeSOD: Yet Another Master of Evil

As a general rule, if you find yourself writing an extension system for your application, stop and do something else. It's almost always a case of YAGNI: you ain't gonna need it.

George is a "highly paid consultant", and considers himself one of the "good ones": he delivers well tested, well documented, and clean code to his clients. His peer, Gracie, on the other hand… is a more typical representative of the HPC class.

George and Gracie found themselves with a problem: based on the contents of a configuration file, they needed to decide what code to execute. Now, you might be thinking that some kind of conditional statement or maybe some sort of object-oriented inheritance thing would do the job.

There were five different code paths, and no one really expected to see those code paths change significantly. Gracie, who was identified as "the architect" on the responsibility matrix for the project, didn't want to write five different ways to do a similar task, so instead, she wrote one way to do all those tasks.

Here's the YAML configuration file that her efforts produced:

use.cases:
  0:
    description: Default product
    user_file_name_cond: (\'product_start\' in file_name) and (\'pilot\' not in file_name)
    user_file_name_pf_truncate: '.sig'
    data_files: False
    downstream_file_name_templates:
      signal: _product_start_<platform>.sig
    downstream_script: another_script.sh
  1:
    description: Full forced product
    user_file_name_cond: (\'force_all_products\' in file_name) and (\'pilot\' not in file_name)
    user_file_name_pf_truncate: '.sig'
    data_files: False
    downstream_file_name_templates:
      signal: _product_start_<platform>_pilot.sig
    conf_override: ['FORCE_PRODUCT=TRUE']
    downstream_script: another_script.sh
  2:
    description: Pilot forced product
    user_file_name_cond: (\'force_all_offers\' in file_name) and (\'pilot\' in file_name)
    user_file_name_pf_truncate: '_pilot_user.sig'
    data_files: True
    downstream_file_name_templates:
      signal: _product_start_<platform>_pilot_user.sig
      pilot_users: <platform>_pilot_users_<YYYYMMDD>.txt
    conf_override: ['FORCE_PRODUCT=TRUE']
    downstream_script: another_script.sh
  3:
    description: Pilot product
    user_file_name_cond: (\'product\' in file_name) and (\'pilot_user\' in file_name)
    user_file_name_pf_truncate: '_pilot.sig'
    data_files: True
    downstream_file_name_templates:
      signal: _product_start_<platform>_pilot_user.sig
      pilot_users: <platform>_pilots_<YYYYMMDD>.txt
    conf_override: ['FORCE_OFFERMATCH=FALSE']
    downstream_script: another_script.sh
  4:
    description: Forced assignment
    user_file_name_cond: (\'user_offer_mapping\' in file_name) or (\'budget\' in file_name) and (\'csv\' in file_name)
    user_file_name_pf_truncate: '_user_offer_mapping.sig'
    data_files: True
    downstream_file_name_templates:
      signal: _force_offer_assignment_start_<platform>_user_product_mapping.sig
      user_offer_mapping: <platform>_user_products_<YYYYMMDD>.txt
      budget: budget_<platform>.csv
    downstream_script: product/python/forced_assignment/src/main/forced_assignment.py
  5:
    description: Test
    user_file_name_cond: (\'dummy\' in file_name)
    user_file_name_pf_truncate: '.sig'
    data_files: False
    downstream_file_name_templates:
      signal: _dummy_<platform>.sig
    downstream_script: python/forced_offers/semaphore/bin/dummy_downstream_script.sh
    wait_strs: ['dummy_downstream_script.sh']

Take a look at the user_file_name_cond field in each of those entries. Are you thinking to yourself, "Boy, that looks like some Python syntax in there?"

def get_file_use_case(file_name):
    for use_case, use_case_conf in Configuration.get_conf('use.cases').items():
        user_file_name_cond = use_case_conf['user_file_name_cond'].replace('\\', '')
        if eval(user_file_name_cond):
            return use_case

You know that's an eval. It's at least not an exec, which would happily run assignment statements, but there's nothing in eval that prevents an expression from having side effects. While a malicious config file would fail to do use_case_conf['downstream_script'] = 'my_hostile_script' (assignment is a statement, not an expression), it could easily do use_case_conf.__setitem__('downstream_script', 'my_hostile_script').
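
To make that distinction concrete, here's a minimal sketch (the conf dict is hypothetical, standing in for one of the parsed use-case entries):

conf = {'downstream_script': 'another_script.sh'}

# A plain assignment is a statement, so eval() rejects it outright:
#   eval("conf['downstream_script'] = 'my_hostile_script'")  # SyntaxError

# But a method call is an expression, and it mutates just fine:
eval("conf.__setitem__('downstream_script', 'my_hostile_script')")
print(conf['downstream_script'])  # my_hostile_script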

Of course, hostile attacks aren't really the concern here, at least intentionally hostile. This code was going to run in production, and clients were going to be free to edit and modify those files however they liked. Malicious attacks aside, idiotic mistakes, or worse: clever hacks were entirely possible.

George did his best to illustrate why this was a terrible idea. Gracie, who was the architect, gleefully ignored him. The code got shipped. The client signed off on it. It runs in production.

There is a silver-lining here. George aggressively documented their solution, and his reasons for opposing it, and followed the company policy for involving management in conflicts. Management saw his point, and there was a reshuffling in the teams: George was made the architect, Gracie was "pivoted" to doing more development. While it was too late to fix the code, George used his position to make some policy changes: he worked with the team to democratically build a set of official code standards, made sure that they were applied to all code equally, and revised the team's approach to retros to improve their code in the future.

Of course, there's one person on the team who isn't interested in these changes. Their PRs end up being lengthy flamewars over whether or not they should be following policies. You can guess who that is…

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianRuss Allbery: rra-c-util 8.1

rra-c-util is my collection of utility code that I use in my various other software packages (mostly, but not only, C).

I now forget what I was reading, but someone on-line made a side reference to formatting code with clang-format, which is how I discovered that it exists. I have become a big fan of automated code reformatting, mostly via very positive experiences with Python's black and Rust's rustfmt. (I also use perltidy for my Perl code, but I'm not as fond of it; it's a little too aggressive and it changes how it formats code from version to version.) They never format things in quite the way that I want, but some amount of inelegant formatting is worth it for not having to think about or manually fix code formatting or argue with someone else about it.

So, this afternoon I spent some time playing with clang-format and got it working well enough. For those who are curious, here's the configuration file that I ended up with:

Language: Cpp
BasedOnStyle: LLVM
AlignConsecutiveMacros: true
AlignEscapedNewlines: Left
AlwaysBreakAfterReturnType: AllDefinitions
BreakBeforeBinaryOperators: NonAssignment
BreakBeforeBraces: WebKit
ColumnLimit: 79
IndentPPDirectives: AfterHash
IndentWidth: 4
IndentWrappedFunctionNames: false
MaxEmptyLinesToKeep: 2
SpaceAfterCStyleCast: true

This fairly closely matches my personal style, and the few differences are minor enough that I'm happy to change. The decisions I'm least fond of are how it formats array initializers that are longer than a single line, and clang-format's tendency to move __attribute__ annotations onto the same line as the end of the function arguments in function declarations.

I had some trouble with __attribute__ annotations on function definitions, but then found that moving the annotation to before the function return value made the right thing happen, so I'm now content there.

I did have to add some comments to disable formatting in various places where I lined related code up into columns, but that's normal for code formatting tools and I don't mind the minor overhead.

This release of rra-c-util reformats all of the code with clang-format (version 10 required since one of the options above is only in the latest version). It also includes the changes to my Perl utility code to drop support for Perl 5.6, since I dropped that in my last two hold-out Perl packages, and some other minor fixes.

You can get the latest version from the rra-c-util distribution page.

,

Planet DebianEnrico Zini: Gender in history links

Amelio Robles Ávila - Wikipedia
history people archive.org
Amelio Robles Ávila (3 November 1889 – 9 December 1984) was a colonel during the Mexican Revolution. Assigned female at birth with the name Amelia Robles Ávila, Robles fought in the Mexican Revolution, rose to the rank of colonel, and lived openly as a man from age 24 until his death at age 95.
Alan L. Hart (October 4, 1890 – July 1, 1962) was an American physician, radiologist, tuberculosis researcher, writer and novelist. He was in 1917–18 one of the first trans men to undergo hysterectomy in the United States, and lived the rest of his life as a man. He pioneered the use of x-ray photography in tuberculosis detection, and helped implement TB screening programs that saved thousands of lives.[1]
Many people have engaged in cross-dressing during wartime under various circumstances and for various motives. This has been especially true of women, whether while serving as a soldier in otherwise all-male armies, while protecting themselves or disguising their identity in dangerous circumstances, or for other purposes.
Breeching was the occasion when a small boy was first dressed in breeches or trousers. From the mid-16th century[1] until the late 19th or early 20th century, young boys in the Western world were unbreeched and wore gowns or dresses until an age that varied between two and eight.[2] Various forms of relatively subtle differences usually enabled others to tell little boys from little girls, in codes that modern art historians are able to understand.
Everything and its opposite has been written about whether or not gender differences should be emphasized in the early years of childhood. Regardless of what each of us may think, once again it seems that history contradicts firmly held convictions.
Lynn Ann Conway (born January 2, 1938)[2][3] is an American computer scientist, electrical engineer, inventor, and transgender activist.[4]

Planet DebianMichael Prokop: Revisiting 2019

Mika on the Drums, picture by Gregor

Mainly to recall what happened last year, to gather my thoughts, and to plan for the upcoming year(s), I'm once again revisiting my previous year (previous editions: 2018, 2017, 2016, 2015, 2014, 2013 + 2012).

In terms of IT events, I attended Grazer Linuxdays 2019 and gave a talk (Best Practices in der IT-Administration, Version 2019) and was interviewed by Radio Helsinki there. With the Grml project, we attended the Debian Bug Squashing Party in Salzburg in April. I also visited a meeting of the Foundation for Applied Privacy in Vienna. Being one of the original founders I still organize the monthly Security Treff Graz (STG) meetups. In 2020 I might attend DebConf 20 in Israel (though not entirely sure about it yet), will definitely attend Grazer Linuxdays (maybe with a talk about »debugging for sysadmins« or alike) and of course continue with the STG meetups.

I continued to play Badminton in the highest available training class (in German: “Kader”) at the University of Graz (Universitäts-Sportinstitut, USI). I took part in the Zoo run in Tiergarten Schönbrunn (thanks to an invitation by a customer).

I started playing the drums at the »HTU Big Band Graz« (giving a concert on the 21st of November). Playing in a big band was like a dream come true: I've been a big fan of modern jazz big bands since I was a kid, and I even played the drums in a big band more than 20 years ago, so I'm back™. I own a nice e-drum set, recently bought a Zildjian Gen16 cymbal set, and have owned a master keyboard (AKA MIDI keyboard) for many years, which is excellent for recording. But in terms of “living room practicality” I wanted something more piano-like, so we bought a Yamaha CLP-645 B digital piano, which my daughters use quite regularly, and now and then I manage to practice on it as well. As you might guess, I want to make music a more significant part of my life again.

I visited some concerts, including Jazz Redoute, Jazzwerkstatt Graz, Billy Cobham’s Crosswinds Project, Jazz Night Musikforum Viktring, Gnackbruch evening with AMMARITE, a concert of the Kärntner Sinfonieorchester, Steven Wilson’s To The Bone tour, Sting’s My Songs tour and the Corteo show of Cirque du Soleil. I took some local trips in Graz, including a Murkraftwerk Graz tour and a »Kanalführung«.

Business-wise it was the sixth year of business with SynPro Solutions, and we moved the legal form of our company from GesnbR to GmbH. No big news but steady and ongoing work with my other business duties Grml Solutions and Grml-Forensic.

I also continued with taking care of our kids every Monday and half of another day each week – which is still challenging now and then when running your own business, but so absolutely worth it. With a kid going to school, it was quite some change for my schedule and day planning as well. Now having a fixed schedule for most of the days, the Sonos soundbox wakes us up with Ö1 news and its Ö1 signature tune Monday to Friday. Thanks to Pavlovian conditioning, when waking up on Saturdays and Sundays I also hear the Ö1 signature tune in my head even though no radio is playing then. :)

I tried to minimize my Amazon usage as much as possible and will try to continue doing so in 2020 as well.

I had quite some troubles with my Vespa PX125, hopefully things are sorted out nowadays though. *knockingonwood*

After being ~20 years in the Usenet (mostly de.* + at.* + tu-graz.*) I pretty much gave it up.

Book reading became more of a habit again, and I managed to complete 42 books (see Bookdump 01/2019 and Bookdump 02/2019). I noticed that what felt like good days for me always included reading books, and want to keep my reading pace for 2020.

Planet DebianRuss Allbery: Term::ANSIColor 5.00

Term::ANSIColor is the core Perl module that provides functions to change text attributes using ECMA-048 escape sequences.

This release adds support for true color (24-bit color), with which I was not previously familiar but which turns out to be widely supported, including by xterm (which calls it direct-color). The new color attributes are in the form rNNNgNNNbNNN and on_rNNNgNNNbNNN and work like the existing color attributes. There is no constant support for somewhat obvious logistical reasons, so they're supported only in the function interface. Thanks to Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 for the initial patch and drawing true color to my attention.
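
For reference, the escape sequences underlying these new attributes are the de facto standard SGR 38;2 (foreground) and 48;2 (background) sequences. Here's a quick illustration of the raw codes in Python (not Term::ANSIColor's API, just what ends up on the wire):

def truecolor(text, r, g, b):
    # SGR 38;2;R;G;B selects a 24-bit foreground color; SGR 0 resets.
    return f"\x1b[38;2;{r};{g};{b}m{text}\x1b[0m"

print(truecolor("true color!", 255, 128, 0))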

Color aliases now can expand to more than one attribute. This means that you can do things like:

coloralias('warning', 'black', 'on_bright_red');

and have the expected thing happen. I took advantage of this to clean up the alias handling in general, so you can also now define aliases in terms of other aliases (although the second-level alias doesn't change when the first-level alias changes). The alias fixes are based on work by Yanick Champoux.

Also in this release are a few minor cleanups and documentation improvements.

Finally, this release drops support for Perl 5.6. The minimum supported version is now Perl 5.8. Testing on 5.6 is spotty and Travis-CI doesn't support it, so I don't think I can truly claim it's supported any more.

You can get the latest release from CPAN or from the Term::ANSIColor distribution page.

Planet DebianMarco d'Itri: Debian support for libxcrypt

glibc 2.29-7 and libxcrypt 1:4.4.10-10 today entered Debian testing: crypt(3) and the other related library functions in libcrypt from now on will be provided by libxcrypt instead of glibc.

After 18 months of packaging work, Debian finally supports modern password hashing methods like yescrypt: the details about them are documented in crypt(5).

For the time being there is still no support for libxcrypt in our release of PAM, but hopefully the Debian maintainer will update the package soon and this will allow using yescrypt by default.

If you want to test the new algorithms now, you can generate a password using my mkpasswd program and copy it to /etc/shadow:

# echo "marco:$(echo 12345 | mkpasswd --stdin)" | chpasswd --encrypted
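
If you'd rather verify such a hash programmatically, the same crypt(3) machinery is reachable from Python. This is only a sketch: it assumes a libcrypt with support for the hash method in question, and note that the crypt module was deprecated in Python 3.11 and removed in 3.13.

import crypt
import hmac

def verify(password, stored_hash):
    # crypt(3) re-hashes the password using the method and parameters
    # embedded in the stored hash, so equality means a match.
    return hmac.compare_digest(crypt.crypt(password, stored_hash), stored_hash)

# e.g. check "12345" against the $y$... yescrypt string generated above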

,

Planet DebianRuss Allbery: podlators 4.14

podlators provides the Pod::Man and Pod::Text conversion modules for Perl. This release is a minor bug-fix release, mostly correcting a test suite problem with old versions of Pod::Simple. The user-visible change is to document that parse_lines and parse_string_document expect raw bytes, not decoded characters.

The other change in this release is internal. I finally finished refactoring the test suite, so now all parts of the test suite use separate snippet files and modern coding style, so it should be more maintainable in the future.

You can get the latest release from CPAN or from the podlators distribution page.

Harald Welte36C3 Talks on SIM card technology / Mitel DECT

At 36C3 in December 2019 I had the pleasure of presenting two talks: one full talk about SIM card technology from A to Z, and another, together with eventphone team members, about security issues in the Mitel SIP-DECT system.

The SIM card talk was surprisingly successful, both in terms of a full audience on-site, as well as in terms of the number of viewers of the recordings on media.ccc.de. SIM cards are a rather niche topic in the wider IT industry, and my talk was not covering any vulnerabilities or the like. Also, there was nothing novel in the talk: SIM cards have been around for decades, and not much has changed (except maybe eSIM and TLS) in recent years.

In any case, I'm of course happy that it was well received. So far I've received lots of positive feedback.

As I've been working [more than] full time in cellular technology for almost 15 years now, it's sometimes hard to imagine what kind of topics people might be interested in. If you have some kind of suggestion on what kind of subject within my area of expertise you'd like me to talk about, please don't hesitate to reach out.

The Mitel DECT talk also went quite well. I covered about 10 minutes of technical details regarding the reverse engineering of the firmware and the communication protocols of the device. Thanks again to Dieter Spaar for helping with that. He is and remains the best reverse engineer I have met, and it's always a privilege to collaborate on any project. It was of course also nice to see what kind of useful (and/or fun) things the eventphone team have built on top of the knowledge that was gained by protocol-level reverse engineering.

If you want to know more low-level technical detail than the 36C3 talk, I recommend my earlier talk at OsmoDevCon 2019 about Aastra/Mitel DECT base station dissection.

If only I had more time, I would love to work on addressing the lack of Free / Open Source Software related to the DECT protocol family. There's the abandoned deDECTed.org, and the equally abandoned dect.osmocom.org project. The former only deals with the lowest levels of DECT (PHY/MAC). The latter is to a large extent implemented as part of an ancient version of the Linux kernel (I would say this should all run in userspace, like we run all of GSM/UMTS/LTE in userspace today).

If anyone wants to help out, I still think working on the DECT DLC and NWK dissectors for wireshark is the best way to start. It will create a tool that's important for anyone working with the DECT protocols, and it will be more or less a requirement for development and debugging should anyone ever go further in terms of implementing those protocols on either the PP or FP side. You can find my humble beginnings of the related dissectors in the laforge/dect branch of osmocom.org/wireshark.git.

Harald WelteRetronetworking / BBS-Revival setup at #36C3

After many years of being involved in various projects at the annual Chaos Communication Congress (starting from the audio/video recording team at 15C3), I've finally also departed the GSM team, i.e. the people who operate (Osmocom based) cellular networks at CCC events.

The CCC Camp in August 2019 was slightly different: Instead of helping to run an Osmocom based 2G/3G network, I decided to put up a nextepc-based LTE network and make that use the 2G/3G HLR (osmo-hlr) via a newly-written DIAMETER-to-GSUP proxy. After lots of hacking on that proxy and fixing various bugs in nextepc (see my laforge/cccamp2019 branch here) this was working rather fine.

For 36C3 in December 2019 I had something different in mind: It was supposed to be the first actual demo of the retronetworking / bbs-revival setup I've been working on during the past months. This setup in turn is sort-of a continuation of my talk at 34C3 two years ago: BBSs and early Internet access in the 1990ies.

Rather than just talking about it, I wanted to be able to show people the real thing: Actual client PCs running (mainly) DOS, dialling over analog modems and phone lines as well as ISDN-TAs and ISDN lines into BBSs, together with early Internet access using SLIP and PPP over the same dial-up lines.

The actual setup can be seen at the Dialup Network In A Box wiki page, together with the 36C3 specific wiki page.

What took most of the time were - interestingly - mainly two topics:

  1. A 1U rack-mount system with four E1 ports. I had lots of old Sangoma Quad-E1 cards in PCI form-factor available, but wanted to use a PC with a more modern/faster CPU than those old first-generation Atom boxes that still had actual PCI slots. Those new mainboards don't have PCI but PCIe. There are plenty of PCIe to PCI bridges and associated products on the market, which worked fine with virtually any PCI card I could find, but not with the Sangoma AFT PCI cards I wanted to use. Seconds to minutes after boot, the PCI-PCIe bridges would always forget their secondary bus number. I suspected excessive power consumption or glitches, but couldn't find anything wrong when looking at the power rails with a scope. Adding additional capacitors on every rail also didn't change it. The !RESET line is also clean. It remains a mystery. I then finally decided to buy a new (expensive) DAHDI 4-port E1 PCIe card to move ahead. What a waste of money if you have tons of other E1 cards around.

  2. Various trouble with FreeSWITCH. All I wanted/needed was some simple emulation of a PSTN/ISDN switch, operating in NT mode towards both the Livingston Portmaster 3 RAS and the Auerswald PBX. I would have used lcr, but it supports neither DAHDI nor Sangoma, but only mISDN - and there are no mISDN cards with four E1 ports :( So I decided to go for FreeSWITCH, knowing it has had a long history of ISDN/PRI/E1 support. However, it was a big disappointment. First, there were some segfaults due to a classic pointer deref before NULL-check. Next, libpri and FreeSWITCH have a different idea how channel (timeslot) numbers are structured, rendering any call attempt to fail. Finally, FreeSWITCH decided to blindly overwrite any bearer capabilities IE with 'speech', even if an ISDN dialup call (unrestricted digital information) was being handled. The FreeSWITCH documentation contains tons of references on channel input/output variables related to that - but it turns out their libpri integration doesn't set any of those, nor use any of them on the outbound side.

Anyway, after a lot more time than expected the setup was operational, and we could establish modem calls as well as ISDN dialup calls between the clients and the Portmaster3. The PM3 in turn then was configured to forward the dialup sessions via telnet to a variety of BBSs around the internet. Some exist still (or again) on the public internet. Some others were explicitly (re)created by 36C3 participants for this very BBS-Revival setup.

My personal favorite was finding ACiD Underworld 2.0, one of the few BBSs out there today that support RIPscrip, a protocol used to render vector graphics, text and even mouse-clickable UI via modem connection to a DOS/EGA client program called RIPterm. So we had one RIPterm installation on Novell DOS7 that was just used for dialling into ACiD Underworld 2.0.

Among other things we also tested interoperability between the 1980ies CCC DIY acoustic coupler "Datenklo" and the Portmaster, and confirmed that Windows 2000 could establish multilink-PPP not only over two B-channels (128 kbps) but also over 3 B-channels (192 kbps).

Running this setup for four days meant 36C3 was a quite different experience than many previous CCC congresses:

  • I was less stressed as I wasn't involved in operating a service that many people would want to use (GSM).

  • I got engaged with many more people with whom I would normally not have entered a conversation, as they were watching the exhibits/demos and we got to chat about the technology involved and the 'good old days'.

So all in all, despite the last minute FreeSWITCH-patching, it was a much more relaxing and rewarding experience for me.

Special thanks to

  • Sylvain "tnt" Munaut for spending a lot of time with me at the retronetworking assembly. The fact that I had an E1 interface around was a good way for him to continue development on his ICE40 based bi-directional E1 wiretap. He also helped with setup and teardown.

  • miaoski and evanslify for reviving two of their old BBSs from Taiwan so we could use them at this event

The retronetworking setup is intended to operate at many other future events, whether CCC related, Vintage Computing or otherwise. It's relatively small and portable.

I'm very much looking forward to the next incarnations. Until then, I will hopefully have more software configured and operational, including a variety of local BBSs (running in VMs/containers), together with the respective networking (FTN, ZConnect, ...) and point software like CrossPoint.

If you are interested in helping out with this project: I'm very much looking for help. It doesn't matter if you're old and have had BBS experience back in the day, or if you're a younger person who wants to learn about communications history. Any help is appreciated. Please reach out to the bbs-revival@lists.osmocom.org mailing list, or directly to me via e-mail.

Planet DebianShirish Agarwal: Indian Economy, NPR, NRC and Crowd Control Part – II

Protests and their history

A Happy New Year to all. While I would have loved to start on a better note, situations are the way they are. Before getting into the prickly process of NPR and NRC, let me focus on the protests themselves. Protests, whether in India or abroad, are not a new thing. While thinking about this, I found that one of the earliest recorded medieval protests was in 13th-century Paris, and just like most strikes, this one was about liberties, freedoms and price increases: the famous, or infamous, University of Paris strike of 1229. There have been so many strikes and protests worldwide that changed the world; in recent memory, the protests against American involvement in Vietnam, the Montgomery bus boycott sparked by Rosa Parks, and the protests I became aware of in South Africa, at UCT, around the Rhodes Must Fall movement. I would be forever grateful to Bernelle for sharing with us the protests that had happened the year before near the Sarah Bartman Hall. Closer to home, i.e. in India, we have had a rich history of protests, especially during the colonial period and the Indian freedom movement, as well as afterwards, i.e. after India became free: whether it was the Navnirman Andolan or the Great Bombay Textile Strike, which actually led to industries moving out of Mumbai. My point in sharing the strikes and protests above is that protests are not a new thing in India and have been part of India's socio-political culture throughout its history. I am sure there were also protests in earlier medieval periods, but I am not going that far back as it would not add value to this post; it may be a good idea to cover that in some other blog post.

The protests against NPR, NRC and CAA

So, let's start with what these acronyms are and why people are protesting against them. The first acronym is the National Population Register (NPR). Now, while the Government says that NPR is nothing but the census, which the GOI carries out every ten years, there is a difference. A few things make it different from earlier years: birth certificates, the 'date and place of birth of parents', and the 'last place of residence'. The problem starts and ends with these points for NPR, apart from the biometric information, which again has issues for rich and poor alike. Let me explain the problem therein using my own case history, which can probably be multiplied by millions of people of my age, younger and older.

Now one could ask, naively, what is wrong with birth certificates. In theory, and in today's day and age, perhaps nothing, but everything has a context, a time and a place. I was born in Pune in 1975, when the Registration of Births and Deaths Act had only recently been passed, in 1969. So the Municipal Corporation of that time was not active in this regard, nor was registration a statutory requirement in practice. To add to that, my mother had to go at least 10-15 times to the Municipal Corporation in order to secure my birth certificate, even though it carried no name. This brings up another issue: in those times, and almost to date, the mortality rate of newborns has been high. While we have statistics for the last 20-odd years which do show some change, one can only guess what the infant mortality rates would have been in the 60s and the 70s. Apart from that, most deliveries happened at home with a midwife rather than in a nursing home, and this is still the norm today in many cities and in the hinterland as well. Recently, there was news of 100 babies who died in Kota, Rajasthan. While the hospitals were earlier given clean chits, it seems most neonatal units which should house only one child were housing three children, hence the large number of deaths. Probably most of the parents were poor, and while the administration showed on paper that each child had a separate neonatal unit, the children were in fact put together. The corruption in Indian hospitals, whether public or private, deserves its own blog post; I had shared some of it in the blog post no country for women or doctors.

But that apart, in those days, because babies died, many children didn't get a name until they were of kindergarten or school-going age. My situation was a bit different and more difficult, as my parents had separated and there was a possibility that my father might go for a custody battle, which never happened. Now, apart from proving my own identity, even though I have all the papers I might still need to procure more: I would need papers or documentation proving the relationship between my mother and me. I can't produce my father, because he is no more (apart from his death certificate), and with my mother I would most probably have to get and submit a DNA test, which is expensive to say the least. I know of some labs which also charge depending on how many genetic markers you are looking for; the more markers, the better, and the costs go up accordingly. Here, two questions arise with the NPR itself.

a. How many children would have proper birth certificates, more so the ones who live either below the poverty line or just above it? Many of them don't have a roof over their heads or have only temporary shelters; where would they have a place to get and keep such documents? Also, as many of them can neither read nor write, they are usually suspicious of Government papers (and with good reason). Time and again, the Government of the day has promised one thing and done another, mostly to do with property rights and exclusion. Why go far: just a few days back, Sonam Wangchuk, the engineer, shared his concerns about Ladakh and the fragile ecosystem of Ladakh in the Himalayas, threatened by the unchecked 'development' promised by the Prime Minister, who wants to bring it on par with Delhi. Sadly, many people do not know that the Himalayas are the youngest mountains, geologically speaking, while the oldest are in South Africa (thanks to Debconf for that bit of info as well). While the people there celebrate the political freedom from Kashmir, they do have concerns: as part of Kashmir, they enjoyed land rights and rights of admission that applied to Indians and foreigners alike, and these they have now lost. A region which is predominantly Buddhist in nature is being introduced to sex tourism and massage parlours, which are not needed. If possible, people should see Mr. Wangchuk's TED talk or any of the work the gentleman has done, but this is getting off-topic here.

b. What about people of my age or older? Would all of us become refugees in our own land? What about those who move from place to place for work? What about Government servants themselves? Many of those who work in the Central Government have been, and are supposed to be, transferred every 3-4 years. My grandfather (on my mother's side) toured all of India for his job. My mother also got transferred a few times, although I stayed in Pune. This raises questions which would be difficult for many to answer truthfully, as nobody would have complete papers. There is also no guarantee that just having the right papers would make things right. It is very much possible that the 'babu' or bureaucrat sitting at the other end would demand money. I am sure a lot of black money was generated during the NRC in Assam; this would happen at the NPR stage itself.

c. The third question is what would happen to those who are unable to prove their citizenship at the very beginning. They would probably need to go to a court of law. What happens to their jobs, property and life in the meantime? As it is, most businesses run with a slack of 40-50%; this would become even more pronounced. Would we be stateless citizens in our own land?

NRC and CAA

Somehow, let's say you managed to get yourself included in the NPR; it doesn't mean that you will be included in the NRC, or National Register of Citizens. For this, one may have to rinse and repeat. They may ask more invasive questions than ever before, and the possible exploitation of people by the state would be like never before. Many in the majority in India think of Germany and how it became an industrial powerhouse, and they believe it became industrious because it persecuted the Jews. In fact, I believe it was the opposite. As I have shared on this blog numerous times before, if the Jews had remained part of Germany, Germany might have prospered many times over. If the Jews could make Israel so powerful, where would Germany have been today? Also, not many people know about the Marshall Plan, which perhaps laid the foundation of the European Union as we know it today, but that may be part of another blog post, another day. I will probably do a part III as there are still aspects of the whole issue which I haven't touched upon; I might do it tomorrow or a few days after. Till later.

Planet DebianIustin Pop: System load and ping latency strangeness

So, instead of a happy new year post or complaining about Debian’s mailing list threads (which make me very sad), here’s an interesting thing (I think).

Having made some changes to the local network recently, I was surprised at the variability in ping latency, not just on the local network but also to localhost! I thought, well, such is Linux, yada yada, it's a kernel build with CONFIG_NO_HZ_IDLE=y, etc. However, looking at the latency graph an hour ago showed something strange: latencies stabilised… and then later went bad again. Huh?

This is all measured via smokeping, which calls fping 10 times in a row and records both the average and the spread of the values. By "stable", I mean a somewhat even split between 10µsec and 15µsec (for the 10-ping average), with very consistent values; by "bad", I mean everything between 20µsec and 45µsec, which is a lot of variation.

For the local-lan host, it's either a consistent 200µsec, or 200-300µsec with high jitter (outliers up to 1ms). This is very confusing.

The timing of the “stable” periods aligned with times when I was running heavy disk I/O. Testing quickly confirmed this:

  • idle system: localhost : 0.03 0.04 0.04 0.03 0.03 0.03 0.03 0.03 0.03 0.03
  • pv /dev/md-raid5-of-hdds: localhost : 0.02 0.01 0.01 0.01 0.01 0.01 0.03 0.03 0.03 0.02
  • pv /dev/md-raid5-of-ssds: localhost : 0.03 0.01 0.01 0.01 0.01 0.02 0.02 0.02 0.02 0.02
  • with all CPUs at 100%, via stress -c $N: localhost : 0.02 0.00 0.01 0.00 0.01 0.01 0.01 0.01 0.01 0.01
  • with CPUs idle, but with governor performance so there’s no frequency transition: localhost : 0.01 0.15 0.03 0.03 0.03 0.03 0.03 0.03 0.03 0.03

So, this is not CPU frequency transitions, at least as seen by Linux. This is purely CPU load, and, even stranger, it’s about single core load. Running the following in parallel:

  • taskset -c 8 stress -c 1 and
  • taskset -c 8 fping -C 10 localhost

Results in the awesome values of:

localhost : 0.01 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01

Now, that is what I would expect :) Even more interesting, running stress on a different CPU (any different CPU) seems to improve things, but only by half (using ping which has better resolution).

To give a more graphical impression of the latencies involved (staircase here is due to fping resolution bug, mentioned below):

Smokeping: localhost
Smokeping: host on local net
Localhost CPU usage

Note that plain I/O wait (the section at the top) doesn’t affect latency; only actual CPU usage, as seen at Fri 02:00-03:00 and then later 11:00-21:00 and (much higher) Sat 10:15-20:00.

If you squint, you can even correlate lower CPU usage on Fri 16:00-21:00 to slightly increased latencies.

Localhost CPU frequency scaling

Does this all really matter? Not really, not in any practical sense. Would I much prefer clean, stable ping latencies? Very much so.

I’ve read the documentation on no HZ, which tells me I should be rebooting about 20 or 30 times with all kinds of parameter combinations and kernel builds, and that’s a bit too much from my free time. So maybe someone has some idea about this, would be very happy to learn what I can tune to make my graphs nicer :)

I’ve also tested ping from another host to this host, and high CPU usage results in lower latencies. So it seems to be not user-space related, but rather kernel latencies?!

I've also thought this might be purely an fping issue; however, I can clearly reproduce it simply by watching ping localhost while running (or not) stress -c $N; the result is ~10-12µsec vs. ~40µsec.
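
For anyone who wants to poke at this without fping, here's a rough probe in Python (loopback UDP rather than ICMP, so it exercises the same scheduler wakeup path but not the same code; statistics.quantiles needs Python 3.8+):

import socket
import statistics
import time

# A UDP socket sends a datagram to its own address and times the
# round trip: a crude stand-in for fping on localhost.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
addr = sock.getsockname()

samples = []
for _ in range(1000):
    start = time.perf_counter()
    sock.sendto(b"x", addr)
    sock.recvfrom(16)
    samples.append((time.perf_counter() - start) * 1e6)  # microseconds

print(f"median {statistics.median(samples):.1f} usec, "
      f"p95 {statistics.quantiles(samples, n=20)[-1]:.1f} usec")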

Thanks in advance for any hints.

Planet DebianThorsten Alteholz: My Debian Activities in December 2019

FTP master

This month I accepted 450 packages and rejected 61. The overall number of packages that got accepted was 481.

Debian LTS

This was my sixty-sixth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 16.5 hours. During that time I did LTS uploads of:

  • [DLA 2035-1] libpgf security update for one CVE
  • [DLA 2039-1] libvorbis security update for two CVEs
  • [DLA 2040-1] harfbuzz security update for one CVE
  • [DLA 2043-1] gdk-pixbuf security update for five CVEs
  • [DLA 2043-2] gdk-pixbuf regression update
  • [DLA 2047-1] cups security update for one CVE
  • [DLA 2050-1] php5 security update for four CVEs
  • [DLA 2052-1] libbsd security update for one CVE
  • [DLA 2055-1] igraph security update for one CVE

Last but not least I did some days of frontdesk duties and started to work on the sqlite3 package.

Debian ELTS

This month was the nineteenth ELTS month.

During my allocated time I uploaded:

  • ELA-202-1 for gdk-pixbuf
  • ELA-202-2 for gdk-pixbuf
  • ELA-204-1 for php5

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new upstream versions of …

I improved packaging of …

As nobody really used them, I removed the lam4 and mpich2 versions of meep. Now only the serial version, the openmpi version and the mpi-default version are available. Please complain in case you need one of the other versions again.

I also uploaded all meep packages, libctl and mpb to unstable.

On my Go challenge I uploaded the source-only versions of golang-github-boj-redistore, golang-github-dchest-uniuri, golang-github-jackc-fake, golang-github-joyent-gocommon, golang-github-mattetti-filebuffer, golang-github-nrdcg-goinwx, golang-github-pearkes-dnsimple, golang-github-soniah-dnsmadeeasy, golang-github-vultr-govultr, golang-github-zorkian-go-datadog-api.
New Go packages I uploaded were: golang-github-hashicorp-terraform-svchost, golang-github-apparentlymart-go-cidr, golang-github-bmatcuk-doublestar, golang-github-cactus-go-statsd-client, golang-github-corpix-uarand, golang-github-cyberdelia-heroku-go

Planet DebianJonathan Carter: Free Software Activities (2019-12)

Watching people windsurf at Blouberg beach

A lot has happened in Debian recently; I wrote separate blog entries about that but haven't had the focus to finish them up. Maybe I'll do that later this month. In the meantime, here are some uploads I've done during the month of December…

Debian packaging work

2019-12-02: Upload package calamares (3.2.17-1) to Debian unstable.

2019-12-03: Upload package calamares (3.2.17.1-1) to Debian unstable.

2019-12-04: Upload package python3-flask-caching to Debian unstable.

2019-12-04: File removal request for python3-flask-cache (BTS: #946139).

2019-12-04: Upload package gamemode (1.5~git20190812-107d469-3) to Debian unstable.

2019-12-11: Upload package gnome-shell-extension-draw-on-your-screen (5-1) to Debian unstable.

2019-12-11: Upload package xabacus (8.2.3-1) to Debian unstable.

2019-12-11: Upload package gnome-shell-extension-gamemode (4-1) to Debian unstable.

2019-12-11: Upload package gamemode (1.5~git20190812-107d469-4) to Debian unstable.

Debian package sponsoring/reviewing

2019-12-02: Sponsor package scrcpy (1.11+ds-1) for Debian unstable (mentors.debian.net request).

2019-12-03: Sponsor package python3-portend (2.6-1) for Debian unstable (Python team request).

2019-12-04: Merge MR#1 for py-postgresql (DPMT).

2019-12-04: Merge MR#1 for pyphen (DPMT).

2019-12-04: Merge MR#1 for recommonmark (DPMT).

2019-12-04: Merge MR#1 for python-simpy3 (DPMT).

2019-12-04: Merge MR#1 for gpxpy (DPMT).

2019-12-04: Sponsor package gpxpy (1.3.5-2) (Python team request).

2019-12-04: Merge MR#1 for trac-subcomponents (DPMT).

2019-12-04: Merge MR#1 for debomatic (PAPT).

2019-12-04: Merge MR#1 for archmage (PAPT).

2019-12-04: Merge MR#1 for ocrfeeder (PAPT).

2019-12-04: Sponsor package python3-tempura (1.14.1-2) for Debian unstable (Python team request).

2019-12-04: Sponsor package python-sabyenc (4.0.1-1) for Debian experimental (Python team request).

2019-12-04: Sponsor package python-yenc (0.4.0-7) for Debian unstable (Python team request).

2019-12-05: Sponsor package python-gntp (1.0.3-1) for Debian unstable (Python team request).

2019-12-05: Sponsor package python-cytoolz (0.10.1-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package mwclient (0.10.0-2) for Debian unstable (Python team request).

2019-12-22: Sponsor package hyperlink (19.0.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package drf-generators (0.4.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package python-mongoengine (0.18.2-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package libcloud (2.7.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package pep8-naming (0.9.1-1) for Debian unstable (Python team request).

2019-12-23: Sponsor package python-django-braces (1.13.0-2) for Debian unstable (Python team request).

Planet DebianElana Hashman: KubeCon NA 2019 Talk Resources

At KubeCon + CloudNativeCon North America 2019, I co-presented "Weighing a Cloud: Measuring Your Kubernetes Clusters" with Han Kang. Here are some links and resources related to my talk, for your reference.

Weighing a Cloud: Measuring Your Kubernetes Clusters

Related readings

I'm including these documents for reference to add some context around what's currently happening (as of 2019Q4) in the Kubernetes instrumentation SIG and wider ecosystem.

Note that GitHub links are pinned to their most recent commit to ensure they will not break; if you want the latest version, make sure to switch the branch to "master".

,

CryptogramFriday Squid Blogging: Giant Squid Video from the Gulf of Mexico

Fantastic video:

Scientists had used a specialized camera system developed by Widder called the Medusa, which uses red light undetectable to deep sea creatures and has allowed scientists to discover species and observe elusive ones.

The probe was outfitted with a fake jellyfish that mimicked the invertebrates' bioluminescent defense mechanism, which can signal to larger predators that a meal may be nearby, to lure the squid and other animals to the camera.

With days to go until the end of the two-week expedition, 100 miles (160 kilometers) southeast of New Orleans, a giant squid took the bait.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramChrome Extension Stealing Cryptocurrency Keys and Passwords

A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that is then sent to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.

CryptogramMysterious Drones Are Flying over Colorado

No one knows who they belong to. (Well, of course someone knows. And my guess is that it's likely that we will know soon.)

EDITED TO ADD (1/3): Another article.

Worse Than FailureError'd: Variable Trust

Brian writes, "Of course server %1 is trustworthy, I couldn't do my work without it!"

 

"Rockefeller lived on Millionaires' Row, but I think it was a little bit later than the 18th century," Bryan writes.

 

"I've heard about 64-bit builds being inherently less efficient than their 32-bit cousins, but I didn't know Notepad++ would need to download this much extra data!" wrote Lee M.

 

Shawn M. writes, "I wonder if this is how new hackers get their start at pwning web sites?"

 

"This trip left me feeling a bit empty, regardless, I still gave 5 stars," Martin C. wrote.

 

"Let me guess Outlook 2016, you aren't 'old enough' to understand umlauts?" Justus B. wrote.

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianKurt Kremitzki: November and December Update for FreeCAD & Debian Science

Hello again! This new year's update announces some interesting new beginnings for the FreeCAD project, though it's a little short since I got some much needed vacation time over the last two months.

OpenFOAM on One Core? Only 92 Hours! (for mipsel)

OpenFOAM & ParaView flow simulation.

In November a strange bug was found in the OpenFOAM package which led to only one core being used during builds, even though the logs reported an N core build. In the worst case scenario, on the mipsel architecture, this led to an increase in build times from 17 to 92 hours! I did some troubleshooting on this but found it a bit difficult since OpenFOAM uses a bespoke build system called wmake. I found myself wishing for the simplicity of CMake, and found there was an experimental repo implementing support for it but it didn't seem to work out of the box or with a bit of effort. I wonder if there's any consideration amongst OpenFOAM developers in moving away from wmake?

Anyway, OpenFOAM ended up getting removed from Debian Testing, but thankfully Adrian Bunk identified the problem, which is that the environment variable MAKEFLAGS was getting set to 'w' for some reason, and thus falling through the wmake code block that set up a proper parallel build for OpenFOAM. So, unsatisfyingly, as a workaround I uploaded the latest OpenFOAM version, 1906.191111, with unexport MAKEFLAGS. It would be nice to find an explanation, but I didn't spend much more time digging.

So, to end on the good news, the newest bugfix version of the OpenFOAM 1906 release, from November 11th 2019, is available for use going into 2020!

Trip to FOSDEM 2020 and MiniDebCamp at the Hackerspace Brussels

FOSDEM logo.

It was a bit last minute, but I finally decided to attend FOSDEM 2020. I had balked a bit at the cost since flights from the US are around $900, but decided it would be an important opportunity for FreeCAD developers and community to get together and possibly do some important work. Thankfully, Yorik and other senior FreeCAD developers thought it would be a good use of the project's Bountysource money to cover the cost of one ticket, split in half between myself and sliptonic, a developer from Missouri. He focuses on the Path workbench and FreeCAD in CAM applications, an area I'm interested in moving into as I now have such machining equipment available to me through my local ATX Hackerspace. The three of us will be giving a talk, "Open-source design ecosystems around FreeCAD", at 11:20 on Saturday, so please come by and say hi if you're able to!

I'll be staying for a few days before and after FOSDEM, including attending the MiniDebCamp at Hackerspace Bruxelles on Thursday & Friday, interested in anything Debian/FreeCAD related, so I look forward to getting a lot of work done indeed!

Looking at BRL-CAD for Debian

Developer working on BRL-CAD, circa 1980

Lead developer Mike Muuss works on the XM-1 tank in BRL‑CAD on a PDP‑11/70 computer system, circa 1980.

For the past several summers, FreeCAD has participated in the Google Summer of Code program under an umbrella organization led by Sean Morrison of BRL-CAD. BRL-CAD is a very interesting bit of software with a long history, in fact the oldest known public version-controlled codebase in the world still under development, dating back to 1983-12-16 00:10:31 UTC. It is inspired by the development ideas of the era, a sort of UNIX philosophy for CAD, made up of many small tools doing one thing well and meant to be used in a normal UNIXy way, being piped into one another and so forth, with a unifying GUI using those tools. Since it's made up of BSD/LGPL licensed code, it ought to be available as part of the Debian Science toolkit, where it may be useful for FreeCAD as an included alternative CAD kernel to the currently exclusive OpenCASCADE. For example, fillets in OpenCASCADE are somewhat buggy and unmaintainably implemented such that an upstream rewrite is the only hope for long-term improvement. BRL-CAD could potentially improve FreeCAD in areas like this.

It turns out a Debian Request for Packaging bug for BRL-CAD has been open since 2005. I plan to close it! There's already some existing Debian packaging work, too, though it's quite a few years old and thus some adaptation is still required.

PySide 2 and KDE Maintenance in Debian

Recently, FreeCAD has been unbuildable in Debian Sid because of issues related to PySide 2 and the Python 3.8 migration. This is complicated by the fact that the upstream fix has been released but in version 5.14.0, which builds fine with Qt 5.14, although Sid currently has 5.12. Furthermore, the PySide 2 package itself isn't building at the moment either! Since FreeCAD depends on PySide 2 and Qt, and I use the Qt-based KDE as my desktop, it seems like taking on maintenance of PySide 2 is something I should do to get started in this realm. However, the Qt/KDE Team's packaging practices and tools are rather different than the ones I'm used to for Science Team packages. This makes sense: Science Team packages are very often a single Git repo, but Qt5 for example is really 44 Git submodules smushed together. As such, things are a bit different! Once I get things taken care of for the package, I will try to write up some notes to help others interested in getting started, especially since KDE packaging could use some help.

FreeCAD Sysadmin Woes Begone: DigitalOcean Sponsorship

DigitalOcean's "Powered By" blue badge lgoo.

I'm very happy to announce that the FreeCAD project is now among the many open source projects sponsored by DigitalOcean.

One of the first things I did when I got involved with FreeCAD was to take on responsibility for maintaining the project's infrastructure, since that would free up time for people to work on FreeCAD itself. FreeCAD is 17 years old now, and some of our infrastructure stack is about as dated. However, it isn't easy to just move things: I had to get things up to speed first and try to minimize disruption, so it's been a slow process. I'll go into more detail in a technical blog post after I've finished our migration, hopefully by the end of this month, including details on our new setup, with the goal of letting people set up a development environment of our project tools so you can do some hacking on things yourself and help out if possible.

Thanks for your support

I appreciate any feedback you might have.

You can get in touch with me via Twitter @thekurtwk.

If you'd like to donate to help support my work, there are several methods available on my site.

,

Planet DebianBen Hutchings: Debian LTS work, December 2019

I was assigned 16.5 hours of work by Freexian's Debian LTS initiative and carried over 3.75 hours from November. I worked all 20.25 hours this month.

I prepared and, after review, released Linux 3.16.79. I rebased the Debian package onto 3.16.79 and sent out a request for testing.

I also released Linux 3.16.80, but haven't yet rebased the Debian package onto this.

Planet DebianJonathan Dowland: Linux Desktop

Happy New Year!

It's been over two years since I wrote back on the Linux desktop, and I've had this draft blog post describing my desktop setup sitting around for most of that time. I was reminded of it by two things recently: an internal work discussion about "the year of the Linux desktop" (or similar), and the discovery that the default desktop choice for the current Debian release ("Buster") uses Wayland rather than the venerable X. (I don't think that's a good idea.)

GNOME 3 Desktop

I already wrote a little bit about my ethos and some particulars, so I'll not repeat myself here. The version of GNOME I am using is 3.30.2. I continue to rely upon Hide Top Bar, but had to disable TopIcons Plus, which proved unstable. I use the Arc Darker theme to shrink window title bars down to something reasonable (excepting GTK3 apps that insist on stuffing other buttons into that region).

Although I mostly remove or hide things, I use one extension to add stuff: Suspend Button, to add a distinct "Suspend" button. The GNOME default was, and seems to remain, to offer only a "Power off" button, which seems ludicrous to me.

I spend a lot of time inside of Terminals. I use GNOME terminal, but I disable or hide tabs, the menubar and the scrollbar. Here's one of my top comfort tips for working in terminals: I set the default terminal size to 120x32, up from 80x24. It took me a long time to realise that I habitually resized every terminal I started.

I've saved the best for last: The Put Windows GNOME shell extension allows you to set up keyboard shortcuts for moving and resizing the focussed window to different regions of the desktop. I disable the built-in shortcuts for "view splits" and rely upon "Put Windows" instead, which is much more useful: with the default implementation, once "snapped", you can't resize windows (widen or narrow them) unless you first "unsnap" them. But sometimes you don't want a 50/50 split. "Put Windows" doesn't have that restriction; but it also lets you cycle between different (user-configurable) splits: I use something like 50/50, 30/70, 70/30. It also lets you move things to corners as well as sides, and also top/bottom splits, which is very useful for comparing spreadsheets (as I pointed out eight years ago).

"Put Windows" really works marvels and entirely replaces SizeUp that I loved on Mac.

Planet DebianTim Retout: Blog Posts

CryptogramHacking School Surveillance Systems

Lance Vick suggesting that students hack their schools' surveillance systems.

"This is an ethical minefield that I feel students would be well within their rights to challenge, and if needed, undermine," he said.

Of course, there are a lot more laws in place against this sort of thing than there were in -- say -- the 1980s, but it's still worth thinking about.

EDITED TO ADD (1/2): Another essay on the topic.

Planet DebianSylvain Beucler: Debian LTS and ELTS - December 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In December, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 16.5h for LTS (out of 30 max) and 16.5h for ELTS (max).

This is less than usual; as far as I can see, this is due to more team members requesting more hours (while I'm above average), and fewer unused hours being given back (or given back too late).

ELTS - Wheezy

  • libonig: finish work started in November:
  • CVE-2019-19203/libonig: can't reproduce; backport non-trivial and likely to introduce bugs,
  • CVE-2019-19012,CVE-2019-19204,CVE-2019-19246/libonig: security upload
  • libpcap: attempt to recap vulnerabilities mismatch (possibly affecting ELA-173-1/DLA-1967-1); no follow-up from upstream
  • CVE-2019-19317,CVE-2019-19603,CVE-2019-19645/sqlite3: triage: not-affected (development version only)
  • CVE-2019-1551/openssl: triage: not-affected; discuss LTS triage rationale
  • CVE-2019-14861,CVE-2019-14870/samba: triage: not-affected
  • CVE-2019-19725/sysstat: triage: not-affected (vulnerable code introduced in v11.7.1)
  • CVE-2019-15845,CVE-2019-16201,CVE-2019-16254,CVE-2019-16255/ruby1.9.1: security upload

LTS - Jessie

  • CVE-2019-19012,CVE-2019-19204,CVE-2019-19246/libonig: shared work with ELTS, security upload
  • libpcap: shared work with ELTS
  • libav: finish work started in November:
  • CVE-2018-18829/libav: triage: postponed (libav-specific issue, no patch)
  • CVE-2018-11224/libav: triage: postponed (libav-specific issue, no patch)
  • CVE-2017-18247/libav: triage: ignored (not reproducible, no targeted patch)
  • CVE-2017-18246/libav: triage: ignored (not reproducible)
  • CVE-2017-18245/libav: reproduce, track down fix in ffmpeg
  • CVE-2017-18244/libav: triage: ignored (not reproducible)
  • CVE-2017-18243/libav: triage: ignored (not reproducible)
  • CVE-2017-18242/libav: triage: ignored (not reproducible)
  • CVE-2017-17127/libav: reproduce, track down fix in ffmpeg
  • CVE-2016-9824/libav: triage: ignored: UBSan (undefined behaviour sanitizer) warning only, no patch
  • CVE-2016-9823/libav: triage: ignored: UBSan (undefined behaviour sanitizer) warning only, no patch
  • CVE-2016-5115/libav: triage: postponed due to a different (indirect, via mplayer) vulnerability and lack of time
  • CVE-2017-17127,CVE-2017-18245,CVE-2018-19128,CVE-2018-19130,CVE-2019-14443,CVE-2019-17542/libav: security upload

Documentation/Scripts

Worse Than FailureCodeSOD: Untested Builds

Kaylee E made an "oops" and checked in a unit test with a bug in it which caused the test to fail. She didn't notice right away, and thus the commit hit their CI pipeline and was automatically pulled by the build server. She assumed that when she checked the logs she'd see the error, but she didn't. The build completed, and Tests (0/0) ran successfully.

Now, Kaylee was new to the codebase, and since she'd been making small changes, she'd simply written and run tests around only the functionality she was changing. She hadn't yet done a full test run locally, so that was her next step. From there, it was easy to see why the build server didn't automatically run tests.

[Test]
public void TestDateNew()
{
    String date = DateTime.Now.ToShortDateString();
    Assert.IsTrue(date == "26/08/2016");
}

Visual Studio's Git integration also displayed a little tip above the method, announcing: 0 References | Murray Linwood, 646 days ago | 1 author, 1 change.

Now, this test could work if you mocked out the DateTime object, which is what you should do if you're testing dates. Of course, that doesn't make the test any more useful, as it's just testing basic, built-in .NET functionality. Besides, I suspect that "Murray" is already familiar with mocking.
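For what it's worth, here's a minimal sketch (mine, not from Kaylee's codebase) of the usual .NET approach: hide the system clock behind an interface and inject a fake in tests, so "now" is whatever the test says it is. The IClock, FakeClock, and ReportHeader names are hypothetical.

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class FakeClock : IClock
{
    private readonly DateTime _now;
    public FakeClock(DateTime now) { _now = now; }
    public DateTime Now => _now;
}

// Production code asks the injected clock for the time...
public class ReportHeader
{
    private readonly IClock _clock;
    public ReportHeader(IClock clock) { _clock = clock; }
    public string DateStamp() => _clock.Now.ToShortDateString();
}

// ...so a test can pin "now" and assert deterministically:
//   var header = new ReportHeader(new FakeClock(new DateTime(2016, 8, 26)));
//   Assert.AreEqual("26/08/2016", header.DateStamp());
// Note that ToShortDateString is culture-dependent; this expected value
// assumes a dd/MM/yyyy culture, as the original test apparently did.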

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianRuss Allbery: 2019 Book Reading in Review

In 2019, I finished and reviewed 40 books, the same as in 2018. Technically, I read two more books than 2018, since I've finished two books (one just before midnight) that I've not yet reviewed, but I'll stick with counting only those books for which I've published a review. I did a little bit better this year in spreading my reading out over the year instead of only reading on vacation. Finding time to write reviews was another matter; apologies for the flood of catch-up reviews in the last week of December.

I met both of my reading goals for last year — maintaining my current reading pace and catching up on award winners and nominees — but only barely in both cases. 2020 will bring schedule and life changes for me, and one of my goals is to carve out more room for daily reading.

I gave 10 out of 10 ratings to two books this year, one fiction and one non-fiction. The novel was Arkady Martine's exceptional debut A Memory Called Empire, which is one of the best science fiction novels I've read. It's populated with a fully imagined society, wonderful characters, political maneuvering, and a thoughtful portrayal of the cultural impact of empire and colonialism. I can hardly wait for the sequel.

The non-fiction book was On the Clock by Emily Guendelsberger, a brilliant piece of investigative journalism that looks at the working conditions of the modern American working class through the lens of an Amazon warehouse job, a call center, and a McDonald's. If you want to understand how work and life feels to the people taking the brunt of the day-to-day work in the United States, I cannot recommend it highly enough. These jobs are not what they were ten or twenty years ago, and the differences may not be what you expect.

The novels that received 9 out of 10 ratings from me in 2019 were The Calculating Stars by Mary Robinette Kowal and The Shell Seekers by Rosamunde Pilcher. Kowal's novel is the best fictional portrayal of anxiety that I've ever read (with bonus alternate history space programs!) and fully deserves its Hugo, Nebula, and Locus awards. Pilcher's novel is outside of my normal genres, a generational saga with family drama and some romance. It was a very satisfying vacation book, a long, sprawling drama that one can settle into and be assured that the characters will find a way to do the right thing.

On the non-fiction side, I gave a 9 out of 10 rating to Bad Blood, John Carreyrou's almost-unbelievable story of the rise and fall of Theranos, the blood testing company that reached a $10 billion valuation without ever having a working product. And, to close out the year, I gave a 9 out of 10 rating to Benjamin Dreyer's Dreyer's English, a collection of advice on the English language from a copy editor. If you love reading about punctuation trivia or grammatical geekery, seek this one out.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

Worse Than FailureBest of…: Best of 2019: When Unique Isn't Unique

We close out our recap of 2019 and head into the new year with one last flashback: when vendors forget what the definition of "unique" is. Original -- Remy


Gather 'round, young'uns, for a tale from the Dark Ages of mobile programming: the days before the iPhone launched. Despite what Apple might have you believe, the iPhone wasn't the first portable computing device. Today's submitter, Jack, was working for a company that streamed music to these non-iPhone devices, such as the Palm Treo or the Samsung Blackjack. As launch day approached for the new client for Windows Mobile 6, our submitter realized that he'd yet to try the client on a non-phone device (called a PDA, for those of you too young to recall). So he tracked down an HP iPaq on eBay just so he could verify that it worked on a device without the phone API.

The device arrived a few days out from launch, after QA had already approved the build on other devices. It should've been a quick test: sideload the app, stream a few tracks, log in, log out. But when Jack opened the app for the first time on the new device, it was already logged into someone's account! He closed it and relaunched, only to find himself in a different, also inappropriate account. What on earth?!

The only thing Jack could find in common between the users he was logged in as was that they were all running the same model of PDA. That was the crucial key to resolving the issue. To distinguish which device was making the calls to the streaming service, Jack used a call in Windows Mobile that would return a unique ID for each mobile device. On most devices, it would base this identifier on the IMEI, ensuring uniqueness—but not on the HP iPaq, where every unit returned the same ID. As a result, any iPaq would automatically log into the account of the most recently used iPaq, provided that user had logged out and back in, since doing so generated a recent-user record keyed to the device ID.

Jack had read the documentation many times, and it always stated that the ID was guaranteed to be unique. Either HP had a different definition of "unique" than anyone else, or they had a major security bug!

Jack emailed HP, but they had no plans to fix the issue, so he had to whip up an alternate method of generating a UUID in the case that the user was on this device. The launch had to be pushed back to accommodate it, but the hole was plugged, and life went on as usual.
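The article doesn't show Jack's actual fix, but its general shape might look something like the following sketch: trust the platform ID except on known-bad models, where you mint a random GUID once and persist it so the device keeps a stable identity. The model check and storage path here are illustrative guesses, not Jack's code.

using System;
using System.IO;

public static class DeviceIdentity
{
    // Illustrative location; a real app would use proper per-app storage.
    private const string FallbackIdPath = "fallback-device-id.txt";

    public static string GetStableDeviceId(string modelName, string platformDeviceId)
    {
        // Trust the platform's "unique" ID except on models where it collides.
        if (!modelName.StartsWith("HP iPAQ", StringComparison.OrdinalIgnoreCase))
            return platformDeviceId;

        // First run on an affected device: mint and persist a random GUID.
        if (!File.Exists(FallbackIdPath))
            File.WriteAllText(FallbackIdPath, Guid.NewGuid().ToString());

        return File.ReadAllText(FallbackIdPath);
    }
}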

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianBalasankar 'Balu' C: FOSS contributions in 2019

Heyo,

I have been interested in the concept of Freedom - in both the technical and social ecosystems - for almost a decade now. Even though I am not a hardcore contributor or anything, I have been involved in it for a few years now - as an enthusiast, a contributor, a mentor, and above all an evangelist. Since 2019 is coming to an end, I thought I would note down everything I did last year as a FOSS person.

GitLab

My job at GitLab is that of a Distribution Engineer. In simple terms, I have to deal with anything that a user/customer may use to install or deploy GitLab. My team maintains the omnibus-gitlab packages for various OSs, docker image, AWS AMIs and Marketplace listings, Cloud Native docker images, Helm charts for Kubernetes, etc.

My job description essentially covers the above-mentioned tasks only, and as part of my day job I don't usually have to write any backend Rails/Go code. However, I also find GitLab a good open source project in its own right, and have been contributing a few features to it over the year. A few of the main reasons I started doing this are

  1. An opportunity to learn more Rails. GitLab is a pretty good project to do that, from an engineering perspective.
  2. Most of the features I implemented are ones I wanted from GitLab, the product. The rest are technically simpler issues with less complexity (related to the point above, regarding getting better at Rails).
  3. I know the never-ending dilemma our Product team goes through to always maintain the balance of CE vs. EE features in every release, and to prioritize appropriate issues from a mountain of backlog for each milestone. In my mind, it is easier for both them and me if I just implement something rather than ask them to schedule it to be done by a backend team, so that I can enjoy the feature. To note, most of the issues I tackled already had an Accepting Merge Requests label on them, which meant Product was in agreement that the feature was worth having, but there were higher-priority issues to be tackled first.

So, here are the features/enhancements I implemented in GitLab, as an interested contributor in the selfish interest of improving my Rails understanding and to get features that I wanted without much waiting:

  1. Add number of repositories to usage ping data
  2. Provide an API endpoint to get GPG signature of a commit
  3. Add ability to set project path and name when forking a project via API
  4. Add predefined CI variable to provide GitLab FQDN
  5. Ensure changelog filenames have less than 99 characters
  6. Support notifications to be fired for protected branches also
  7. Set X-GitLab-NotificationReason header in emails that are sent due to explicit subscription to an issue/MR
  8. Truncate recommended branch name to a sane length
  9. Support passing CI variables as push options
  10. Add option to configure branches for which emails should be sent on push

Swathanthra Malayalam Computing

I have been a volunteer at Swathanthra Malayalam Computing for almost 8 years now. Most of my contributions are towards various localization efforts that SMC coordinates. Last year, my major contributions were improving our fonts build process to help various packaging efforts (well, selfish reason - I wanted my life as the maintainer of Debian packages to be easier), implementing CI based workflows for various projects and helping in evangelism.

  1. Ensuring all our fonts build with Python3
  2. Ensuring all our fonts have proper appstream metadata files
  3. Add an FAQ page to Malayalam Speech Corpus
  4. Add release workflow using CI for Magisk font module

Debian

I have been a Debian contributor for almost 8 years, became a Debian Maintainer 3 years after my first stint with Debian, and have been a Debian Developer for 2 years. My activities as a Debian contributor this year are:

  1. Continuing maintenance of fonts-smc-* and hyphen-indic packages.
  2. Packaging of the gopass password manager. This has been going very slowly.
  3. Reviewing and sponsoring various Ruby and Go packages.
  4. Help GitLab packaging efforts, both as a Debian Developer and a GitLab employee.

Other FOSS projects

In addition to the main projects I am a part of, I contributed to few FOSS last year, either due to personal interest, or as part of my job. They are:

  1. Calamares - I initiated and spearheaded the localization of the Calamares installer into Malayalam. It reached 100% translated status within a month.
  2. Chef
    1. Fix openSUSE Leap and SLES detection in Chef Ohai 14
    2. Make runit service’s control commands configurable in Chef Runit cookbook
  3. Mozilla - Being one of the Managers for Malayalam Localization team of Mozilla, I helped coordinate localizations of various projects, interact with Mozilla staff for the community in clarifying their concerns, getting new projects added for localization etc.

Talks

I also gave few talks regarding various FOSS topics that I am interested/knowledgeable in during 2019. List and details can be found at the talks page.

Overall, I think 2019 was a good year for the FOSS person in me. Next year, I plan to be more active in Debian because from the above list I think that is where I didn’t contribute as much as I wanted.

,

Worse Than FailureBest of…: Best of 2019: The Internship of Things

Did you get some nice shiny new IoT devices for the holidays this year? Hope they weren't the Initech brand. Original --Remy

Mindy was pretty excited to start her internship with Initech's Internet-of-Things division. She'd been hearing at every job fair how IoT was still going to be blowing up in a few years, and how important it would be for her career to have some background in it.

It was a pretty standard internship. Mindy went to meetings, shadowed developers, did some light-but-heavily-supervised changes to the website for controlling your thermostat/camera/refrigerator all in one device.

As part of testing, Mindy created a customer account on the QA environment for the site. She chucked a junk password at it, only to get a message: "Your password must be at least 8 characters long, contain at least three digits, not in sequence, four symbols, at least one space, and end with a letter, and not be more than 10 characters."

"Um, that's quite the password rule," Mindy said to her mentor, Bob.

"Well, you know how it is, most people use one password for every site, and we don't want them to do that here. That way, when our database leaks again, it minimizes the harm."

"Right, but it's not like you're storing the passwords anyway, right?" Mindy said. She knew that even leaked hashes could be dangerous, but good salting/hashing would go a long way.

"Of course we are," Bob said. "We're selling web connected thermostats to what can be charitably called 'twelve-o-clock flashers'. You know what those are, right? Every clock in their house is flashing twelve?" Bob sneered. "They can't figure out the site, so we often have to log into their account to fix the things they break."

A few days later, Initech was ready to push a firmware update to all of the Model Q baby monitor cameras. Mindy was invited to watch the process so she could understand their workflow. It started off pretty reasonable: their CI/CD system had a verified build, signed off, ready to deploy.

"So, we've got a deployment farm running in the cloud," Bob explained. "There are thousands of these devices, right? So we start by putting the binary up in an S3 bucket." Bob typed a few commands to upload the binary. "What's really important for our process is that it follows this naming convention. Because the next thing we're going to do is spin up a half dozen EC2 instances- virtual servers in the cloud."

A few more commands later, and then Bob had six sessions open to cloud servers in tmux. "Now, these servers are 'clean instances', so the very first thing I have to do is upload our SSH keys." Bob ran an ssh-copy-id command to copy the SSH key from his computer up to the six cloud VMs.

"Wait, you're using your personal SSH keys?"

"No, that'd be crazy!" Bob said. "There's one global key for every one of our Model Q cameras. We've all got a copy of it on our laptops."

"All… the developers?"

"Everybody on the team," Bob said. "Developers to management."

"On their laptops?"

"Well, we were worried about storing something so sensitive on the network."

Bob continued the process, which involved launching a script that would query a webservice to see which Model Q cameras were online, then sshing into them, having them curl down the latest firmware, and then self-update. "For the first few days, we leave all six VMs running, but once most of them have gotten the update, we'll just leave one cloud service running," Bob explained. "Helps us manage costs."

It's safe to say Mindy learned a lot during her internship. Mostly, she learned, "don't buy anything from Initech."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Worse Than FailureBest of…: Best Of 2019: The Hardware Virus

We continue our holiday break by looking back at the true gift that kept on giving, the whole year round. Original. --Remy


Jen was a few weeks into her new helpdesk job. Unlike past jobs, she started getting her own support tickets quickly—but a more veteran employee, Stanley, had been tasked with showing her the ropes. He also got notification of Jen's tickets, and they worked on them together. A new ticket had just come in, asking for someone to replace the DVI cable that'd gone missing from Conference Room 3. Such cables were the means by which coworkers connected their laptops to projectors for presentations.

Easy enough. Jen left her cube to head for the hardware "closet"—really, more of a room crammed full of cables, peripherals, and computer parts. On a dusty shelf in a remote corner, she spotted what she was looking for. The coiled cable was a bit grimy with age, but looked serviceable. She picked it up and headed to Stanley's cube, leaning against the threshold when she got there.

"That ticket that just came in? I found the cable they want. I'll go walk it down." Jen held it up and waggled it.

Stanley was seated, facing away from her at first. He swiveled to face her, eyed the cable, then went pale. "Where did you find that?"

"In the closet. What, is it—?"

"I thought they'd been purged." Stanley beckoned her forward. "Get in here!"

Jen inched deeper into the cube. As soon as he could reach it, Stanley snatched the cable out of her hand, threw it into the trash can sitting on the floor beside him, and dumped out his full mug of coffee on it for good measure.

"What the hell are you doing?" Jen blurted.

Stanley looked up at her desperately. "Have you used it already?"

"Uh, no?"

"Thank the gods!" He collapsed back in his swivel chair with relief, then feebly kicked at the trash can. The contents sloshed around inside, but the bin remained upright.

"What's this about?" Jen demanded. "What's wrong with the cable?"

Under the harsh office lighting, Stanley seemed to have aged thirty years. He motioned for Jen to take the empty chair across from his. Once she'd sat down, he continued nervously and quietly. "I don't know if you'll believe me. The powers-that-be would be angry if word were to spread. But, you've seen it. You very nearly fell victim to it. I must relate the tale, no matter how vile."

Jen frowned. "Of what?"

Stanley hesitated. "I need more coffee."

He picked up his mug and walked out, literally leaving Jen at the edge of her seat. She managed to sit back, but her mind was restless, wondering just what had her mentor so upset.

Eventually, Stanley returned with a fresh mug of coffee. Once he'd returned to his chair, he placed the mug on his desk and seemed to forget all about it. With clear reluctance, he focused on Jen. "I don't know where to start. The beginning, I suppose. It fell upon us from out of nowhere. Some say it's the spawn of a Sales meeting; others blame a code review gone horribly wrong. In the end, it matters little. It came alive and spread like fire, leaving destruction and chaos in its wake."

Jen's heart thumped with apprehension. "What? What came alive?"

Stanley's voice dropped to a whisper. "The hardware virus."

"Hardware virus?" Jen repeated, eyes wide.

Stanley glared. "You're going to tell me there's no such thing, but I tell you, I've seen it! The DVI cables ..."

He trailed off helplessly, reclining in his chair. When he straightened and resumed, his demeanor was calmer, but weary.

"At some godforsaken point in space and time, a single pin on one of our DVI cables was irrevocably bent. This was the source of the contagion," he explained. "Whenever the cable was plugged into a laptop, it cracked the plastic composing the laptop's DVI port, contorting it in a way that resisted all mortal attempt at repair. Any time another DVI cable was plugged into that laptop, its pin was bent in just the same way as with the original cable.

"That was how it spread. Cable infected laptop, laptop infected cable, all with vicious speed. There was no hope for the infected. We ... we were forced to round up and replace every single victim. I was knee-deep in the carnage, Jen. I see it in my nightmares. The waste, the despair, the endless reimaging!"

Stanley buried his head in his hands. It was a while before he raised his haunted gaze again. "I don't know how long it took, but it ran its course; the support tickets stopped coming in. Our superiors consider the matter resolved ... but I've never been able to let my guard down." He glanced warily at the trash can, then made eye contact with Jen. "Take no chances with any DVI cables you find within this building. Buy your own, and keep them with you at all times. If you see any more of those—" he pointed an accusing finger at the bin "—don't go near them, don't try taking a paperclip to them. There's everything to lose, and nothing to gain. Do you understand?"

Unable to manage words, Jen nodded instead.

"Good." The haunted expression vanished in favor of grim determination. Stanley stood, then rummaged through a desk drawer loaded with office supplies. He handed Jen a pair of scissors, and armed himself with a brassy letter opener.

"Our job now is to track down the missing cable that resulted in your support ticket," he continued. "If we're lucky, someone's absent-mindedly walked off with it. If we're not, we may find that this is step one in the virus' plan to re-invade. Off we go!"

Jen's mind reeled, but she sprang to her feet and followed Stanley out of the cubicle, telling herself to be ready for anything.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Krebs on SecurityHappy 10th Birthday, KrebsOnSecurity.com

Today marks the 10th anniversary of KrebsOnSecurity.com! Over the past decade, the site has featured more than 1,800 stories focusing mainly on cybercrime, computer security and user privacy concerns. And what a decade it has been.

Stories here have exposed countless scams, data breaches, cybercrooks and corporate stumbles. In the ten years since its inception, the site has attracted more than 37,000 newsletter subscribers, and nearly 100 million pageviews generated by roughly 40 million unique visitors.

Some of those 40 million visitors left more than 100,000 comments. The community that has sprung up around KrebsOnSecurity has been truly humbling and a joy to watch, and I’m eternally grateful for all your contributions.

One housekeeping note: A good chunk of the loyal readers here are understandably security- and privacy-conscious, and many block advertisements by default — including the ads displayed here.

Just a reminder that KrebsOnSecurity does not run third-party ads and has no plans to change that; all of the creatives you see on this site are hosted in-house, are purely image-based, and are vetted first by Yours Truly. Love them or hate ’em, these ads help keep the content at KrebsOnSecurity free to any and all readers. If you’re currently blocking ads here, please consider making an exception for this site.

Last but certainly not least, thank you for your readership. I couldn’t have done this without your encouragement, wisdom, tips and support. Here’s wishing you all a happy, healthy and wealthy 2020, and for another decade of stories to come.

,

Sam VargheseSmith’s weakness to short-pitched bowling has been exposed

There are two things one can take away from the Australia-New Zealand Test series, even though it is not yet over, and the third and final match remains to be played in Sydney early next year.

One, the rankings system that the International Cricket Council uses is out of sync with reality; if Australia, ranked fifth, can beat second-ranked New Zealand with such ease, then whatever decides those rankings sorely needs re-examination.

The second, and probably more interesting, revelation has been the exposure of Steve Smith’s vulnerability to good short-pitched bowling. Smith has been called many things since he started to accumulate runs, and is now often likened to the late Sir Donald Bradman.

But his inability to play short stuff was demonstrated by New Zealander Neil Wagner, the only one of the four New Zealand pacemen (OK, medium-pacers; none of them can be classified as fast) who uses the bouncer intelligently. Wagner dismissed Smith in both innings, with a beauty of a snorter at the face accounting for him in the first innings.

In the second innings, again it was a ball that was just above hip level that got Smith; he tried to paddle it around but lost control and was caught backward of square leg.

This method of packing off Smith cannot be used in the shorter forms of the game, as there are strict limits on short-pitched bowling, and anyone who persists will be called for wides, then warned and finally stopped from bowling. The short form of the game is all about making runs and the ICC wants to keep it that way, with the balance favouring batsmen.

And Smith does not have to play against the Australian bowlers; they form the best pace attack globally, and he could be troubled by the likes of James Pattinson and Pat Cummins if he were to face them. The rest of the fast bowling fraternity, to the degree it exists now, would have taken note, however, and one is pretty sure that India's Jasprit Bumrah will test him when next they meet in a Test.

Australia are not scheduled to play any more Tests until next summer; there are four tours planned but all are only for playing the shorter forms of the game. India are due to visit next summer and hence one will not see Smith tested in this way until then.

That the ICC’s ranking is a mess is hardly news. The organisation does little that can be called sensible and once put in place a system for determining the change in playing conditions in the event of rain. Designed by the late Richie Benaud, it was used in the 1992 World Cup and the level of ridiculousness was shown by the fact that South Africa was, at one stage in the semi-finals, required to make 22 runs off one ball to win. This happened after rain interrupted the game.

Cory DoctorowScience fiction, Canada and the 2020s: my look at the decade ahead for the Globe and Mail

The editors of Canada’s Globe and Mail asked me to reflect on what science fiction can tell us about the 2020s for their end-of-the-decade package; I wrote about how science fiction can’t predict the future but might inspire it, and how the dystopian malaise of science fiction can be turned into an inspiring tale of “adversity met and overcome – hard work and commitment wrenching a limping victory from the jaws of defeat.”

I describe a scenario for a “Canadian miracle”: “As the vast majority of Canadians come to realize the scale of the crisis, they are finally successful in their demand that their government address it unilaterally, without waiting for other countries to agree.”

Canada goes on a war footing: Full employment is guaranteed to anyone who will work on the energy transition – building wind, tide and solar facilities; power storage systems; electrified transit systems; high-speed rail; and retrofits to existing housing stock for an order-of-magnitude increase in energy and thermal efficiency. All of these are entirely precedented – retrofitting the housing stock is not so different from the job we undertook to purge our homes of lead paint and asbestos, and the cause every bit as urgent.

How will we pay for it? The same way we paid for the Second World War: spending the money into existence (much easier now that we can do so with a keyboard rather than a printing press), then running a massive campaign to sequester all that money in war bonds so it doesn’t cause inflation.

The justification for taking such extreme measures is obvious: a 1000 Year Reich is a horror too ghastly to countenance, but rendering our planet incapable of sustaining human life is even worse.

Science fiction and the unforeseeable future: In the 2020s, let’s imagine better things [Cory Doctorow/Globe and Mail]


Rondam RamblingsThe mother of all buyer's remorse

[Part of an ongoing series of exchanges with Jimmy Weiss.] Jimmy Weiss responded to my post on teleology and why I reject Jimmy's wager (not to be confused with Pascal's wager) nearly a month ago.  I apologize to Jimmy and anyone who has been waiting with bated breath for my response (yeah, right) for the long delay.  Somehow, life keeps happening while I'm not paying attention. So, finally, to

,

Krebs on SecurityRansomware at IT Services Provider Synoptek

Synoptek, a California business that provides cloud hosting and IT management services to more than a thousand customers nationwide, suffered a ransomware attack this week that has disrupted operations for many of its clients, according to sources. The company has reportedly paid a ransom demand in a bid to restore operations as quickly as possible.

Irvine, Calif.-based Synoptek is a managed service provider that maintains a variety of cloud-based services for more than 1,100 customers across a broad spectrum of industries, including state and local governments, financial services, healthcare, manufacturing, media, retail and software. The company has nearly a thousand employees and brought in more than $100 million in revenue in the past year, according to their Web site.

A now-deleted Tweet from Synoptek on Dec. 20 warned against the dangers of phishing-based cyberattacks, less than three days prior to their (apparently phishing-based) Sodinokibi ransomware infestation.

News of the incident first surfaced on Reddit, which lit up on Christmas Eve with posts from people working at companies affected by the outage. The only official statement about any kind of incident came late Friday evening from the company’s Twitter page, which said that on Dec. 23 it experienced a “credential compromise which has been contained,” and that Synoptek “took immediate action and have been working diligently with customers to remediate the situation.”

Synoptek has not yet responded to multiple requests for comment. But two sources who work at the company have now confirmed their employer was hit by Sodinokibi, a potent ransomware strain also known as “rEvil” that encrypts data and demands a cryptocurrency payment in return for a digital key that unlocks access to infected systems. Those sources also say the company paid their extortionists an unverified sum in exchange for decryption keys.

Sources also confirm that both the State of California and the U.S. Department of Homeland Security have been reaching out to state and local entities potentially affected by the attack. One Synoptek customer briefed on the attack who asked to remain anonymous said that once inside Synoptek’s systems, the intruders used a remote management tool to install the ransomware on client systems.

Much like other ransomware gangs operating today, the crooks behind Sodinokibi seem to focus on targeting IT providers. And it’s not hard to see why: With each passing day of an attack, customers affected by it vent their anger and frustration on social media, which places increased pressure on the provider to simply pay up.

A Sodinokibi attack earlier this month on Colorado-based IT services firm Complete Technology Solutions resulted in ransomware being installed on computers at more than 100 dentistry practices that relied on the company. In August, Wisconsin-based IT provider PerCSoft was hit by Sodinokibi, causing outages for more than 400 clients.

To put added pressure on victims to negotiate payment, the purveyors of Sodinokibi recently stated that they plan to publish data stolen from companies infected with their malware who elect to rebuild their operations instead of paying the ransom.

In addition, the group behind the Maze Ransomware malware strain recently began following through on a similar threat, erecting a site on the public Internet that lists victims by name and includes samples of sensitive documents stolen from victims who have opted not to pay. When the site was first set up on Dec. 14, it listed just eight victims; as of today, there are more than two dozen companies named.

,

CryptogramFriday Squid Blogging: New Species of Bobtail Squid

Euprymna brenneri was discovered in the waters of Okinawa.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Cthulhu Fhtagn to Continue

"I'm not sure if Barcelona Metro is asking for my ticket or a blood sacrifice," Paweł S. writes.

 

Scott M. wrote, "I know VBA is considered to be a venerable language, but ...old enough to run on the Commodore PET?"

 

"I don't know about you, but I would LOVE to spend anywhere between negative two and three thousand dollars," writes Alex S.

 

"12.09€ for a mouse and a pair of Adidas trainers? What a deal! ...oh wait...could we lose the recipient? Nevermind, no worries here," Vivia N. wrote.

 

Pascal writes, "Today I learned that Google Translate will sometimes adjust email addresses."

 

Bruce T. writes, "I received a deluge of emails from a Car Hire insurance provider (6 in total in the space of 4 minutes) and, oddly enough, each email appears to have completely failed in the templating engine or mail merge job."

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

LongNowThe Enlightenment is Dead, Long Live the Entanglement

Quantum entanglement. (Courtesy: iStock/Traffic-Analyzer)

We humans are changing. We have become so intertwined with what we have created that we are no longer separate from it. We have outgrown the distinction between the natural and the artificial. We are what we make. We are our thoughts, whether they are created by our neurons, by our electronically augmented minds, by our technologically mediated social interactions, or by our machines themselves. We are our bodies, whether they are born in womb or test tube, our genes inherited or designed, organs augmented, repaired, transplanted, or manufactured. Our prosthetic enhancements are as simple as contact lenses and tattoos and as complex as robotic limbs and search engines. They are both functional and aesthetic. We are our perceptions, whether they are through our eyes and ears or our sensory-fused hyper-spectral sensors, processed as much by computers as by our own cortex. We are our institutions, cooperating super-organisms, entangled amalgams of people and machines with super-human intelligence, processing, sensing, deciding, acting. Our home planet is inhabited by both engineered organisms and evolved machines. Our very atmosphere is the emergent creation of forests, farms and factories. Our networks of commerce, power and communications are becoming as richly interconnected as ecologies and nervous systems. Empowered by the tools of the Enlightenment, connected by networked flows of freight and fuel and finance, by information and ideas, we are becoming something new. We are at the dawn of the Age of Entanglement.

Antoine Lavoisier conducting an experiment related to combustion generated by amplified sun light

In the last age, the Age of Enlightenment, we learned that nature followed laws. By understanding these laws, we could predict and manipulate. We invented science. We learned to break the code of nature and thus empowered, we began to shape the world in the pursuit of our own happiness. We granted ourselves god-like powers: to fly, to communicate across vast distances, to hold frozen moments of sight and sound, to transmute elements, to create new plants and animals. We created new worlds entirely from our imagination. Even Time we harnessed. The same laws that allowed us to explain the motions of the planets, enabled us to build the pendulum of a clock. Thus time itself, once generated by the rhythms of our bodies and the rhythms of the heavens, was redefined by the rhythms of our machines. With our newfound knowledge of natural laws we orchestrated fantastic chains of causes and effect in our political, legal, and economic systems as well as in our mechanisms. Our philosophies neatly separated man and nature, mind and matter, cause and effect. We learned to control.

ENIAC, (Electronic Numerical Integrator And Computer), the first electronic general purpose computer.

Eventually, in the ultimate expression of our Enlightenment exuberance, we constructed digital computers, the very embodiments of cause and effect. Computers are the cathedrals of the Enlightenment, the ultimate expression of logical deterministic control.¹ Through them, we learned to manipulate knowledge, the currency of the Enlightenment, beyond the capacity of our own minds. We constructed new realities. We built complex algorithms with unpredictable behavior. Thus, within this monument to Enlightenment thinking, we sowed the seeds of its demise. We began to build systems with emergent behaviors that grew beyond our own understanding, creating the first crack in the foundation.

The second threat to the foundation of the Enlightenment was in the institutions we created. Our communication technology allowed us to build enterprises of unimaginable scope and capability. A modern corporation or NGO has tens of thousands of people, most of whom have never met one another, who are capable of coordinated action, making decisions that shape the world. Governments are even larger. New kinds of self-organizing collaborations, enabled by our global communications networks, are beginning to emerge. All these kinds of enterprises can become more powerful than the individual humans that created them, and in many senses, they have goals of their own. They tend to act in ways that increase their control of resources and enhance their own survival. They are able to perceive and process far more information than a single human, manipulate more matter and energy, act in more ways and places, command more power, and focus more attention. The individual is no longer the most influential player on the world stage.

As our technological and institutional creations have become more complex, our relationship to them has changed. We now relate to them as we once related to nature. Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.

Photo by Franck V. on Unsplash

The final blow to the Enlightenment will come when we build into our machines the power to learn, adapt, create and evolve. In doing so, we will give them the power to surpass us, to shape the world and themselves in ways that we never could have imagined. We have already given our institutions the ability to act on our behalf, and we are destined to have the same uneasy balance of power with our machines. We will make the same attempts to build in checks and balances, to keep their goals aligned with ours. We will face similar challenges. In doing so we need to move far away from the understandable logic of Enlightenment thinking, into something more complicated. We will worry less about the unpredictable forces of nature than about the unpredictable behaviors of our own constructions.

Neri Oxman’s “Silk Pavilion” was made by 6,500 computer-guided silkworms. Photo by Markus Kayser

So what is this brave new world that we are creating, governed neither by the mysteries of nature nor the logic of science, but by the magic of their entanglement? It is governed by the mathematics of strange attractors. Its geometry is fractal. Its music is improvisational and generative rather than composed: Eno instead of Mozart. Its art is about process more than artifact. Its roots are in Grey Walter’s cybernetic tortoises,² Marvin Minsky’s randomly wired SNARC learning machine,³ and Nicholas Negroponte’s Seek,⁴ in which the architecture of a living space emerged from the interaction of an observant robot with a horde of gerbils. The aesthetic of the Entanglement is the beauty that emerges from processes that are neither entirely natural nor artificial, but blend the best of both: the webs of Neri Oxman’s silk worms,⁵ ⁶ spun over a robot-wired mesh; the physical telepresence of Hiroshi Ishii’s tactile displays⁷ ⁸ or his living bioLogic fabric.⁹ We can no longer see ourselves as separate from the natural world or our technology, but as a part of them, integrated, codependent, and entangled.

Unlike the Enlightenment, where progress was analytic and came from taking things apart, progress in the Age of Entanglement is synthetic and comes from putting things together. Instead of classifying organisms, we construct them. Instead of discovering new worlds, we create them. And our process of creation is very different. Think of the canonical image of collaboration during the Enlightenment: fifty-five white men in powdered wigs sitting in a Philadelphia room, writing the rules of the American Constitution. Contrast that with an image of the global collaboration that constructed the Wikipedia, an interconnected document that is too large and too rapidly changing for any single contributor to even read.

A beautiful example of an Entanglement process is the use of simulated biologically-inspired algorithms to design artificial objects through evolution and morphogenesis. Multiple designs are mutated, bred and selected over many generations in a process analogous to Darwinian selection. The artifacts created by such processes look very different from those produced by engineering.¹⁰ An evolved motorcycle chassis will look more like a pelvic bone than a bicycle frame.¹¹ A computer program produced by a process of evolutionary design may be as difficult to understand as a neural circuit in the brain. Thus, the artifacts that are designed by these biologically-inspired processes take on both the beneficial and the problematic characteristics of biological organisms.¹² Their beauty is in their functional adaption. This is the elegance of the Entanglement: a new expression of beauty emerging from process. In an Entangled design process, the humans will often have input without control; for example, they may influence aesthetic choices by participating in the selection process or by tuning parameters. Such processes lend themselves to collaboration among multiple machines and multiple humans because the interfaces between the parts are fluid and adaptive. The final product is very much a collaborative effort of humans and machines, often with a superior result. It may exhibit behaviors that are surprising to the humans. Some of these behaviors may be adaptive. For example, early walking machines evolved on the Connection Machine took advantage of an obscure round-off error in the floating-point unit that the human programmers did not even know existed.¹³ In this sense, artifacts created by the entangled processes may have some of the robustness of a biological organism, as well as some of the surprise and delight.
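To make the mutate-breed-select loop concrete, here is a toy illustration of my own (not from the essay): a population of parameter vectors evolves toward an arbitrary target purely by random variation plus selection, with no analytic design step anywhere.

using System;
using System.Linq;

class ToyEvolution
{
    static readonly Random Rng = new Random(42);

    // Higher is better: negative distance from an arbitrary target "shape".
    static double Fitness(double[] genome) =>
        -genome.Select((g, i) => Math.Abs(g - Math.Sin(i))).Sum();

    // Variation: copy the parent with small random perturbations.
    static double[] Mutate(double[] parent) =>
        parent.Select(g => g + (Rng.NextDouble() - 0.5) * 0.1).ToArray();

    static void Main()
    {
        // Start from 50 blank "designs" of 8 parameters each.
        var population = Enumerable.Range(0, 50)
            .Select(_ => new double[8])
            .ToList();

        for (int generation = 0; generation < 200; generation++)
        {
            // Selection: keep the fittest half, refill with mutated offspring.
            population = population.OrderByDescending(Fitness).Take(25).ToList();
            population.AddRange(population.Select(Mutate).ToList());
        }

        Console.WriteLine("Best fitness: " + population.Max(Fitness).ToString("F4"));
    }
}

Nothing in the loop "understands" the target; fitness improves anyway, which is exactly the discovering-without-understanding quality described above.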

Besides displaying the organic beauty of organisms, such designs may also exhibit their complex inscrutability, since it may not be obvious how the features in the artifact correspond to the functional requirements. For example, it may be difficult to tell the purpose of a particular line of code in an evolved program. In fact, the very concept of it having a specific purpose is probably ill-formed. The notion of functional decomposition comes from the engineering process of arranging components to embody causes and effects, so functional intention is an artifact of the engineering process. Simulated biological processes do not understand the system in the same sense that a human designer does. Instead, they discover what works without understanding, which has both strengths and weaknesses. Entanglement artifacts are simultaneously artificial and natural; they are both made and born. In the Age of Entanglement, the distinction has little significance.

As we are becoming more entangled with our technologies, we are also becoming more entangled with each other. The power (physical, political, and social) has shifted from comprehensible hierarchies to less-intelligible networks. We can no longer understand how the world works by breaking it down into loosely-connected parts that reflect the hierarchy of physical space or deliberate design. Instead, we must watch the flows of information, ideas, energy and matter that connect us, and the networks of communication, trust, and distribution that enable these flows. This, as Joshua Ramo¹⁴ has pointed out, is “the nature of our age.”

So what are we to think about this new relationship with our technology and with each other? Should we fear it or embrace it? The answer is both. Like any new powerful force in the world, like Science, it will be used for both good and evil. And even when it is intended to be used for good, we will make mistakes. Humanity has been dealing with this conundrum ever since the first cooking fire got out of control and burned down the forest. Recognizing this does not absolve us from our responsibility, it reminds us why it is important. We are remaking ourselves, and we need to choose wisely what we are to become.


Hillis, D. (2016). The Enlightenment is Dead, Long Live the Entanglement. Journal of Design and Science. https://doi.org/10.21428/1a042043. Redistributed under Attribution 4.0 International (CC BY 4.0). Images have been added.

Footnotes

[1] Ramo, J.C. The Seventh Sense: Power, Fortune, and Survival in the Age of Networks.

[2] Augmented Age. Autodesk University. 11906.

[3] Augmented Age. Autodesk University. 11906.

[4] bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces. 1–10.

[5] CAD Is a Lie: Generative Design to the Rescue. Jan. 6.

[6] Control of a Powered Ankle–Foot Prosthesis Based on a Neuromuscular Model. IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING,. 20, 2.

[7] Evolving 3D Morphology and Behavior by Competition. Artificial life. 1, 4, 353–372.

[8] Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration. 461–470.

[9] Robotically controlled fiber-based manufacturing as case study for biomimetic digital fabrication. Green Design, Materials and Manufacturing Processes, CRC Press (London). 473–8.

[10] Seek.

[11] Shape Displays: Spatial Interaction with Dynamic Physical Form. Computer Graphics and Applications, IEEE. 35, 5, 5–11.

[12] Silk pavilion: a case study in fiber-based digital fabrication. Proc. Fabricate. 248–255.

[13] Talking Nets: An Oral History of Neural Networks. 304–305.

[14] Turing’s Cathedral: The Origins of the Digital Universe. ISBN 1400075998.

CryptogramChinese Hackers Bypassing Two-Factor Authentication

Interesting story of how a Chinese state-sponsored hacking group is bypassing the RSA SecurID two-factor authentication system.

How they did it remains unclear; although, the Fox-IT team has their theory. They said APT20 stole an RSA SecurID software token from a hacked system, which the Chinese actor then used on its computers to generate valid one-time codes and bypass 2FA at will.

Normally, this wouldn't be possible. To use one of these software tokens, the user would need to connect a physical (hardware) device to their computer. The device and the software token would then generate a valid 2FA code. If the device was missing, the RSA SecurID software would generate an error.

The Fox-IT team explains how hackers might have gone around this issue:

The software token is generated for a specific system, but of course this system specific value could easily be retrieved by the actor when having access to the system of the victim.

As it turns out, the actor does not actually need to go through the trouble of obtaining the victim's system specific value, because this specific value is only checked when importing the SecurID Token Seed, and has no relation to the seed used to generate actual 2-factor tokens. This means the actor can actually simply patch the check which verifies if the imported soft token was generated for this system, and does not need to bother with stealing the system specific value at all.

In short, all the actor has to do to make use of the 2 factor authentication codes is to steal an RSA SecurID Software Token and to patch 1 instruction, which results in the generation of valid tokens.

Worse Than FailureBest of…: Best of 2019: Temporal Obfuscation

It's the holiday season, and we use this opportunity to take a week and reflect on the best stories of the year. Here, we reach back to January for a tale of variable names and convention. --Remy

We've all been inflicted with completely overdesigned, overly generalized systems created by architects (read: managers) who didn't know how to scope things, or when to stop.

We've all encountered premature optimization, and the subtle horrors that can spawn therefrom.

For that matter, we've all inherited code that was written by individuals (read: cow-orkers) who didn't understand that this is not good variable naming policy.

Jay's boss was a self-taught programmer from way back in the day, and had learned early on to write code that would conserve both memory and CPU cycles on underpowered computers.

He was assigned to work on such a program written by his boss. It quickly became apparent that when it came to variable names, let's just say that his boss was one of those people who believed that usefully descriptive variable names took so much longer to compile that he preemptively chose not to use them, or comments, in order to expedite compiling. Further, he made everything global to save the cost of pushing/popping variables to/from the stack. He even had a convention for naming his variables. Integers were named I1, I2, I3..., strings were named S1, S2, S3..., booleans were named F1, F2, F3...

Thus, his programs were filled with intuitively self-explanatory statements like I23 = J4 + K17. Jay studied the program files for some time and had absolutely no clue as to what it was supposed to do, let alone how.

He decided that the only sane thing that could be done was to figure out what each of those variables represented and rename it to something appropriate. For example, he figured out that S4 was customer name, and then went through the program and replaced every instance of S4 with customer_name. Rinse and repeat for every variable declaration. He spent countless hours at this and thought that he was finally making sense of the program, when he came to a line that, after variable renaming, now said: account_balance = account_balance - zip_code.

Clearly, that seemed wrong. Okay, he must have made a mistake somewhere, so he went back and checked what made him think that those variables were account balance and zip code. Unfortunately, that's exactly what they represented... at the top of the program.

To his chagrin, Jay soon realized that his boss, to save memory, had re-used variables for totally different purposes at different places in the program. The variable that contained zip code at the top contained item cost further down, and account balance elsewhere. The meaning of each variable changed not only by code location and context, but also temporally throughout the execution of the program.
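The trap is easy to reproduce in miniature. The sketch below is invented for illustration (Jay's program was far larger, and not in this language), but it shows why any single rename is only correct for part of the program:

// The boss's convention: globals named by type, reused to "save memory".
let S4: string = "";
let I9: number = 0;

S4 = "90210";             // early on, S4 holds the zip code
// ... hundreds of lines later ...
S4 = "19.99";             // now the same slot holds an item cost
I9 = I9 - Number(S4);     // deduct the cost from the running balance

Rename S4 to zip_code based on its first use, and that last line becomes account_balance = account_balance - zip_code, which is exactly what Jay saw.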

It was at this point that Jay began his nervous breakdown.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Worse Than FailureBest of…: Classic WTF: The Glitch Who Stole Christmas

It's Christmas, and we're going to spend the next week remembering the best moments of 2019, but for now, let's go back to an old Christmas classic. Original.

Every Dev down in Devville liked Christmas a lot…
But the PM who lived in the corner office did NOT!
The PM hated Christmas! The whole Christmas season!
Now, please don’t ask why. No one quite knows his reason.
It could be his head wasn’t screwed on just right.
It could be, that his project timeline was too tight,
But I think the most likely reason of all,
May have been that his brain was two sizes too small.

Whatever the reason, his brain or his sprint,
He stood there on Christmas Eve, squinting a squint,
Staring down from his desk with a sour, PM grimace,
At the cold dark monitors around the office.

For he knew every Dev down in Devville beneath,
Was busy now, hanging a mistletoe wreath.
“And they’re hanging their stockings!” he snarled with a sneer,
“The milestone’s tomorrow! It’s practically here!”
Then he growled, with his PM fingers nervously drumming,
“I MUST find some way to stop Christmas from coming!”
For tomorrow, he knew, all the Dev girls and boys,
Would wake up bright and early. They’d rush for their toys!

And then! Oh, the noise! Oh, the Noise!
Noise! Noise! Noise!
That’s one thing he hated! The NOISE!
NOISE! NOISE! NOISE!

Then, the devs, young and old, would sit down to a feast.
And they’d feast! And they’d feast! And they’d FEAST!
FEAST! FEAST! FEAST!

They would feast on Soylent, and rare energy drinks,
This was something the PM couldn’t stand to think,
And THEN they’d do something he liked least of all!

Every dev down in Devville, the tall and the small,
Would log on together, network lights blinking.
They’d stand, LAN-on-LAN. And the devs would start playing!
They’d play! And they’d play! And they’d PLAY!
PLAY! PLAY! PLAY!

And the more the PM thought of this dev Christmas-thing,
The more the PM thought, “I must stop this whole thing!”
“Why, for twenty-three years I’ve put up with it now!”
“I must stop this Christmas from coming! But HOW?”

Then, he got an idea! An awful idea!
The PM got a wonderful, awful idea!

“I know just what to do!” the PM laughed with a hoot,
And then he ran a command that made a server reboot.
And he chuckled, and clucked, “What a great PM trick!”
“With the server down, they’ll need to come back in, and quick!”
“All I need is an outage…” the PM looked around.
But, since load balancers are robust, there was none to be found.

Did that stop the old PM? No! The PM simply said,
“If I can’t make an outage, I’ll fake one instead!”
So he fired up Outlook, made the font color red,
And typed out a message which frantically said:

“The server is down, the application has crashed,
The developers responsible should have their heads bashed!”

Then the PM clicked SEND and the chain started down,
From the CEO to the devs, asnooze in their town.
All their windows were dark. Quiet snow filled the air.
All the devs were all dreaming sweet dreams without care.

Then he did the same thing to the other Devs’ projects,
Leaving bugs and errors and emails with scary subjects.
“The project is late, we surely are doomed,”
He wrote and sent and the emails zoomed.

And the PM grabbed the source tree and he started to skim,
When he heard someone asking, “Why are you in VIM?”
He turned around fast, and he saw a small Dev!
Little Tina-Kiev Dev, who was an SAII,
The PM had been caught by this tiny code enabler,
Who’d come to the office for her red stapler.

She stared at the PM and said, “Project Lead, why,”
“Why are you checking our source tree? WHY?”
But you know, that old PM was so smart and so slick,
He thought up a lie and he thought it up quick!
“Why, my sweet little tot,” the fake developer lied,
“A line in this code won’t lint and that commit’s denied”
“So I’m checking in a patch, my dear.”
“I’ll release it out there after I fix it up here.”
And this fib fooled the dev. Then he patted her head.
And he got her a red stapler and sent her to bed.

“Feh, feh to the devs!” he was PMishly humming.
“They’re finding out now that no Christmas is coming!”
“They’re just waking up! I know what they’ll do!”
“Their mouths will hang open a minute or two,”
“Then the devs down in Devville will all cry ‘Boo hoo!’”
“That’s a noise,” piped the PM, “That I simply must hear.”

So he paused. And the PM put his hand to his ear.
And he did hear a sound rising over the snow.
It started in low. Then it started to grow.
But the sound wasn’t sad! Why this sounded merry!
It couldn’t be so! But it WAS merry! Very!

He stared down at Devville! The PM popped his eyes!
Then he shook! What he saw was a shocking surprise!

Every Dev down in Devville, the tall and the small,
Was playing! Without any calls at all!
He hadn’t stopped Christmas from coming! It CAME!
Somehow or other, it came just the same!

And the PM, with his PM-feet in sensible shoes,
Stood puzzling and trying to understand this news.
“I sent emails! I marked them important!”
“I filed tickets with statuses of urgent!”
And he puzzled three hours, till his puzzler was sore.
Then, the PM thought of something he hadn’t thought of before!

“Maybe Christmas,” he thought, “doesn’t disrupt my sprint,”
“Maybe Christmas… perhaps blocked days aren’t a misprint.”
And what happened then? Well… in Devville they say,
That the PM’s small brain grew three sizes that day!
And the minute his schedule didn’t feel quite so tight,
He whizzed out of the office through the bright morning light.

Happy Holidays!


[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

CryptogramToTok Is an Emirati Spying Tool

The smartphone messaging app ToTok is actually an Emirati spying tool:

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.

Worse Than FailureCodeSOD: Caga Tió

As we plow into the holiday season, it’s important to remember that each submission (each bit of bad code, each horror story, each personal confession) is its own little gift to us. And, when you write a bit of bad code, you can think of it as a gift for whoever follows you.

Photograph of a typical contemporary Tió

Georgeanna recently opened a gift. She was wondering how their logging layer managed its configuration. She assumed that it would just read it from the config file, but when she tried to change where the logging file got written, say, to report.log, it would turn into report.log.staging.log.

It wasn’t hard to figure out why:

if ($env === "staging") {
    $logpath = self::getLogPath();
    /* Since the staging environment uses the same .ini as the
    * production environment, do an override here. */
    self::$logfile = $logpath . "staging.log";
}

The comment sums it up. Instead of managing multiple configuration files and deciding which one to use when you deploy the code, this just used one single config file and then conditionals to decide what behavior to use.
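For contrast, here is a minimal sketch of the per-environment alternative (the file names and the APP_ENV variable are assumptions, not anything from the submitted code): each deployment selects its own config file at startup, and the application never branches on the environment name.

import { readFileSync } from "fs";

// One config file per environment; staging's file simply names its own
// log path, so no override logic lives in the code.
const env: string = process.env.APP_ENV ?? "production";
const config = JSON.parse(readFileSync(`config.${env}.json`, "utf8"));

const logfile: string = config.logfile;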

This reminds me of a gift I opened myself. I once worked for a company where every application was supposed to reference the standard Environment.dll, and then check isProd or isStaging or isDev and use conditionals to change behavior (instead of having a per-environment config file).

Worse still was what happened when I opened the DLL code: it just checked c:\environment.txt which had “prod”, “stage”, or “dev” written in it. No, it didn’t handle exceptions.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

,

TEDA dangerous woman: Pat Mitchell speaks at TEDWomen 2019

Pat Mitchell speaks at TEDWomen 2019: Bold + Brilliant, December 4-6, 2019, Palm Springs, California. Photo: Marla Aufmuth / TED

Pat Mitchell has nothing left to prove and much less to lose. Now more than ever, she cares less about what others say, speaks her mind freely — and she’s angry, too. She’s become a dangerous woman, through and through.

Not dangerous, as in feared, but fearless; a force to be reckoned with.

On the TEDWomen stage, she invites all women, men and allies to join her in embracing the risks necessary to create a world where safety, respect and truth burn brighter than the darkness of our current times.

“This is all possible because we’re ready for this. We’re better prepared than any generation ever before us,” she says. “Better resourced, better connected, and in many parts of the world we’re living longer than ever.”

On the cusp of 77, Mitchell knows what it takes to turn possibility into reality: her own career blazed an award-winning trail across media and television. Before she launched TEDWomen, she produced and hosted breakthrough television for women, and presided over CNN Productions, PBS and the Paley Center for Media, taking risks all along the way.

“I became a risk-taker early in my life’s journey. I had to, or have my life defined by the limitations for girls growing up in the rural South, especially … with no money, influence or connections,” she says. “But what wasn’t limited was my curiosity about the world beyond my small town.”

She acknowledges her trajectory was colored with gendered advice — become blonde (she did), drop your voice (she tried), lower your necklines (she didn’t) — that sometimes made it difficult to strike a balance between her leadership and womanhood. But now, declaring her pride as a woman leader, activist, advocate and feminist, she couldn’t care less what others say.

Even further, Mitchell states that women shouldn’t wait to be empowered — they must wield the power they already hold. What’s needed are more opportunities to claim, use and share it; for those who’ve forged their paths to reach back and help change the nature of power by dismantling some of the barriers that remain for those who follow.

Iconic playwright George Bernard Shaw, she shares, once wrote: “Life is not a brief candle to me. It is a sort of splendid torch which I have got hold of for a moment, and I want to make it burn as brightly as possible before handing it on to future generations.”

Pat Mitchell believes we’re more than equipped to move our communities forward, together. We have the funds, the technology and the media platforms to elevate each other’s stories and ideas for a better livelihood, a better planet.

And for Mitchell there’s no question that she walks in Shaw’s footsteps, looking forward to a near future where we are willing to take more risks, to be more fearless, to speak up, speak out and show up for one another.

“At this point in my life’s journey, I am not passing my torch,” she says. “I am holding my splendid torch higher than ever, boldly and brilliantly — inviting you to join me in its dangerous light.”

Pat Mitchell speaks at TEDWomen 2019: Bold + Brilliant, December 4-6, 2019, Palm Springs, California. Photo: Marla Aufmuth / TED

Worse Than FailureOut Of Necessity

Cathédrale Saint-Étienne de Toulouse - chapelle des reliques - Confessionnal PM31000752

Zev, a longtime reader of The Daily WTF, has a confession to make.

It all started with the best of intentions. Zev works for a large company doing custom development; they use various databases and tools, but the most common tool they're asked to develop against is VBA for Microsoft Excel with an Access backend. One recent project involved data moving from an on-premise SQL Server solution to the cloud. This meant rebuilding all their reports to connect to an API instead of using ODBC to get the data. Enter Zev.

The cloud tool was pretty well developed. By passing in an API key, you could get data back in a variety of formats, including JSON, HTML, XML, and CSV. Obviously choice number one was JSON, which is quickly becoming the de facto language of APIs everywhere. Upon doing a quick survey, however, Zev found many of his users were stuck on Office 2013, which can't parse JSON natively.

No worries. There's always XML. Zev churned out a quick Excel file with an XML-map in it and used code to pull the data down from the API on demand. Now the hard part: plugging into Access. Turns out, in Office 2013, you can't use a network XML file as a data source, only a local one.

Well, Excel can feed the data into a table, which Access can read, but that takes longer. In Zev's case, far too long: minutes, for a relatively small amount of data. Okay, no problem; the code can download the XML to a local file, then connect to it as an XML table. Except that turns out to be no faster.

Zev's next try was to build Excel files for each of the queries, then connect Access to the Excel files as tables. Then he could add code to open and refresh the Excel files before using them. On some days, that took longer than the old way, while on other days it worked fine. And sometimes it managed to lose the Excel files, or they'd run into lock file issues. What gives?

Zev's testing concluded that the same query took twice as long to return via XML as it did via CSV, which makes sense: XML is about twice as fat as CSV. So the final product used VBA to download the data as a CSV file, then connect to the CSV file as a local Excel table through Access.
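The size difference is easy to see with a toy record (the field names are invented for illustration): the tags account for most of the XML bytes.

// The same row serialized both ways.
const xmlRow: string =
  "<row><customer>ACME</customer><balance>123.45</balance><zip>90210</zip></row>";
const csvRow: string = "ACME,123.45,90210";

console.log(xmlRow.length, csvRow.length); // 77 vs 17: the markup dominates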

In Zev's own words:

My greatest fear is that someone will see this code and submit it to the Daily WTF and ask “why?”

We tried to use JSON, because it is the new hotness. But lo, our tools did not support it. We tried to use XML because it was the last hotness. But lo, it took too long to process. Shuddering and sobbing, we defaulted to CSV. And so we wrote code 20 years out of date, not out of hubris or lack of desire to learn, but out of cold, heartless, necessity.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Sam VargheseFast bowlers have lost their balls

There was a period in the 20th century when the game of cricket had more world-class fast bowlers than at any other time. Between 1974 and 1994, pacemen emerged in different countries as though they were coming off an assembly line.

It made the game of cricket, which many call boring, an exciting spectacle.

From Dennis Lillee and Jeff Thomson, to Andy Roberts, Michael Holding, Colin Croft, Joel Garner, the late Malcolm Marshall, Imran Khan, Sarfraz Nawaz, Wasim Akram, Waqar Younis, Devon Malcolm, Bob Willis, Ian Botham, Allan Donald, Fanie de Villiers, Richard Hadlee, Courtney Walsh, Curtly Ambrose, Patrick Patterson and Craig McDermott, they were of several different types and temperaments as is to be expected.

But in one aspect they were all the same: they went out aiming to scare the batsmen into getting out and they mostly succeeded. At times, they resorted to verballing the batsmen as when Marshall reportedly told David Boon, “Are you going to get out or do I have to come around the wicket and kill you?”

Since the mid-1990s, the type of fast bowler who has been emerging has changed. There is an obsession with keeping the runs down, something which even started preoccupying the mind of Ambrose, once a bowler who had a deadly yorker that would send the stumps cartwheeling. The new brand of paceman was typified by Glenn McGrath who was overly keen on length and line and bored the hell out of those watching on.

During those two decades, there was every chance that there would be blood on the pitch before the day was out. After that, it became much less common.

True, since then we have seen the death of a batsman, Phillip Hughes, in first-class cricket in 2014, but that was due to an inadvertent accident rather than deliberate targeting by a bowler. It wasn’t a case of a bowler like Croft, an unpleasant man, who wouldn’t bother going over the wicket at all, but would come around the wicket right from the start of his spell. No, the man who caused Hughes’ death was Sean Abbott, not even one who bowls express pace, and one yet to ascend to the national ranks. A helmet with a flap at the back to protect Hughes’ neck would probably have saved him.

Where once the crowd looked for a particular paceman to come on and excite them, these days there are mostly yawns. And that’s because there is little to rouse the passions of those in the stands. The batsmen don’t need the kind of skills or bravery that players like Brian Close showed; on more than one occasion, the Englishman took repeated blows on the body in order to ensure that he did not lose his wicket.

These days, fast bowlers do not know how to get the ball to come up to the ribcage or chest and frighten the hell out of batsmen. There is a bunch of commentators who keep jabbering on about the speed of each ball, but when the bowler chooses to go past the off-stump or only focuses on keeping the runs down, what is the point? During the Ashes in 2019, there was much excitement when Barbados-born Jofra Archer pinged Steve Smith on the noggin, flooring the Australian and ensuring that he would have to miss the next Test. But such occurrences are the exception, never the rule.

There were any number of spells bowled by pacemen in those 20 years which can be described as hostile. But since then, the cricket field has become a sedate place, where one is expected to be a gentleman, never a fierce competitor like Lillee, who once aimed a kick at the backside of Pakistan’s Javed Miandad. The latter charged down and tried to belt the moustachioed Australian with his bat. Oh, for a scene like that when Australia next plays a Test at home.

Things have got to the point that a few bouncers bowled during the first Test of the ongoing series had the media resorting to the word “bodyline”. Yes, seriously! And we are talking here of bowlers like Tim Southee and Neil Wagner, more schoolteacher types, and hardly the sort to inspire fear in even a college XI. There is just one word for this: exaggeration.

This does not mean that bowlers cannot do their jobs well. No, they are efficient at winning games for their countries. Men like Mitchell Starc and Patrick Cummins for Australia, and Jasprit Bumrah for India, take plenty of wickets and give their sides an advantage. But it all ends there. You wouldn’t go to a ground specifically because one of them was going to figure in a game. On the other hand, it was well worth a trip to the ground to watch Holding skim over the surface with the grace of a ballet dancer, en route to creating havoc at the other end. The umpire hardly heard a sound as the man known as Whispering Death reached the crease and delivered the ball in one smooth motion.

So is cricket a better game today than it was in the 1970s, 1980s or 1990s? Most certainly not. There are a lot of international games, in all three formats. But the contest has become so skewed in favour of the batsmen that Australia even resorted to using sandpaper to roughen the ball in 2018, to try and gain an advantage during a Test series against South Africa. Captain Steve Smith, David Warner and Cameron Bancroft spent some time away from the game for creating what came to be known as Sandpapergate.

I guess one has to be resigned to the placid spectacle. There is more than a little effort directed towards trying to hype things up by means of sound, colour and spectacle at the various grounds. But nothing will ever substitute for the sight of a Thomson thundering up to the crease, flinging his head back and hurling a projectile at some quivering batsman 22 yards away. There was something earthy and primitive about it. Cricket is now too corporatised for there to ever be another Jeff Thomson.

,

Cory DoctorowParty Discipline, a Walkaway story (Part 4) (the final part!)

In my latest podcast (MP3), I conclude my serial reading of my novella Party Discipline, which I wrote while on a 35-city, 45-day tour for my novel Walkaway in 2017; Party Discipline is a story set in the world of Walkaway, about two high-school seniors who conspire to throw a “Communist Party” at a sheet metal factory whose owners are shutting down and stealing their workers’ final paychecks. These parties are both literally parties — music, dancing, intoxicants — and “Communist” in that the partygoers take over the means of production and start them up, giving away the products they create to the attendees. Walkaway opens with a Communist Party and I wanted to dig into what might go into pulling one of those off.

Here’s part 1 of the reading, here’s part 2, and here’s part 3.

We rode back to Burbank with Shirelle on my lap and one of my butt-cheeks squeezed between the edge of the passenger seat and the door. The truck squeaked on its suspension as we went over the potholes, riding low with a huge load of shopping carts under tarps in its bed. The carts were pretty amazing: strong as hell but light enough for me to lift one over my head, using crazy math to create a tensegrity structure that would hold up to serious abuse. They were rustproof, super-steerable and could be reconfigured into different compartment-sizes or shelves with grills that clipped to the sides. And light as they were, you put enough of them into a truck and they’d weigh a ton. A literal ton, and Jose—our driver’s—truck was only rated for a half-ton. It was a rough ride.

Our plan was to pull up on skid row and start handing out carts to anyone around, giving people two or three to share with their friends. Each truck had a different stretch we were going to hit, but as we got close to our spot, two things became very apparent: one, there were no homeless people around, because two, the place was crawling with five-oh. The Burbank cops had their dumb old tanks out, big armored MRAPs they used for riot control and whenever they wanted to put on a show of force, and there was a lot of crime-scene tape and blinking lights on hobby-horses.

MP3

,

CryptogramFriday Squid Blogging: Streamlined Quick Unfolding Investigation Drone

Yet another squid acronym.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cory DoctorowRadicalized is one of the LA Public Library’s books of the year!

It’s not just the CBC and the Wall Street Journal — I was delighted to see this morning that Radicalized, my 2019 book of four science fiction novellas made the LA Public Library’s list of the top books of 2019! “As always his writing is sharp and clear, covering the absurdities that surround and infiltrate our lives, and predicts new ones waiting for us just around the corner. A compelling, thought provoking, macabre funny read.”

Cory DoctorowMy annual Daddy-Daughter Xmas Podcast: interview with an 11-year-old

Every year, I record a short podcast with my daughter, Poesy. Originally, we’d just sing Christmas carols, but with Poesy being nearly 12, we’ve had a moratorium on singing. This year, I interviewed Poe about her favorite Youtubers, books, apps, and pastimes, as well as her feelings on data-retention (meh) and horses (love ’em). And we even manage to squeeze in a song!

Google AdsenseMarketing Communications Specialist

It’s vital to create a flawless website user experience (UX). A UX strategy can help you rectify anything that compromises the website experience.

LongNowAI Unearths New Nazca Line in the Shape of a Humanoid Figure

The Nazca lines in Peru have baffled archaeologists for a century. Photo Credit: Jon Arnold Images Ltd/Alamy Stock Photo

In Southern Peru, deep in the Nazca Desert, ancient etchings spread across the landscape. To an observer at ground level, they appear as lines cut into the desert surface. Most are straight, while others curve and change direction, seemingly at random. Viewed from foothills or in the air, however, the etchings are revealed as figurative symbols, or geoglyphs. From this vantage, the Nazca lines take the form of geometric and biomorphic shapes, depicting plants, animals, and anthropomorphic figures.

The meaning and purpose of the Nazca lines have remained a mystery to archaeologists since their modern discovery in the 01920s. Some theorize that the etchings mark solstice points. Others believe they are artistic offerings to deities in the sky.

Archaeologists estimate the Nazca created several thousand lines between 0200 BCE and 0600 CE, using a methodical process of extracting specific stones to expose the white sands beneath. Researchers have long believed that there are many more Nazca lines yet to be discovered, but traditional methods of identifying the lines are time-consuming and demanding. Additionally, many of the lines have been damaged by floods, and disrupted by roads and infrastructure expansion.

Humanoid figure is the newest addition to the Nazca Lines. Photo Credit: IBM Research.

In recent years, a research team at Yamagata University has turned to an unconventional aid in its search: artificial intelligence. And it’s working better than anyone expected. On 15 November 02019, after decades of fieldwork and with extensive collaboration with IBM and their PAIRS Geoscope, the team announced that a total of 143 new designs had been uncovered.

The AI technology deploys deep-learning algorithms to synthesize vast and diverse data, from LiDAR, drone and satellite imagery to geospatial and geographical surveys. The result is a set of high-fidelity 3-D maps of the surrounding search areas. Next, the AI is taught via a neural network to recognize the data patterns of known lines. The AI then searches for new ones over a stretch of 5 kilometers of terrain.

Left, Humanoid, Right, Humanoid Processed Picture. Photo Credit: Yamagata University IBM Japan.

One of the more curious recent discoveries was the above futuristic-looking humanoid figure.

The image is processed to outline and highlight the etchings for vastly improved visibility. The figure joins a collection of more than 2,000 previously known Nazca Lines. Other symbols include a fish, a hermit bird and a two-headed snake. In addition, IBM has made the detection technology open source so that other ventures can benefit from it, for example to identify crops and improve irrigation management across the globe. The team plans to continue its work with more capable AI systems and richer inputs, such as laser mapping data and advanced aerial imagery.

The project aims both to investigate and to preserve: to document and understand the Nazca Lines as a whole. Once the team has a better understanding of the distribution of the lines, it can accelerate research into the best ways to preserve and protect them.

Learn More

  • Read Yamagata University’s press release on the newly-discovered geoglyphs.
  • Learn more about the IBM PAIRS geoscope technology that is helping scientists discover more geoglyphs.

Worse Than FailureError'd: Laws of Thermodynamics be Damned!

"I went to check my heat and, much to my surprise, my house had broken the laws of physics," Robert J. writes.

 

Dylan N. wrote, "I never have liked Cyber Mondays."

 

"When it comes to sending emergency alerts, my school is SO bad that it [INSERT_HOW_BAD_TEXT]!!" Jack writes.

 

Pascal wrote, "Ooh! By the looks of this guy, Twilight Zone might have some weird face-shifting plot twist!"

 

"Sigh...Looks like I'll need to wait approximately 584,942,417 years to hear from my friends again..." writes Matthieu G.

 

"I tried accessing Interactive Broker's simulated trading account on a random Friday, and well, maybe I just picked the wrong time or something becuase, behold, HTTP error code 601," Dima R. wrote,

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Harald WelteSoftware Freedom Podcast #3 about Free Software mobile phone communication

Recently I had the pleasure of being part of the 3rd incarnation of a new podcast series by the Free Software Foundation Europe: The Software Freedom Podcast.

In episode 3, Matthias and Bonnie of the FSFE are interviewing me about various high-level topics related to [the lack of] Free Software in cellular telephony, as well as some of the projects that I was involved in (Openmoko, Osmocom).

We've also touched on the current mainstream/political debate about Huawei and 5G networks, where my position can only be summarized as: it doesn't matter much in which country the related proprietary software is developed. What we need is Free / Open Source software that can be publicly audited, and a method by which the operator can ensure that a given version of that FOSS software is actually executing on his equipment.

Thanks to the FSFE for covering such underdeveloped areas of Free Software, and to use their podcast to distribute related information and ideas.

CryptogramLousy IoT Security

DTEN makes smart screens and whiteboards for videoconferencing systems. Forescout found that their security is terrible:

In total, our researchers discovered five vulnerabilities of four different kinds:

  • Data exposure: PDF files of shared whiteboards (e.g. meeting notes) and other sensitive files (e.g., OTA -- over-the-air updates) were stored in a publicly accessible AWS S3 bucket that also lacked TLS encryption (CVE-2019-16270, CVE-2019-16274).
  • Unauthenticated web server: a web server running Android OS on port 8080 discloses all whiteboards stored locally on the device (CVE-2019-16271).
  • Arbitrary code execution: unauthenticated root shell access through Android Debug Bridge (ADB) leads to arbitrary code execution and system administration (CVE-2019-16273).
  • Access to Factory Settings: provides full administrative access and thus a covert ability to capture Windows host data from Android, including the Zoom meeting content (audio, video, screenshare) (CVE-2019-16272).

These aren't subtle vulnerabilities. These are stupid design decisions made by engineers who had no idea how to create a secure system. And this, in a nutshell, is the problem with the Internet of Things.

From a Wired article:

One issue that jumped out at the researchers: The DTEN system stored notes and annotations written through the whiteboard feature in an Amazon Web Services bucket that was exposed on the open internet. This means that customers could have accessed PDFs of each others' slides, screenshots, and notes just by changing the numbers in the URL they used to view their own. Or anyone could have remotely nabbed the entire trove of customers' data. Additionally, DTEN hadn't set up HTTPS web encryption on the customer web server to protect connections from prying eyes. DTEN fixed both of these issues on October 7. A few weeks later, the company also fixed a similar whiteboard PDF access issue that would have allowed anyone on a company's network to access all of its stored whiteboard data.

[...]

The researchers also discovered two ways that an attacker on the same network as DTEN devices could manipulate the video conferencing units to monitor all video and audio feeds and, in one case, to take full control. DTEN hardware runs Android primarily, but uses Microsoft Windows for Zoom. The researchers found that they can access a development tool known as "Android Debug Bridge," either wirelessly or through USB ports or ethernet, to take over a unit. The other bug also relates to exposed Android factory settings. The researchers note that attempting to implement both operating systems creates more opportunities for misconfigurations and exposure. DTEN says that it will push patches for both bugs by the end of the year.

Boing Boing article.

CryptogramAttacker Causes Epileptic Seizure over the Internet

This isn't a first, but I think it will be the first conviction:

The GIF set off a highly unusual court battle that is expected to equip those in similar circumstances with a new tool for battling threatening trolls and cyberbullies. On Monday, the man who sent Eichenwald the moving image, John Rayne Rivello, was set to appear in a Dallas County district court. A last-minute rescheduling delayed the proceeding until Jan. 31, but Rivello is still expected to plead guilty to aggravated assault. And he may be the first of many.

The Epilepsy Foundation announced on Monday it lodged a sweeping slate of criminal complaints against a legion of copycats who targeted people with epilepsy and sent them an onslaught of strobe GIFs -- a frightening phenomenon that unfolded in a short period of time during the organization's marking of National Epilepsy Awareness Month in November.

[...]

Rivello's supporters -- among them, neo-Nazis and white nationalists, including Richard Spencer -- have also argued that the issue is about freedom of speech. But in an amicus brief to the criminal case, the First Amendment Clinic at Duke University School of Law argued Rivello's actions were not constitutionally protected.

"A brawler who tattoos a message onto his knuckles does not throw every punch with the weight of First Amendment protection behind him," the brief stated. "Conduct like this does not constitute speech, nor should it. A deliberate attempt to cause physical injury to someone does not come close to the expression which the First Amendment is designed to protect."

Another article.

EDITED TO ADD(12/19): More articles.

Worse Than FailureLying Metrics

Locator LED

Our anonymous submitter—we'll call him Russell—was a senior engineer supporting an equally anonymous web service that was used by his company's desktop software for returning required data. Russell had a habit of monitoring the service's performance each day, always on the lookout for trouble. One fateful morning, the anomalies piled on thick.

Over the past 24 hours, the host server's average response time had halved, and yet the service was also suddenly dealing with four times as many requests as usual. Average CPU and memory usage on the server had doubled, as had the load on the Oracle host. Even stranger, there was no increase in server errors.

Russell couldn't imagine what might've happened, as no changes had been deployed. However, his product team had recently committed to reducing average server response time. It was possible that someone else had modified an upstream service or some database queries. He emailed the rest of the team and other teams he worked closely with, detailing what he'd seen and asking whether anyone had any pertinent information.

The response from the engineers was basically, Hmm, odd. No, we didn't change anything. The response from the product architects really shouldn't have surprised Russell, given he'd been working in enterprise for nearly 20 years. The reply-all frenzy can be summed up as, You mean we've already fulfilled our commitment to reduce average response time?! LET'S FIRE OFF A SELF-CONGRATULATORY COMPANY-WIDE EMAIL!!!

Upon seeing this, Russell immediately replied: Hold on, let's try to find out what's happening here first.

Unfortunately, he was too late to stop the announcement, but that didn't stop him from investigating further. He remembered that their default monitoring of server errors filtered out 404s. Upon turning off that filter, he found that the number of 404s thrown by the server roughly matched the number of additional requests. Previously, average response time had been around 100ms; at present, it was about 45ms. This "triumph" hid the fact that the numerous 404s were processed in about 10ms each, while the non-404 requests were processed in about 150ms each—50% slower than usual. In other words, the web service's performance had been seriously degraded.
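The counts below are illustrative (the story only gives the averages), but they reproduce the arithmetic: three cheap 404s for every real request pull the mean down to 45ms, even though the real requests got 50% slower.

// Weighted average response time: junk 404s versus real requests.
const buckets = [
  { kind: "404", count: 3, ms: 10 },  // bogus traffic, ~10ms each
  { kind: "200", count: 1, ms: 150 }, // real work, up from ~100ms
];
const totalMs = buckets.reduce((sum, b) => sum + b.count * b.ms, 0);
const totalCount = buckets.reduce((sum, b) => sum + b.count, 0);
console.log(totalMs / totalCount); // 45: the "improved" average hiding the regression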

Russell dug further to figure out who was performing this low-key DDoS attack. The requests were authenticated, so he knew the calls were coming from inside the house. He managed to trace them to another product within his company. This product had to make a request to his web service in about 1% of their sessions, but that considerably slowed down their handling of those particular sessions. As a result, someone had modified the product to fire off an asynchronous request to Russell's service for every session, simply ignoring the response if it was a 404.

Russell emailed his findings to his team, but received no reply. Feeling bold, he directly contacted the project manager of the offending product. This led to the biggest WTF of all: the PM apologized and got the change rolled back right away. By the next day, everything was back to normal—but the product architects were angry over the embarrassment caused by their own premature celebration. They were likely also miffed about being forced to find real ways of improving average server response time. Their misplaced ire led to Russell being fired a short time later.

However, our story has a happy ending. The super-responsive product team hired Russell back on after a couple of months, with a 25% pay raise. He retained seniority, and was allowed to keep his former benefits as well as his severance package. In the end, the forces that'd sought to be rid of him had only succeeded in giving him a highly-paid vacation.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Sociological ImagesVenti Voting?

I just wrapped up my political sociology class for the semester. We spent a lot of time talking about conflict and polarization, reading research on why people avoid politics, the spread of political outrage, and why exactly liberals drink lattes. When we become polarized, small choices in culture and consumption—even just a cup of coffee—can become signals for political identities. 

After the liberals and lattes piece, one of my students wrote a reflection memo and mentioned a previous instructor telling them which brand of coffee to drink if they wanted to support a certain political party. This caught my attention, because (at least in the student’s recollection) the instructor was completely wrong. This led to a great discussion about corporate political donations, especially how frequent contributions often go bipartisan.

But where does your money go when you buy your morning coffee? Thanks to open-access data on political contributions, we can look at the partisan lean of the four largest coffee chains in the United States.

Starbucks’ swing to the left is notable here, as is the rightward spike in Dunkin’s donations in the 2014 midterms. While these patterns tend to follow the standard corporate image for each, it is important to remember that even chains that lean one way still mix their donations. In midterm years like 2012 and 2014, about 20% of Starbucks’ donations went to Republicans.

One side effect of political polarization is that corporate politics don’t always follow cultural codes. For another good recent example of this, see Chick-fil-A reconsidering its donation policies.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureShining Brillance

Jarad was still recovering from his encounter with Intelligenuity’s most “brillant” programmer, Keisha, when a new hire, Aaron, showed up at Jarad’s office.

The migration from .NET to Java remained the large project dominating their timelines, but Aaron was hired to keep the .NET side of things on track: handling bugs, adding desperately needed features, and doing general maintenance. It was made emphatically clear by the project managers that hiring more .NET developers was not an admission that the conversion to Java had failed, but would “free up resources” to better focus on the Java side of things.

Aaron moved fast to establish himself. He scheduled a presentation in the first week. He was vague about what, exactly, the presentation was about ahead of time. So, when the lights came down and the projector lit up, everyone was a bit surprised to see their .NET code in his slides.

“This,” he explained, “is our application code. I wanted to give you a walk through the code, so we all as a team have a better understanding.”

Jarad and his co-workers exchanged glances, silently wondering if this was for real. Was Aaron really about to explain the code they had written to them?

“This line here,” Aaron said, pointing to a for loop, “is an interesting construct. It will repeat the code which follows to be repeated once for each element in the array.” A few slides later, highlighting a line which read, x = new AccountModel(), Aaron explained. “This creates an instance of an account model object. The instance is of the class, while the class defines what is common across all objects.”

That hour long meeting was one of the longest hours of Jarad’s life. It was a perfect storm of tedium, insult, and incompetence.

Afterwards, Jarad grabbed his manager, Regine. “Like, do you think Aaron is going to actually be a good fit?”

“Oh, I’m sure he’ll be fine. Look how well he understands our code already!”

That laid out the pattern of working with Aaron. During one team meeting, the team got sidetracked discussing the best approach to managing a very specific exception in a very specific section of their code. Fifteen minutes after the meeting, Aaron followed up with an email: “Re: Exception Handling”, which consisted of a bad paraphrase of the Exception class documentation from the MSDN site. Another day, during another meeting, someone mentioned concurrency, so Aaron followed up with an email that broadly plagiarized a Stack Overflow post describing the ProcessThread object.

And, on each one of those emails, Regine and several other project managers were CCed. The result was that the management team felt that Aaron was a great communicator, who constantly was adding value to the team. He was a mentor. An asset. The kind of person that should be invited to every one of the project management meetings, because he was extremely technical but also the kind of communicator and go-getter that had management written all over him.

Among the developers, Aaron’s commits were a running joke. He submitted non-working code, code that violated every standard practice and styleguide entry they used, code without tests, code with tests that would pass no matter what happened, code that didn’t compile, and code that was clearly copy/pasted from a tutorial without bothering to try and fix the indentation.

It was no surprise, then, that a few months later, Aaron announced that he was now a “System Architect”, a role that did not actually exist in their org-chart, but which Aaron assured them meant he could tell them how to write software. Jarad went to Regine, along with a few other developers, and raised their concerns. Specifically: Aaron had invented a new job role and was claiming authority he didn’t have, he didn’t have the seniority for a promotion at this time, he didn’t actually know what he was doing, and he was killing team morale.

“Are you familiar with the crab mentality?” Regine asked. “I’m concerned that you’re being poor team players and a negative influence. You should be happy for Aaron’s success, because it reflects on how good our team is!”

Jarad and the rest of the team soon discovered that Regine was right. Now that Aaron was a “System Architect” he was too busy building presentations, emailing barely comprehensible and often inaccurate summaries of documentation, and scheduling meetings to actually write any code. Team performance improved, and it was trivial to configure one’s inbox to send Aaron’s messages straight to spam.

Aaron’s “communication style” kept getting him scheduled to do more presentations where he could explain simple programming concepts to different layers of management. The general consensus was that they didn’t understand what he was talking about, but he must be very smart to talk about it with a PowerPoint deck.

After their next release of their .NET product, Aaron scheduled a meeting with some of the upper tier management to review the project. He once again dazzled them with his explanation of the difference between an object and a class, with a brief foray into the difference between reference and value types, and then followed up with an email, thanking them all for their time.

On this email, he CCed the VP of the company.

The VP of the company was also one of the founders, and was a deeply technical person. She never related her reasoning to anyone, but based on Aaron’s email, she scheduled a meeting with him. It was no trick finding out that the meeting was going to take place: Aaron made sure to let everyone on the team know. “I have to block off everything from 3PM on Thursday, because I have a meeting with the VP.” “Can we table that? It’s probably best if we discuss after my meeting with the VP.” “I’ll be back later, it’s time for my meeting with the VP.”

No one knows exactly what happened in that meeting. What was said or done is between Aaron and the VP. But 45 minutes later, both Aaron and the VP walked onto the developers’ floor. Aaron was watching his shoes, and the VP was staring daggers at the back of his neck. She marched Aaron into Regine’s office, and closed the door. For the next twenty minutes, the VP vented her frustration. When her voice got raised, words like “enabling” and “incompetence” and “inappropriate” and “hiring practices” leaked out.

The VP stormed back out, leaving Regine and Aaron to discuss Aaron’s severance. That was the last day anyone saw Aaron.

Well, until Jarad started thinking about attending a local tech conference. Aaron, as it turns out, will be one of the speakers, discussing some “cutting edge” .NET topics.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Krebs on SecurityNuclear Bot Author Arrested in Sextortion Case

Last summer, a wave of sextortion emails began flooding inboxes around the world. The spammers behind this scheme claimed they’d hacked your computer and recorded videos of you watching porn, and promised to release the embarrassing footage to all your contacts unless a bitcoin demand was paid. Now, French authorities say they’ve charged two men they believe are responsible for masterminding this scam. One of them is a 21-year-old hacker interviewed by KrebsOnSecurity in 2017 who openly admitted to authoring a banking trojan called “Nuclear Bot.”

On Dec. 15, the French news daily Le Parisien published a report stating that French authorities had arrested and charged two men in the sextortion scheme. The story doesn’t name either individual, but rather refers to one of the accused only by the pseudonym “Antoine I.,” noting that his first name had been changed (presumably to protect his identity because he hasn’t yet been convicted of a crime).

“According to sources close to the investigation, Antoine I. surrendered to the French authorities at the beginning of the month, after being hunted down all over Europe,” the story notes. “The young Frenchman, who lived between Ukraine, Poland and the Baltic countries, was indicted on 6 December for ‘extortion by organized gang, fraudulent access to a data processing system and money laundering.’ He was placed in pre-trial detention.”

According to Le Parisien, Antoine I. admitted to being the inventor of the initial 2018 sextortion scam, which was subsequently imitated by countless other ne’er-do-wells. The story says the two men deployed malware to compromise at least 2,000 computers that were used to blast out the sextortion emails.

While that story is light on details about the identities of the accused, an earlier version of it published Dec. 14 includes more helpful clues. The Dec. 14 piece said Antoine I. had been interviewed by KrebsOnSecurity in April 2017, where he boasted about having created Nuclear Bot, a malware strain designed to steal banking credentials from victims.

My April 2017 exposé featured an interview with Augustin Inzirillo, a young man who came across as deeply conflicted about his chosen career path. That path became traceable after he released the computer code for Nuclear Bot on GitHub. Inzirillo outed himself by defending the sophistication of his malware after it was ridiculed by both security researchers and denizens of the cybercrime underground, where copies of the code wound up for sale. From that story:

“It was a big mistake, because now I know people will reuse my code to steal money from other people,” Inzirillo told KrebsOnSecurity in an online chat.

Inzirillo released the code on GitHub with a short note explaining his motivations, and included a contact email address at a domain (inzirillo.com) set up long ago by his father, Daniel Inzirillo.

KrebsOnSecurity also reached out to Daniel, and heard back from him roughly an hour before Augustin replied to requests for an interview. Inzirillo the elder said his son used the family domain name in his source code release as part of a misguided attempt to impress him.

“He didn’t do it for money,” said Daniel Inzirillo, whose CV shows he has built an impressive career in computer programming and working for various financial institutions. “He did it to spite all the cyber shitheads. The idea was that they wouldn’t be able to sell his software anymore because it was now free for grabs.”

If Augustin Inzirillo ever did truly desire to change his ways, it wasn’t clear from his apparent actions last summer: The Le Parisien story says the sextortion scams netted the Frenchman and his co-conspirator at least a million Euros.

In August 2018, KrebsOnSecurity was contacted by a researcher working with French authorities on the investigation who said he suspected the young man was bragging on Twitter that he used a custom version of Nuclear Bot dubbed “TinyNuke” to steal funds from customers of French and Polish banks.

The source said this individual used the now-defunct Twitter account @tiny_gang1 to taunt French authorities, while showing off a fan of 100-Euro notes allegedly gained from his illicit activities. It seemed to the source that Inzirillo wanted to get caught, because at one point @tiny_gang1 even privately shared a copy of Inzirillo’s French passport to prove his identity and accomplishments to the researcher.

“He modified the Tinynuke’s config several times, and we saw numerous modifications in the malware code too,” the source said. “We tried to compare his samples with the leaked code available on GitHub and we noticed that the guy actually was using a more advanced version with features that don’t exist in the publicly available repositories. As an example, custom samples have video recording functionality, socks proxy and other features. So the guy clearly improved the source code and recompiled a new version for every new campaign.”

The source said the person behind the @tiny_gang Twitter account attacked French targets with custom versions of TinyNuke in one to three campaigns per week earlier this year, harvesting French bank accounts and laundering the stolen funds via a money mule network based mostly in the United Kingdom.

“If the guy behind this campaign is the malware author, it could easily explain the modifications happening with the malware, and his French is pretty good,” the researcher told KrebsOnSecurity. “He’s really provocative and I think he wants to be arrested in France because it could be a good way to become famous and maybe prove that his malware works (to resell it after?).”

The source said the TinyNuke author threatened him with physical harm after the researcher insulted his intelligence while trying to goad him into disclosing more details about his cybercrime activities.

“The guy has a serious ego problem,” the researcher said. “He likes when we talk about him and he hates when we mock him. He got really angry as time went by and started personally threatening me. In the last [TinyNuke malware configuration file] targeting Poland we found a long message dedicated to me with clear physical threats.”

All of the above is consistent with the findings detailed in the Le Parisien report, which quoted French investigators saying Antoine I. in October 2019 used a now-deleted Twitter account to taunt the authorities into looking for him. In one such post, he included a picture of himself holding a beer, saying: “On the train to Naples. You should send me a registered letter instead of threatening guys informally.”

The Le Parisien story also said Antoine I. threatened a researcher working with French authorities on the investigation (the researcher is referred to pseudonymously as “Marc”).

“I make a lot more money than you, I am younger, more intelligent,” Antoine I. reportedly wrote in July 2018 to Marc. “If you do not stop playing with me, I will put a bullet in your head.”

French authorities say the defendant managed his extortion operations while traveling throughout Ukraine and other parts of Eastern Europe. But at some point he decided to return home to France, despite knowing investigators there were hunting him. According to Le Parisien, he told the French authorities he wanted to cooperate in the investigation and that he no longer wished to live like a fugitive.

CryptogramIranian Attacks on Industrial Control Systems

New details:

At the CyberwarCon conference in Arlington, Virginia, on Thursday, Microsoft security researcher Ned Moran plans to present new findings from the company's threat intelligence group that show a shift in the activity of the Iranian hacker group APT33, also known by the names Holmium, Refined Kitten, or Elfin. Microsoft has watched the group carry out so-called password-spraying attacks over the past year that try just a few common passwords across user accounts at tens of thousands of organizations. That's generally considered a crude and indiscriminate form of hacking. But over the last two months, Microsoft says APT33 has significantly narrowed its password spraying to around 2,000 organizations per month, while increasing the number of accounts targeted at each of those organizations almost tenfold on average.

[...]

The hackers' motivation -- and which industrial control systems they've actually breached -- remains unclear. Moran speculates that the group is seeking to gain a foothold to carry out cyberattacks with physically disruptive effects. "They're going after these producers and manufacturers of control systems, but I don't think they're the end targets," says Moran. "They're trying to find the downstream customer, to find out how they work and who uses them. They're looking to inflict some pain on someone's critical infrastructure that makes use of these control systems."

It's unclear whether the attackers are causing any actual damage, or just gaining access for some future use.

Worse Than FailureCodeSOD: We Go to School

Sometimes, it feels like any programming question you might have has a thread on StackOverflow. It might not have an answer, but it’s probably there. Between that, online guidebooks, tools with decent documentation, and YouTube programming tutorials, there are a lot of great ways to learn how to solve any given programming task.

Andreas R had a programming task. Specifically, Andreas wanted to create sortable tables that worked like those on MediaWiki sites. A quick google for “sort html table” turned up a source which offered… this.

function sortTable() {
  var table, rows, switching, i, x, y, shouldSwitch;
  table = document.getElementById("myTable");
  switching = true;
  /* Make a loop that will continue until
  no switching has been done: */
  while (switching) {
    // Start by saying: no switching is done:
    switching = false;
    rows = table.rows;
    /* Loop through all table rows (except the
    first, which contains table headers): */
    for (i = 1; i < (rows.length - 1); i++) {
      // Start by saying there should be no switching:
      shouldSwitch = false;
      /* Get the two elements you want to compare,
      one from current row and one from the next: */
      x = rows[i].getElementsByTagName("TD")[0];
      y = rows[i + 1].getElementsByTagName("TD")[0];
      // Check if the two rows should switch place:
      if (x.innerHTML.toLowerCase() > y.innerHTML.toLowerCase()) {
        // If so, mark as a switch and break the loop:
        shouldSwitch = true;
        break;
      }
    }
    if (shouldSwitch) {
      /* If a switch has been marked, make the switch
      and mark that a switch has been done: */
      rows[i].parentNode.insertBefore(rows[i + 1], rows[i]);
      switching = true;
    }
  }
}

This code works, for very limited values of “works”. It’s a bubble sort: after every swap it rescans the table from the top, exchanging the first out-of-order pair it finds, until a full pass makes no swaps. It always skips the first row, under the assumption that we’re looking at a table with headers. It only ever sorts by the first column. And it does all the sorting directly in the DOM, which is a great way to really add some overhead to your data manipulation.

There are a lot of shady, skeevy tutorial sites, and some of them are really good at search engine optimization. This is one of those. It’s the sort of site anyone with any experience knows is a bad source, but those without that experience are left to learn the hard way.

TRWTF are sites that spend more time and energy on SEO than on providing helpful content. At least when we share bad code, we know it’s bad, and so does our audience.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Krebs on SecurityRansomware Gangs Now Outing Victim Businesses That Don’t Pay Up

As if the scourge of ransomware wasn’t bad enough already: Several prominent purveyors of ransomware have signaled they plan to start publishing data stolen from victims who refuse to pay up. To make matters worse, one ransomware gang has now created a public Web site identifying recent victim companies that have chosen to rebuild their operations instead of quietly acquiescing to their tormentors.

The message displayed at the top of the Maze Ransomware public shaming site.

Less than 48 hours ago, the cybercriminals behind the Maze Ransomware strain erected a Web site on the public Internet, and it currently lists the company names and corresponding Web sites for eight victims of their malware that have declined to pay a ransom demand.

“Represented here companies dont wish to cooperate with us, and trying to hide our successful attack on their resources,” the site explains in broken English. “Wait for their databases and private papers here. Follow the news!”

KrebsOnSecurity was able to verify that at least one of the companies listed on the site indeed recently suffered from a Maze ransomware infestation that has not yet been reported in the news media.

The information disclosed for each Maze victim includes the initial date of infection, several stolen Microsoft Office, text and PDF files, the total volume of files allegedly exfiltrated from victims (measured in Gigabytes), as well as the IP addresses and machine names of the servers infected by Maze.

As shocking as this new development may be to some, it’s not like the bad guys haven’t warned us this was coming.

“For years, ransomware developers and affiliates have been telling victims that they must pay the ransom or stolen data would be publicly released,” said Lawrence Abrams, founder of the computer security blog and victim assistance site BleepingComputer.com. “While it has been a well-known secret that ransomware actors snoop through victim’s data, and in many cases steal it before the data is encrypted, they never actually carried out their threats of releasing it.”

Abrams said that changed at the end of last month, when the crooks behind Maze Ransomware threatened Allied Universal that if they did not pay the ransom, they would release their files. When they did not receive a payment, they released 700MB worth of data on a hacking forum.

“Ransomware attacks are now data breaches,” Abrams said. “During ransomware attacks, some threat actors have told companies that they are familiar with internal company secrets after reading the company’s files. Even though this should be considered a data breach, many ransomware victims simply swept it under the rug in the hopes that nobody would ever find out. Now that ransomware operators are releasing victim’s data, this will need to change and companies will have to treat these attacks like data breaches.”

The move by Maze Ransomware comes just days after the cybercriminals responsible for managing the “Sodinokibi/rEvil” ransomware empire posted on a popular dark Web forum that they also plan to start using stolen files and data as public leverage to get victims to pay ransoms.

The leader of the Sodinokibi/rEvil ransomware gang promising to name and shame victims publicly in a recent cybercrime forum post. Image: BleepingComputer.

This is especially ghastly news for companies that may already face steep fines and other penalties for failing to report breaches and safeguard their customers’ data. For example, healthcare providers are required to report ransomware incidents to the U.S. Department of Health and Human Services, which often documents breaches involving lost or stolen healthcare data on its own site.

While these victims may be able to avoid reporting ransomware incidents if they can show forensic evidence demonstrating that patient data was never taken or accessed, sites like the one that Maze Ransomware has now erected could soon dramatically complicate these incidents.

Cory DoctorowParty Discipline, a Walkaway story (Part 3)

In my latest podcast (MP3), I continue my serial reading of my novella Party Discipline, which I wrote while on a 35-city, 45-day tour for my novel Walkaway in 2017; Party Discipline is a story set in the world of Walkaway, about two high-school seniors who conspire to throw a “Communist Party” at a sheet metal factory whose owners are shutting down and stealing their workers’ final paychecks. These parties are both literally parties — music, dancing, intoxicants — and “Communist” in that the partygoers take over the means of production and start them up, giving away the products they create to the attendees. Walkaway opens with a Communist Party and I wanted to dig into what might go into pulling one of those off.

Here’s part 1 of the reading and here’s part 2.

We told them they could go home if they didn’t want to risk coming to the Communist party, but we told them that after we told them that they were the only kids in the whole school we trusted enough to invite to it, and made sure they all knew that if they backed out, there’d be no hard feelings—and no chance to change their mind later tonight when they were at a corny party with a bunch of kids instead of making glorious revolution.

Every one of them said they’d come.

I’d found an all-ages show in Encino that night, two miles from Steelbridge, Antoine’s old job. We got piled into Ubers heading for the club, chatting about inconsequentialities for the in-car cameras and mics, and every one of us paid cover for the club, making sure to use traceable payment systems that would alibi us as having gone in for the night. Then we all met in the back alley, letting ourselves out of the fire-doors in ones and twos. I did a head-count to make sure we were all there, squashed together in a spot out of view of the one remaining camera back there (I’d taken out the other one the day before, wearing a hoodie and gloves, sliding along the wall so that I was out of its range until I was reaching up to smear it with some old crank-case oil).

We hugged the wall until we were back out into the side streets. All our phones were off and bagged, and everyone had maps that used back streets without cameras to get to Steelbridge. We strung out in groups of two to five, at least half a block between us, so no one would see a big group of kids Walking While Brown and call in the cops.

MP3

,

Krebs on SecurityInside ‘Evil Corp,’ a $100M Cybercrime Menace

The U.S. Justice Department this month offered a $5 million bounty for information leading to the arrest and conviction of a Russian man indicted for allegedly orchestrating a vast, international cybercrime network that called itself “Evil Corp” and stole roughly $100 million from businesses and consumers. As it happens, for several years KrebsOnSecurity closely monitored the day-to-day communications and activities of the accused and his accomplices. What follows is an insider’s look at the back-end operations of this gang.

Image: FBI

The $5 million reward is being offered for 32-year-old Maksim V. Yakubets, who the government says went by the nicknames “aqua” and “aquamo,” among others. The feds allege Aqua led an elite cybercrime ring with at least 16 others who used advanced, custom-made strains of malware known as “JabberZeus” and “Bugat” (a.k.a. “Dridex”) to steal banking credentials from employees at hundreds of small- to mid-sized companies in the United States and Europe.

From 2009 to the present, Aqua’s primary role in the conspiracy was recruiting and managing a continuous supply of unwitting or complicit accomplices to help Evil Corp. launder money stolen from their victims and transfer funds to members of the conspiracy based in Russia, Ukraine and other parts of Eastern Europe. These accomplices, known as “money mules,” are typically recruited via work-at-home job solicitations sent out by email and to people who have submitted their resumes to job search Web sites.

Money mule recruiters tend to target people looking for part-time, remote employment, and the jobs usually involve little work other than receiving and forwarding bank transfers. People who bite on these offers sometimes receive small commissions for each successful transfer, but just as often end up getting stiffed out of a promised payday, and/or receiving a visit or threatening letter from law enforcement agencies that track such crime (more on that in a moment).

HITCHED TO A MULE

KrebsOnSecurity first encountered Aqua’s work in 2008 as a reporter for The Washington Post. A source said they’d stumbled upon a way to intercept and read the daily online chats between Aqua and several other mule recruiters and malware purveyors who were stealing hundreds of thousands of dollars weekly from hacked businesses.

The source also discovered a pattern in the naming convention and appearance of several money mule recruitment Web sites being operated by Aqua. People who responded to recruitment messages were invited to create an account at one of these sites, enter personal and bank account data (mules were told they would be processing payments for their employer’s “programmers” based in Eastern Europe) and then log in each day to check for new messages.

Each mule was given busy work or menial tasks for a few days or weeks prior to being asked to handle money transfers. I believe this was an effort to weed out unreliable money mules. After all, those who showed up late for work tended to cost the crooks a lot of money, as the victim’s bank would usually try to reverse any transfers that hadn’t already been withdrawn by the mules.

One of several sites set up by Aqua and others to recruit and manage money mules.

When it came time to transfer stolen funds, the recruiters would send a message through the mule site saying something like: “Good morning [mule name here]. Our client — XYZ Corp. — is sending you some money today. Please visit your bank now and withdraw this payment in cash, and then wire the funds in equal payments — minus your commission — to these three individuals in Eastern Europe.”

Only, in every case the company mentioned as the “client” was in fact a small business whose payroll accounts they’d already hacked into.

Here’s where it got interesting. Each of these mule recruitment sites had the same security weakness: Anyone could register, and after logging in any user could view messages sent to and from all other users simply by changing a number in the browser’s address bar. As a result, it was trivial to automate the retrieval of messages sent to every money mule registered across dozens of these fake company sites.

So, each day for several years my morning routine went as follows: Make a pot of coffee; shuffle over to the computer and view the messages Aqua and his co-conspirators had sent to their money mules over the previous 12-24 hours; look up the victim company names in Google; pick up the phone to warn each that they were in the process of being robbed by the Russian Cyber Mob.

My spiel on all of these calls was more or less the same: “You probably have no idea who I am, but here’s all my contact info and what I do. Your payroll accounts have been hacked, and you’re about to lose a great deal of money. You should contact your bank immediately and have them put a hold on any pending transfers before it’s too late. Feel free to call me back afterwards if you want more information about how I know all this, but for now please just call or visit your bank.”

Messages to and from a money mule working for Aqua’s crew, circa May 2011.

In many instances, my call would come in just minutes or hours before an unauthorized payroll batch was processed by the victim company’s bank, and some of those notifications prevented what otherwise would have been enormous losses — often several times the amount of the organization’s normal weekly payroll. At some point I stopped counting how many tens of thousands of dollars those calls saved victims, but over several years it was probably in the millions.

Just as often, the victim company would suspect that I was somehow involved in the robbery, and soon after alerting them I would receive a call from an FBI agent or from a police officer in the victim’s hometown. Those were always interesting conversations. Needless to say, the victims that spun their wheels chasing after me usually suffered far more substantial financial losses (mainly because they delayed calling their financial institution until it was too late).

Collectively, these notifications to Evil Corp.’s victims led to dozens of stories over several years about small businesses battling their financial institutions to recover their losses. I don’t believe I ever wrote about a single victim that wasn’t okay with my calling attention to their plight and to the sophistication of the threat facing other companies.

LOW FRIENDS IN HIGH PLACES

According to the U.S. Justice Department, Yakubets/Aqua served as leader of Evil Corp. and was responsible for managing and supervising the group’s cybercrime activities in deploying and using the Jabberzeus and Dridex banking malware. The DOJ notes that prior to serving in this leadership role for Evil Corp, Yakubets was also directly associated with Evgeniy “Slavik” Bogachev, a previously designated Russian cybercriminal responsible for the distribution of the Zeus, Jabber Zeus, and GameOver Zeus malware schemes who currently has a $3 million FBI bounty on his head.

Evgeniy M. Bogachev, in undated photos.

As noted in previous stories here, during times of conflict with Russia’s neighbors, Slavik was known to retool his crime machines to search for classified information on victim systems in regions of the world that were of strategic interest to the Russian government – particularly in Turkey and Ukraine.

“Cybercriminals are recruited to Russia’s national cause through a mix of coercion, payments and appeals to patriotic sentiment,” reads a 2017 story from The Register on security firm Cybereason’s analysis of the Russian cybercrime scene. “Russia’s use of private contractors also has other benefits in helping to decrease overall operational costs, mitigating the risk of detection and gaining technical expertise that they cannot recruit directly into the government. Combining a cyber-militia with official state-sponsored hacking teams has created the most technically advanced and bold cybercriminal community in the world.”

This is interesting because the U.S. Treasury Department says Yakubets as of 2017 was working for the Russian FSB, one of Russia’s leading intelligence organizations.

“As of April 2018, Yakubets was in the process of obtaining a license to work with Russian classified information from the FSB,” notes a statement from the Treasury.

The Treasury Department’s role in this action is key because it means the United States has now imposed economic sanctions on Yakubets and 16 accused associates, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with these individuals.

The Justice Department’s criminal complaint against Yakubets (PDF) mentions several intercepted chat communications between Aqua and his alleged associates in which they puzzle over why KrebsOnSecurity seemed to know so much about their internal operations and victims. In the following chat conversations (translated from Russian), Aqua and others discuss a story I wrote for The Washington Post in 2009 about their theft of hundreds of thousands of dollars from the payroll accounts of Bullitt County, Ky:

tank: [Are you] there?
indep: Yeah.
indep: Greetings.
tank: http://voices.washingtonpost.com/securityfix/2009/07/an_odyssey_of_fraud_part_ii.html#more
tank: This is still about me.
tank: Originator: BULLITT COUNTY FISCAL Company: Bullitt County Fiscal Court
tank: He is the account from which we cashed.
tank: Today someone else send this news.
tank: I’m reading and thinking: Let me take a look at history. For some reason this name is familiar.
tank: I’m on line and I’ll look. Ah, here is this shit.
indep: How are you?
tank: Did you get my announcements?
indep: Well, I congratulate [you].
indep: This is just fuck when they write about you in the news.
tank: Whose [What]?
tank: 😀
indep: Too much publicity is not needed.
tank: Well, so nobody knows who they are talking about.

tank: Well, nevertheless, they were writing about us.
aqua: So because of whom did they lock Western Union for Ukraine?
aqua: Tough shit.
tank: *************Originator: BULLITT COUNTY FISCAL Company: Bullitt
County Fiscal Court
aqua: So?
aqua: This is the court system.
tank: Shit.
tank: Yes
aqua: This is why they fucked [nailed?] several drops.
tank: Yes, indeed.
aqua: Well, fuck. Hackers: It’s true they stole a lot of money.

At roughly the same time, one of Aqua’s crew had a chat with Slavik, who used the nickname “lucky12345” at the time:

tank: Are you there?
tank: This is what they damn wrote about me.
tank: http://voices.washingtonpost.com/securityfix/2009/07/an_odyssey_of_fraud_part_ii.html#more
tank: I’ll take a quick look at history
tank: Originator: BULLITT COUNTY FISCAL Company: Bullitt County Fiscal Court
tank: Well, you got [it] from that cash-in.
lucky12345: From 200K?
tank: Well, they are not the right amounts and the cash out from that account was shitty.
tank: Levak was written there.
tank: Because now the entire USA knows about Zeus.
tank: 😀
lucky12345: It’s fucked.

On Dec. 13, 2009, one of the Jabberzeus gang’s money mule recruiters – a crook who used the pseudonym “Jim Rogers” – somehow learned about something I hadn’t shared beyond a few trusted friends at that point: That The Washington Post had eliminated my job in the process of merging the newspaper’s Web site (where I worked at the time) with the dead tree edition. The following is an exchange between Jim Rogers and the above-quoted “tank”:

jim_rogers: There is a rumor that our favorite (Brian) didn’t get his contract extension at Washington Post. We are giddily awaiting confirmation 🙂 Good news expected exactly by the New Year! Besides us no one reads his column 🙂

tank: Mr. Fucking Brian Fucking Kerbs!

In March 2010, Aqua would divulge in an encrypted chat that his crew was working directly with the Zeus author (Slavik/Lucky12345), but that they found him abrasive and difficult to tolerate:

dimka: I read about the king of seas, was it your handy work?
aqua: what are you talking about? show me
dimka: zeus
aqua: 🙂
aqua: yes, we are using it right now
aqua: its developer sits with us on the system
dimka: it’s a popular thing
aqua: but, he, fucker, annoyed the hell out of everyone, doesn’t want to write bypass of interactives (scans) and trojan penetration 35-40%, bitch
aqua: yeah, shit
aqua: we need better
aqua: http://voices.washingtonpost.com/securityfix read it 🙂 here you find almost everything about us 🙂
dimka: I think everything will be slightly different, if you think so
aqua: we, in this system, the big dog, the rest on the system are doing small crap

Later that month, Aqua bemoaned even more publicity about their work, pointing to a KrebsOnSecurity story about a sophisticated attack in which their malware not only intercepted a one-time password needed to log in to the victim’s bank account, but even modified the bank’s own Web site as displayed in the victim’s browser to point to a phony customer support number.

Ironically, the fake bank phone number was what tipped off the victim company employee. In this instance, the victim’s bank — Fifth Third Bank (referred to as “53” in the chat below) — was able to claw back the money stolen by Aqua’s money mules, but not funds that were taken via fraudulent international wire transfers. The cybercriminals in this chat also complain they will need a newly-obfuscated version of their malware due to public exposure:

aqua: tomorrow, everything should work.
aqua: fuck, we need to find more socks for spam.
aqua: okay, so tomorrow Petro [another conspirator who went by the nickname Petr0vich] will give us a [new] .exe
jtk: ok
jim_rogers: this one doesn’t work
jim_rogers: http://www.krebsonsecurity.com/2010/03/crooks-crank-up-volume-of-e-banking-attacks/
jim_rogers: here it’s written about my transfer from 53. How I made a number of wires like it said there. And a woman burnt the deal because of a fake phone number.

ANTI-MULE INITIATIVE

In tandem with the indictments against Evil Corp, the Justice Department joined with officials from Europol to execute a law enforcement action and public awareness campaign to combat money mule activity.

“More than 90% of money mule transactions identified through the European Money Mule Actions are linked to cybercrime,” Europol wrote in a statement about the action. “The illegal money often comes from criminal activities like phishing, malware attacks, online auction fraud, e-commerce fraud, business e-mail compromise (BEC) and CEO fraud, romance scams, holiday fraud (booking fraud) and many others.”

The DOJ said U.S. law enforcement disrupted mule networks that spanned from Hawaii to Florida and from Alaska to Maine. Actions were taken to halt the conduct of over 600 domestic money mules, including 30 individuals who were criminally charged for their roles in receiving victim payments and providing the fraud proceeds to accomplices.

Some tips from Europol on how to spot money mule recruitment scams dressed up as legitimate job offers.

It’s good to see more public education about the damage that money mules inflict, because without them most of these criminal schemes simply fall apart. Aside from helping to launder funds from banking trojan victims, money mules often are instrumental in fleecing elderly people taken in by various online confidence scams.

It’s also great to see the U.S. government finally wielding its most powerful weapon against cybercriminals based in Russia and other safe havens for such activity: Economic sanctions that severely restrict cybercriminals’ access to ill-gotten gains and the ability to launder the proceeds of their crimes by investing in overseas assets.

Further reading:

DOJ press conference remarks on Yakubets
FBI charges announced in malware conspiracy
2019 indictment of Yakubets, Turashev, et al.
2010 Criminal complaint vs. Yakubets, et al.
FBI “wanted” alert on Igor “Enki” Turashev
US-CERT alert on Dridex

,

Sam VargheseA small step for Australian women, a giant leap for Tracey Spicer

A year and nine months after she founded NOW Australia, claiming it was meant to focus on the problem of women being sexually harassed in the workplace, former TV newsreader Tracey Spicer is once again avoiding public appearances in order, she claims, to focus on her own mental health.

Spicer has retreated like this on earlier occasions too: she disappeared after actor John Jarratt was cleared of harassment charges and also when actor Geoffrey Rush won a case against the Daily Telegraph that had accused him of sexual harassment.

After a series of incidents that can only lead to one conclusion – Spicer’s embrace of the #MeToo movement was meant more to embellish her own image than anything else – the women’s movement in Australia has been put on the back foot and left wondering how it will recover from the Spicer show.

In 2006, after 14 years at Channel Ten, Spicer was sacked when she returned to work after having a second child. She turned it into an exercise to gain publicity, accusing the network of discrimination and threatening a court fight, but later accepting a settlement. That itself should have made any observer understand what she was about; had she wanted to expose discrimination, she would have gone ahead with the threatened case. But this episode served its purpose and gave her a public profile.

After a stint with Sky News, with whom she worked until 2015, Spicer took up the #MeToo mantle soon after the antics of film mogul Harvey Weinstein came to light in the US in October 2017. She put out a tweet, inviting women to send her their stories of harassment, saying: “Currently, I am investigating two long-term offenders in our media industry. Please, contact me privately to tell your stories.” It must be noted that prior to this, Spicer had dropped hints here and there that she understood that the problem was widespread.

But when her tweet resulted in a large number of responses, Spicer professed that she was amazed to hear from so many women. This contradicted what she had been saying prior to her tweet. She could not keep up with responding to these poor souls. In March the following year, NOW Australia was set up, apparently to cater to these women’s needs. They needed professional help – from lawyers, counsellors, psychologists and the like. Spicer has no qualifications apart from a general graduate degree.

Unlike its American counterpart, known as Time’s Up, NOW has not managed to raise the funds or support needed to run such a show. It has been something of a disaster and the rosy pictures painted in the media have been an exaggeration. In fact, the media coverage has been the only area in which NOW has excelled. In reality, the women who sought solace by writing to Spicer have been led up the garden path. And there are a fair number of them, more than 2000.

Spicer has used some of the material she collected to front a three-part TV program on the ABC under the name Silent No More. But that has led to more revelations which do not cast her in a very good light.

For one, Spicer, who has always played up the fact that she has 30 years’ media experience, allowed the production company making the show to film her sitting at a computer where complaints from some women were clearly visible. This was in early versions of the program which were distributed to media for publicity.

One thing which journalists are taught on day one is to never reveal sources or source material. Yet when Spicer was asked about this major lapse, she blamed the ABC and the production company! It is part of a pattern – she refuses to accept the blame for anything that has blown up in her face.

When three of the women whose names were exposed in this manner made comments to media outlets that were critical of Spicer, she retaliated by sending them legal notices and demanding they keep mum. In one case, Spicer demanded $1500 as legal costs. The saviour of sexually harassed women had turned out to be a different kind of harasser herself. Would this encourage women to tell their tales to others? Hardly.

Spicer has also lied when it suited her and helped to boost her profile. Australia set up an inquiry into sexual harassment in the workplace in 2018 and, in a newspaper article, Spicer claimed that she had proposed the idea to the Sex Discrimination Commissioner, Kate Jenkins. But Jenkins denied that Spicer had any role in the setting up of the inquiry, telling the Buzzfeed website: “Tracey Spicer was not involved in conceiving of or establishing the national inquiry, nor did she suggest the idea of the inquiry to me.”

In November, Spicer managed to wrangle an invitation to address the National Press Club in Canberra. After her talk, she fielded questions from the audience. Three questions from women journalists – Claudia Long of the ABC, Gina Rushton of Buzzfeed and Alice Workman of The Australian – were met with spin.

Long asked whether Spicer’s mismanagement of the responses had possibly knocked some of the steam out of the women’s movement; Rushton asked whether the remainder of the 2000-plus women who had written to Spicer should also be concerned about their privacy; and Workman asked why Spicer had allowed cameras to film her computer screen and whether she was concerned that this was potentially unethical as a journalist.

Spicer evaded answering any of these questions. She just talked around the queries in what was a perfect display of what PR people do.

The impression that Spicer has used her foray into the #MeToo movement in Australia as a PR blitz for herself gathered steam after she was given three hours on the ABC to front a program titled Silent No More.

There was little of substance in the program, which served mainly to give people various angles of Spicer’s visage, numerous motherhood statements from her, and some patronising comments to both men and women at large. It gave the impression that sexual harassment is a PR problem.

The absence of any serious discussion of sexual harassment with qualified people – psychologists, counsellors, medical staff or lawyers – was notable. As noted above, Spicer has no qualifications beyond a general graduate degree and is incapable of bringing an expert view to the issue. She, herself, has not experienced sexual harassment beyond the garden variety that practically every woman in the workplace goes through.

Whatever happens to the women’s movement in Australia, one thing is clear: Tracey Spicer has put the brakes on at a pivotal moment. As the saying goes, one needs to strike while the iron is hot. That moment has long passed. Spicer has done sexually harassed women a singular disservice.

https://www.theage.com.au/culture/tv-and-radio/false-and-malicious-tracey-spicer-s-lawyer-hits-back-at-detractors-20191206-p53hnu.html

https://www.buzzfeed.com/hannahryan/metoo-movement-now-australia-tracey-spicer

https://www.afr.com/rear-window/how-dumb-does-tracey-spicer-think-we-are-20191203-p53gbl

,

LongNowThe 5 Questions We Need to Answer About Artificial Intelligence — Gurjeet Singh at The Interval

Creators of AI systems have a responsibility to figure out how they might go wrong, and govern them accordingly.

From Gurjeet Singh’s Interval talk, “The Shape of Data and Things to Come.”

About this Talk

Big Data promises unparalleled insights, but the larger the data, the harder they are to find. The key to unlocking them was discovered by mathematicians in the 18th century. A modern mathematician explains how to find patterns in data with new algorithms for old math.

About Gurjeet Singh

Gurjeet Singh is Chief AI Officer and co-founder of Symphony AyasdiAI. He leads a technology movement that emphasizes the importance of extracting insight from data, not just storing and organizing it. Beginning with his tenure as a graduate student in Stanford’s Mathematics Department, he has developed key mathematical and machine learning algorithms for Topological Data Analysis (TDA) and their applications. Before starting Ayasdi, he worked at Google and Texas Instruments.

Dr. Singh holds a Technology degree from Delhi University and a Computational Mathematics Ph.D. from Stanford. He serves on the Technology Advisory Board at HSBC and on the U.S. Commodity Futures Trading Commission’s Technology Advisory Committee. He was named to Silicon Valley Business Journal’s “40 Under 40” list in 02015. Gurjeet lives in Palo Alto with his wife and two children and develops multi-legged robots in his spare time.

,

LongNowIs Mars the Solution for Earth’s Problems?

Geologist Marcia Bjornerud and Long Now’s Executive Director Alexander Rose debate about whether going to Mars is a viable long-term sustainability plan for human survival.

From Marcia Bjornerud’s Long Now talk, “Timefulness.”

About the Talk

We need a poly-temporal worldview to embrace the overlapping rates of change that our world runs on, especially the huge, powerful changes that are mostly invisible to us.

Geologist Marcia Bjornerud teaches that kind of time literacy. With it, we become at home in the deep past and engaged with the deep future. We learn to “think like a planet.”

As for climate change… “Dazzled by our own creations,” Bjornerud writes, “we have forgotten that we are wholly embedded in a much older, more powerful world whose constancy we take for granted…. Averse to even the smallest changes, we have now set the stage for environmental deviations that will be larger and less predictable than any we have faced before.”

About Marcia Bjornerud

A professor of geology and environmental studies at Lawrence University in Wisconsin, Marcia Bjornerud is author of Timefulness: How Thinking Like a Geologist Can Help Save the World (2018) and Reading the Rocks: The Autobiography of the Earth (2005).

MEsystemd-nspawn and Private Networking

Currently there are two things I want to do with my PC at the same time: one is watching streaming services like ABC iView (which won’t run from non-Australian IP addresses) and the other is torrenting over a VPN. I had considered doing something ugly with iptables to try and get routing done on a per-UID basis, but that seemed too difficult. At the time I wasn’t aware of the ip rule add uidrange [1] option. So setting up a private networking namespace with a systemd-nspawn container seemed like a good idea.
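For reference, the uidrange approach would look something like the following sketch. The UID, the routing table number, and the VPN interface name (tun0) are illustrative assumptions, not details from my setup:

# send all traffic from UID 1001 to routing table 100
ip rule add uidrange 1001-1001 lookup 100
# table 100 routes everything via the VPN interface
ip route add default dev tun0 table 100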

Chroot Setup

For the chroot (which I use as a slang term for a copy of a Linux installation in a subdirectory) I used a btrfs subvol that’s a snapshot of the root subvol. The idea is that when I upgrade the root system I can just recreate the chroot with a new snapshot.

To get this working I created files in the root subvol which are used for the container.
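As a minimal sketch of that snapshot arrangement (assuming the root filesystem is itself a btrfs subvol and that /subvols, the directory used for the container below, is on the same filesystem):

# take a writable snapshot of the root subvol for the container to use
btrfs subvolume snapshot / /subvols/torrent
# after upgrading the root system, recreate the container from a fresh snapshot
btrfs subvolume delete /subvols/torrent
btrfs subvolume snapshot / /subvols/torrent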

I created a script like the following named /usr/local/sbin/container-sshd to launch the container. It sets up the networking and executes sshd. The systemd-nspawn program is designed to launch init, but that’s not required; I prefer to just launch sshd so there’s only one running process in a container that’s not being actively used.

#!/bin/bash

# restorecon commands only needed for SE Linux
/sbin/restorecon -R /dev
# give the container a private /run and the directory sshd expects
/bin/mount none -t tmpfs /run
/bin/mkdir -p /run/sshd
/sbin/restorecon -R /run /tmp
# configure the container end (host0) of the veth pair; the netmask must
# cover the gateway below (255.255.0.0 would leave 10.2.0.1 off-link)
/sbin/ifconfig host0 10.3.0.2 netmask 255.0.0.0
/sbin/route add default gw 10.2.0.1
# run sshd in the foreground as the container's only process
exec /usr/sbin/sshd -D -f /etc/ssh/sshd_torrent_config

How to Launch It

To set up the container I used a command like “/usr/bin/systemd-nspawn -D /subvols/torrent -M torrent --bind=/home -n /usr/local/sbin/container-sshd”.

First I had tried the --network-ipvlan option, which creates a new IP address on the same MAC address. That gave me an interface iv-br0 on the container that I could use normally (br0 being the bridge used in my workstation as its primary network interface). The IP address I assigned to that was in the same subnet as br0, but for some reason that’s unknown to me (maybe an interaction between bridging and network namespaces) I couldn’t access it from the host; I could only access it from other hosts on the network. I then tried the --network-macvlan option (to create a new MAC address for virtual networking), but that had the same problem with accessing the IP address from the local host outside the container, as well as problems with MAC redirection to the primary MAC of the host (again maybe an interaction with bridging).

Then I tried just the “-n” option which gave it a private network interface. That created an interface named ve-torrent on the host side and one named host0 in the container. Using ifconfig and route to configure the interface in the container before launching sshd is easy. I haven’t yet determined a good way of configuring the host side of the private network interface automatically.
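Done manually, the host side might look something like the following sketch; the VPN interface name tun0 is an assumption, and the addresses match the container script above:

# bring up the host end of the veth pair with the gateway address
ip link set ve-torrent up
ip addr add 10.2.0.1/8 dev ve-torrent
# forward the container's traffic out via the VPN
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.3.0.2/32 -o tun0 -j MASQUERADE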

I had to use a bind for /home because /home is a subvol and therefore doesn’t get included in the container by default.

How it Works

Now when it’s running I can just “ssh -X” to the container and then run graphical programs that use the VPN while at the same time running graphical programs on the main host that don’t use the VPN.
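A usage sketch (the account name is illustrative):

# X11-forwarded login to the container address set in the script above
ssh -X user@10.3.0.2
# then, inside that session, anything launched uses the container's
# network namespace and therefore the VPN:
firefox &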

Things To Do

Find out why --network-ipvlan and --network-macvlan don’t work with communication from the same host.

Find out why --network-macvlan gives errors about MAC redirection when pinging.

Determine a good way of setting up the host side after the systemd-nspawn program has run.

Find out if there are better ways of solving this problem, this way works but might not be ideal. Comments welcome.

,

LongNowDigital Repatriations: Historic Recordings Returned to Passamaquoddy Tribe

Jesse Walter Fewkes records the Passamaquoddy Tribe in 01890. Photo: Passamaquoddy Cultural Heritage Museum.

In 01890, anthropologist Jesse Walter Fewkes traveled to Eastern Maine to document the Passamaquoddy Tribe. By then, war, disease, and unhonored treaties by local and federal authorities had reduced the tribe to a few hundred members.

Fewkes brought with him one of Thomas Edison’s phonographs — a technology less than a decade old. Over the course of several days, Fewkes recorded members of the tribe singing, telling stories, and providing basic pronunciation examples of words for things like numbers and days onto large wax cylinders in 3-minute increments.

The Fewkes recordings represented a significant ethnographic advancement for the burgeoning field of Anthropology. It was the first time sounds had ever been recorded in the field. The recordings were given to Boston’s Peabody Museum, and acquired by the Library of Congress in 01976. It wasn’t until the 01980s, when the Library of Congress informed the Passamaquoddy of their existence, that any tribal members heard the recordings. The Passamaquoddy discovered that some material that was considered sacred and not meant to be heard outside of the tribe, such as a funeral ceremony, had been available for the general public to listen to for years.

The knowledge of the recordings came at a time of resurgence for the Passamaquoddy following a century of hardship. As E. Tammy Kim, writing in The New Yorker, puts it: “For decades, tribal members had suffered extreme poverty, seen their language banned by the Catholic priests and nuns who oversaw the reservations, and lost their kids to the child-welfare system.” But the Tribe recently won a landmark land claims settlement, and was awarded funds to purchase 150,000 acres of land. This resulted in many displaced tribal members returning to the tribe’s two reservations. Many of these “off-reservation returnees” were disconnected from Passamaquoddy culture. Some had intermarried, and did not speak the Passamaquoddy language. The Passamaquoddy Tribe estimates that 70% of its members could speak the language 30 years ago. Today, only 12% of its 3,600 members are fluent.

Those statistics may soon change. In recent years, technological advances in audio restoration, coupled with Passamaquoddy activism around preserving tribal culture and language, have led to the Library of Congress launching a new project of “digital repatriation” for the Passamaquoddy recordings called “Ancestral Voices.” The project’s goal is to confer curatorial control of the recordings back to the Tribe.

The process of enacting digital repatriations involves both technological and anthropological hurdles. The recordings are first cleaned for clarity of sound. There is still the crackle of age, but the content is now clearly audible and understandable. Next is the assignment of “Traditional Knowledge Labels,” a system developed by Professor Jane Anderson of New York University. Traditional Knowledge Labeling is “designed to identify and clarify which material has community-specific restrictions regarding access and use.”

These labels work to prevent future misuse of indigenous recordings and ensure that sacred material culture stays within the community and is not widely disseminated, as has happened in the past.

Dwayne Tomah transcribing and translating a wax cylinder recording. Photo by Robbie Feinberg/Maine Public.

The transcription, translation and interpretation of the recordings required speakers of the Passamaquoddy language. In an interview with Art Canvas, Dwayne Tomah, a current member of the Passamaquoddy Tribe, described this as an emotional and poignant process:

“I really wept. Hearing their voices. Knowing that I’m probably one of the last fluent speakers on the reservation. And that we’re still continuing this process to be able to revitalize our language and bring it back to life again, so to speak. And give it some attention that it really deserves.”

One of the main results from the digital repatriation was the creation of the Passamaquoddy Peoples’ Knowledge Portal. The entire recording collection can be found here, along with historical context, films, vocabulary guides and photographs. This website provides continuity between the past, present and future of the Tribe, providing a space and access for future Passamaquoddy generations to learn ancestral and traditional knowledge.

Members of the Passamaquoddy tribe dancing during a traditional tribal inauguration ceremony in August 02015. Photo via Island Institute.

“Language is both an embodiment of human culture, as well as the primary means of its maintenance and transmission,” writes Dr. Laura Welcher, a linguist and Director of Long Now’s long-term language archiving Rosetta Project. “When languages are lost, the transmission of traditional culture is often abruptly severed.” In seeking to correct erasures, reverse the extinction of languages, and reconstitute ritual, repatriation projects aim to restore this cultural transmission. Once indigenous people hear the voices of their ancestors, voices previously denied to them, they are empowered to reclaim their own voice in the here and now.

Learn More

  • Explore the Passamaquoddy Peoples’ Knowledge Portal, where you can listen to Fewkes recordings.
  • Read E. Tammy Kim’s piece in The New Yorker on the digital repatriations project. 
  • Learn more about the usage of Traditional Knowledge labels. 

,

LongNowThe role of 80-million year-old rocks in American slavery — Lewis Dartnell at The Interval

When Cretaceous-age rocks in the Southern US eroded over millions of years, they produced a uniquely rich, fertile soil that landowners realized was ideal for growing cash crops such as cotton. It was the soil from these rocks that slaves toiled over in the era of American slavery—and the same ground that ultimately became the epicenter of the Civil Rights Movement.

From Lewis Dartnell’s talk at The Interval, “ORIGINS: How Earth’s history shaped human history.”

About the talk

From the cultivation of the first crops to the founding of modern states, the human story is the story of environmental forces, from plate tectonics and climate change, to atmospheric circulation and ocean currents.

Professor Lewis Dartnell will dive into the planet’s deep past, where history becomes science, to explore a web of connections that underwrites our modern world, and that can help us face the challenges of the future.

About Lewis Dartnell

Lewis Dartnell is a Professor of Science Communication at the University of Westminster. Before that, he completed his biology degree at the University of Oxford and his PhD at UCL, and then worked as the UK Space Agency research fellow at the University of Leicester, studying astrobiology and searching for signs of life on Mars. He has won several awards for his science writing and contributes to the Guardian, The Times, and New Scientist. He is also the author of three books. He lives in London, UK.

,

Sam VargheseTest cricket is becoming a joke

Pakistan look like they will lose by an innings again to Australia, meaning that the two-Test series will end in a wipeout.

The question is: why are so many weak teams coming to Australia and playing matches that end up being hopelessly one-sided, resulting in very few people going to watch them?

Or is it the case that there is no other option given that India cannot come to Australia every year and play?

Pakistan has not played international cricket at home since 2009 when Sri Lanka toured. During that tour, terrorists attacked a bus carrying the Sri Lankans.

Since then, Pakistan has played all its international games in either Dubai or Abu Dhabi. The stadiums are grand but there are only a handful of expats who turn up to watch.

Worse, the fans at home are unable to see their heroes in action and interest in the game has plummeted. This means fewer and fewer kids turning to cricket, and a system which once produced world-class players by the score now hardly produces any.

Pakistan has to make do with what it has. And since it has no choice, the result is washouts like the one it will soon experience in Australia.

,

LongNowExperiencing Deep Time Through Visual Storytelling

Geological Time Spiral

Deep time is a notoriously hard concept to grasp. Our lived human experience is grounded in a timeframe that is at odds with the geological time frame of millions or billions of years. Since geologists began figuring out the true scale of geologic time, they have tried to communicate this scale through a series of metaphors, maps, and visualizations. Famous examples of this include Carl Sagan’s Cosmic Calendar, and for children, Montessori’s Clock of Eras. Advances in mapping and data visualization technologies have enabled new forms of visual storytelling for understanding these time frames. Two visualizations have been recently developed that address the temporal depth and endurance of our universe in novel and effective ways.

Deep Time Walk

Deep Time Walk is an engaging and innovative app that transforms deep time into an embodied experience, a mobile virtual time travel. Listeners plug headphones in and walk the entire journey of Earth’s 4.6-billion-year history in just 4.6 km.

Deep Time Walk app.

Deep Time Walk uses a dramatized dialogue between a questioning protagonist and a patient scientist to explain complex topics in a relatable format. Written as a collaboration between playwright Peter Oswald and Dr. Stephan Harding, Deep Time Walk guides you to walk and encounter evolutionarily significant events, from the emergence of volcanoes to the first appearance of oxygen-producing photosynthesis. You, the listener and walker, are frequently addressed, to check you are still following as you walk 2 million years in just 2 meters.

Narrative is a powerful tool for connection and understanding. By creating a story with relatable characters, Deep Time Walk removes the listener from the present and walks them into the distant past. The conversational (and sometimes poetic) storytelling produces empathy and connection, which works to ground the individual personally into this global enduring epic.

This translation of time into distance creates an effective microcosm by transforming the complexity of 1,000,000 years into the comprehensible and familiar metric of one meter. Through this, Deep Time Walk claims to help users understand ‘the destructive impact we are now having on the Earth’s complex climate in the blink of a geological eye.’

Ancient Earth

Ancient Earth, a temporal map of the world, approaches deep time differently. Built as an interactive tool, Ancient Earth works to visualize the extensive geographic and tectonic shifts of the last 750 million years and maps them comparatively onto the globe of today. Developed by Ian Webster, the curator of the world’s largest digital dinosaur database, with the use of C.R. Scotese’s paleogeographic maps, Ancient Earth captures deep time in physical space, on Earth.

450 million years ago, Late Ordovician era. The pink dot is New York City. (Ian Webster/Ancient Earth)

Ancient Earth catapults the viewer back to the emergence of single-celled organisms, such as green algae, in the Cryogenian ice age 750 million years ago, before leaping ahead 320 million years to the Silurian Period, when mass extinctions coincided with the progression of complex life on land. What is most striking about this ancient world is that it is barely recognizable; it looks disarmingly dark, cold and watery.

Sliding forward to 240 million years ago, the user encounters another equally distorted Earth; one landmass, Pangea, dominated as the singular supercontinent that encompassed the world. The map has plenty of useful features, from simple yet effective dropdown time jump options, such as the evolution of the first flowers, to location-specific searches, enabling users to track the journey of their hometown across both time and space. As Meilan Solly writes in a piece for The Smithsonian, “interested parties can now superimpose the political boundaries of today onto the geographic formations of yesteryear.”

What both Ancient Earth and Deep Time Walk achieve is compelling because of their user experience. Both engage directly with the individual and bring them into a narrative: Deep Time Walk is an embodied experience and drama; Ancient Earth encourages you to map your hometown across the ages. By making it relatable and personal, these apps start to help us conceptualize deep time.

Learn More

  • Watch geologist Marcia Bjornerud’s 02019 Long Now talk about deep time, “Timefulness.”
  • Read “How the Concept of Deep Time is Changing” in The Atlantic.

,

LongNowMove Slow and Preserve Things

La French Tech recently interviewed Long Now Director of Development Nicholas Paul Brysiewicz on the appropriate role of long-term thinking in an increasingly accelerated world.

,

LongNowA Trips Festival for the Digital Age

Leading up to each edition of Sónar is a visual messaging campaign that’s come to be known as the SónarImage. This year’s SónarImage, above, was a short film, ‘Je te tiens’, directed by Sónar co-founder Sergio Caballero.

Two series of radio transmissions are currently beaming through interstellar space — bound, their senders hope, for intelligent life on a distant planet. The transmissions contain 38 encoded pieces of music, each ten seconds in length, created by far-out but nonetheless earth-bound musicians.

At this writing, the first of the transmissions will have recently exited the Oort cloud, an expanse of icy cometary nuclei made of cosmic dust. It is expected to reach its destination, the exoplanet GJ273b, on November 3rd, 02030 – 12.5 years after it was sent from Earth. The second transmission will arrive six months later.

The exoplanet, which orbits Luyten’s Star, appears to meet the necessary conditions to harbor life. If it does, and if it is intelligent life, and if its denizens deign to reply, the soonest Earthlings can hope to hear back is 02043.

For the organizers of Sónar, an arts, design and electronic music festival in Barcelona, Spain, that would constitute perfect timing. The festival partnered with METI (Messaging Extraterrestrial Intelligence) to send the radio transmissions last year for its 25th anniversary celebration, with the hope that it might get a response in time for its 50th.

A satellite in Tromsø, Norway, where the Sónar festival, in partnership with METI, sent radio transmissions to a potentially habitable exoplanet.

A multi-decade project to contact alien life might not seem like typical festival fare. But Sónar isn’t a typical festival. For over a quarter century, it has sought to bridge the worlds of art and technology, the popular and the avant garde, and club culture and cyberculture. Each edition of the festival offers a glimpse of possible futures and frontiers, from the latest technological advances in artificial intelligence and quantum computing to the next trends in music and multimedia art. The music festival is coupled with Sónar+D, a four-day technology and design conference of talks, workshops, immersive experiences, and exhibitions.

Sónar epitomizes what Stanford historian Fred Turner calls a network forum — a place “within which members of multiple communities [can] meet and collaborate and imagine themselves as members of a single community”:

Within the network forum, […] contributors create new rhetorical tools with which to express and facilitate their new collaborations. Network forums need not be confined to media. Think tanks, conferences, even open-air markets—all can serve as forums in which one or more entrepreneurs gather members of multiple networks, allow them to communicate and collaborate, and so facilitate the formation of both new networks and new contact languages.

Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (02006), Chicago University Press, pp. 72–3.
Madeline Gannon teaches a masterclass at Sónar+D 02019 on “The Future of Humans and Machines: Human-Robot Interaction across the arts, sciences, and society.”

José Luis de Vicente, the lead curator of Sónar+D, describes the festival’s curatorial approach as anti-disciplinary.

“Sónar pioneered a model where the lines between musician, visual artist, technologist and sometimes even entrepreneur are really blurred,” de Vicente tells me. “We wanted to be a vessel for people who transition between those spaces.”

But then he pauses — realizing, perhaps, that he’s made too bold a claim.

“You know, there’s a problem in this community where we always think we’re inventing everything from scratch,” he says. “But there’s easily 50 years of tradition in this kind of thing.”


That tradition began, de Vicente says, with the 01966 Trips Festival, a watershed moment in the history of American counterculture that inaugurated the psychedelic sixties. The three-day event, held in San Francisco’s Longshoreman’s Hall, was organized by future Long Now co-founder Stewart Brand, whom de Vicente calls a “godfather of digital culture.” At the time, Brand was part of Ken Kesey’s Merry Pranksters, an outfit of psychedelic enthusiasts who had begun throwing parties (dubbed ‘Acid Tests’) some months prior.

Stewart Brand and Ken Kesey, 01966. California Historical Society.

The Acid Tests were small, haphazard and sketchy affairs, taking place in houses, on beaches, and in small music venues. Attendees helped themselves to LSD served out of trash containers and danced all night under multi-colored lights to the improvised noodlings of an up-and-coming blues band called the Warlocks, soon to be the Grateful Dead.

There was talk amongst the Pranksters of throwing a bigger party — fire a flare into the San Francisco sky and see who comes. But the Pranksters, for all their spontaneity, lacked a certain organizational focus.

“I knew in my heart that we were not going to be able to pull that off,” Brand recalled. “But that it ought to happen.” Brand, along with electronic music composer Ramón Sender, took it upon themselves to make it so. They secured Longshoreman’s Hall as a venue and enlisted the help of concert promoter Bill Graham.

The California Historical Society sets the scene:

Over 10,000 people, many taking LSD, attended the three-day event. Although the event included music, it was not billed as a concert per se. Rather, it was promoted as an immersive and participatory multi-media experience. Virtually the entire Bay Area’s avant-garde arts scene was involved, including the San Francisco Mime Troupe, the Open Theater, the Dancer’s Workshop and the San Francisco Tape Music Center. Yet it was the performances by emerging rock music groups the Grateful Dead and Big Brother and the Holding Company which captured the attention of attendees. It was the first major performance by the Dead in San Francisco, and the combination of the band’s music, the hall’s sound system and the visually captivating light shows over the three days that created a format that would soon dominate the city’s music halls. Bill Graham took over the Fillmore Auditorium for good just two weeks later, and his first weekend was advertised as the “sights and sounds of the Trips Festival.” As Tom Wolfe says in The Electric Kool-Aid Acid Test, “the Haight-Ashbury era began that weekend.” The world would never be the same.

Trips Festival poster. “This is the FIRST gathering of its kind anywhere,” the poster reads. “The TRIP — or electronic performance — is a new medium of communication and entertainment.”

“The Trips Festival was the original event saying a show should be a multi-sensorial experience,” de Vicente says. It was also an early originator of the idea that engineers and artists could work together in fruitful collaboration, a model that drove the San Francisco Bay Area’s transition from counterculture to cyberculture over the second half of the twentieth century. Finally, the Trips Festival introduced the notion of “no spectators,” which Stewart Brand defines as “the idea that an audience shows up to a certain kind of event expecting to do something, not just to see something.” “No Spectators” later became a guiding principle of both rave culture and festivals like Burning Man. 

Another lesser-known but equally pivotal chapter in the multimedia artistic tradition that gave rise to Sónar occurred in New York City, at the 69th Regiment Armory, nine months after the Trips Festival. 9 Evenings: Theatre and Engineering sought to bridge the worlds of art and technology by showcasing collaborations between avant garde artists and engineers from Bell Labs.

Robert Rauschenberg’s “Open Score” performed at 9 Evenings. La Critique.

The project was started by Bell Labs engineer Billy Klüver and graphic artist Robert Rauschenberg, who later founded Experiments in Art and Technology to further explore the artistic possibilities of electric space. In the ten months leading up to 9 Evenings, Bell Labs engineers worked with artists John Cage, Lucinda Childs, Merce Cunningham, Öyvind Fahlström, Alex Hay, Deborah Hay, Steve Paxton, Robert Rauschenberg, David Tudor, and Robert Whitman to create technologies that would enable new forms of artistic expression:

Their collaboration produced many “firsts” in the use of new technology for the theater, both with specially-designed systems and equipment and with innovative use of existing equipment. Closed-circuit television and television projection was used on stage for the first time; a fiber-optics camera picked up objects in a performer’s pocket; an infrared television camera captured action in total darkness; a Doppler sonar device translated movement into sound; and portable wireless FM transmitters and amplifiers transmitted speech and body sounds to Armory loudspeakers.

“9 Evenings didn’t really look like an exhibition in a museum,” de Vicente says. “It looked way more like what Sónar by Night looks like — which is a huge dark hangar with thousands of people watching something that you wouldn’t naturally recognize as a performance.”


Sónar 01997.

Fast forward to 01994. Analog has given way to digital, the “happening” has given way to the rave and the club, and the amplified electricity of psychedelic rock has given way to the thumping bass of electronic dance music.

“DJs were already superstars,” writes music journalist James Davidson, “but the thriving club scene needed its Mecca — and […] it was left to three Catalans to give birth to the festival that now defines its genre.”

Quantum Garden by Aalto University, at SonarHub, 02019.

Sónar was founded by music journalist Ricard Robles and musicians/visual artists Enric Palau and Sergio Caballero. They billed the first gathering in 01994 as the “Festival of Advanced Music and Multimedia Art.” A Record and Technology Fair — what would later evolve into the Sónar+D conference — took place alongside the festival, which was attended by some 6,000 people. (Today, attendance at Sónar has swelled to over 126,000 people, with approximately 6,000 professionals participating in its Sónar+D conference.)

“When Sónar started in the mid 01990s, it contained that element [from the Trips Festival and 9 Evenings] of investigating this spectrum that an event can be both popular and avant garde at the same time,” de Vicente says. “You have this clash of people who would normally be in the space of experimental electronics with the people who are part of the audience of club culture, techno and house, and they mingle in very interesting ways.” 

Over the years, this mingling expanded to include members of research departments in universities and the first hacker spaces, which emerged as new centers of creativity with the rise of the web.

Memo Akten’s Deep Meditations at Sónar+D 02019.

It’s a heady, and at times overwhelming, brew. Days at Sónar start soberly at the Fira Montjuic convention center, with lanyard-donning techies attending panels and talks on quantum computing, artificial intelligence, and the future of the internet; perusing a technology trade fair (dubbed the SónarHub) where hackers hawk their latest prototypes; or experiencing the latest in cutting-edge, immersive multimedia art, like Memo Akten’s Deep Meditations installation, a “slow, meditative, meticulously crafted journey of slowly evolving images and sounds — through the imagination of an artificial neural network trained on everything (literally, images labelled ‘everything’ from the photo sharing website Flickr), as well as other abstract, subjective concepts such as life, love, art, faith, ritual, worship, god, nature, universe, cosmos and many more.”

Attendees help themselves to paella at Sónar 02019.

The character of the event changes markedly in the afternoon, when Sónar by Day begins. Thumping bass echoes through the festival grounds, which now teem with club culture enthusiasts, attendance swelling by the hour. At the SónarVillage, an outdoor pavilion flanked by food trucks and beer stalls, DJs regale the paella-eating masses.

On smaller stages throughout the venue, avant garde artists debut new visions of the ambient frontier, like musician and programmer Holly Herndon’s new show/album PROTO, which she created in collaboration with an artificial intelligence she dubs “Spawn,” or Daito Manabe’s audiovisual experience, which uses an MRI scanner to visualize brain states reacting to the music being played.

A keynote address is delivered in the late afternoon in the convention auditorium. (In 02016, that honor fell to Long Now co-founder Brian Eno.) This year, Robert del Naja, the frontman of Massive Attack, discussed his band’s methodology of combining generative art, music and technology, as well as its recent project of encoding its debut album into DNA that can be sprayed from an aerosol can.

Björk performing at Sónar 02017.

Once darkness falls, attendees flood a different venue across Barcelona, the Fira Gran Via, for Sónar by Night. Tens of thousands of people pack overcrowded, sweaty hangars and dance until dawn, taking occasional breaks to hit the bumper car course. The music here is decidedly more mainstream, and over the years has featured the likes of Daft Punk, Kraftwerk, Björk, and Thom Yorke. DJs dominate the hours after midnight, with some playing six-hour sets. Once the sun rises, Sónar attendees are provided a brief reprieve before the festivities begin again in the late morning.

“Sónar is not a festival that values sleeping,” a Sónar veteran told me at del Naja’s keynote. “But the future is worth staying up for.”


Immersive Hub installation at SonarHub.

In recent years, Sónar has expanded its time horizon of the future to focus on coming centuries and millennia. As humanity grapples with multigenerational challenges like climate change and rapid technological advance, there’s been an increased emphasis on long-term thinking at Sónar — evidenced by both the themes and projects of Sónar and the ideas presented by speakers themselves.

Jay Springett, on stage, left, at a panel on the future of the internet, Sónar 02019.

“There is a deficiency of long-term thinking in western culture,” Jay Springett, a London-based theorist, said at a panel on the future of the internet at this year’s Sónar+D. “It will be vital that we think at multigenerational time depths about everything from internet technologies to tree planting, given the challenges that humanity faces. Our modern world seeks to focus us towards the short-term, and praises quarterly growth. But in the real world, away from high frequency ledger entries and global capital flows, it takes 100–120 years for an oak tree to grow from seed to full canopy height. It takes three human generations to grow a tree. This is real growth. And I’d like to propose that everything that occurs in the duration between the decision to plant an acorn to the tree’s full-grown crown is short-term thinking.”

The site of the SonarCalling transmissions in Tromsø, Norway.

SónarCalling, the festival’s attempt to message extraterrestrials, is Sónar’s most ambitious long-term project to date. It served as an organizing principle for the festival’s 25th anniversary, grounding its lines of inquiry around questions of exploration, messaging, intelligence, and designing for longevity.

“Sónar has always been about exploring and scanning the musical cultures of the planet,” de Vicente says. Broadening Sónar’s scope beyond Earth, de Vicente says, requires thinking in different scales of space — which necessarily implies deeper explorations of time.

Installation of pieces of the 10,000 Year Clock.

To that end, Sónar invited Alexander Rose, Executive Director at Long Now, to speak about the 10,000 Year Clock and what it’s taught Long Now about thinking about problems on millennial time scales. Rose emphasized that the central value of the Clock lies not in the object itself, but in the myth of long-term thinking it can help inspire.

“Some of the truly multi-millennial artifacts we have in civilization are stories,” Rose said, citing the Epic of Gilgamesh as an example. “What we’re really trying to do is build a story. The Clock is the mechanism by which a myth can hopefully be created. If the Clock lasts, great. But if it creates enough of a story, that myth could probably outlast the Clock by thousands of years.”

The engineering challenges in building a 10,000-year object were unprecedented, to be sure. But with the Clock nearing completion, the real challenge, Rose said, is building a 10,000-year institution that protects the Clock and keeps it relevant.

“We’re crossing an interesting time,” Rose said. “By the time we’re about 25 years old — the same age as this festival — we will have finished this very experimental phase, and will move into a phase where this very notional, perfect object that we’ve talked about is now going to be real and in the world and open to criticism. So moving forward, it’s going to be a very different institution, I think, than it has been up to now.”

Sónar finds itself in a similar moment of reflection, now that it has reached its 25-year milestone. “We are trying to create a conversation that can only happen at 25 year intervals,” de Vicente says of the SónarCalling project. “It’s a way of asking what things will be like at the fiftieth edition of the festival.”

José Luis de Vicente, curator of Sónar+D.

Asking what the festival will be like in 25 years is implicitly a question about where things stand today. De Vicente, who was part of that first generation of technologists who started getting online in the early 01990s when the web was undergirded by techno-utopian principles, has lately found himself questioning what kind of art and technology event digital culture needs in its current fraught, polarized moment.

“We have never been in such a critical moment of dissatisfaction, of acknowledgment that a lot of these cultures that we built are not making society a better place,” he says. “It’s hard not to be cynical. But at the same time, it’s pretty exciting. These events are artifacts, they are devices. We’re going to need different kinds of devices for shaping the next stage of digital culture, to recapture that energy of possibility from the early days of the web.”

There’s reason to be optimistic that Sónar, as one of the world’s most forward-looking festivals, will be a leader in shaping what that next stage looks like, as it has been in the past. And that in 02043, if an intelligent civilization from beyond the solar system decides we’re worthy of a response, there’ll be a crowd of artists and technologists dancing to the strange sounds of an avant garde future in the city of Barcelona, eager to receive the message.


Learn More

  • Keep up with the SonarCalling audio transmissions as they make their way across the cosmos.
  • Watch talks from this year’s edition of Sónar+D.
  • Read Long Now’s interview with Sónar curator José Luis de Vicente about the role of art in addressing climate change. 
  • Read Fred Turner’s From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (02006) for more on the history that gave rise to events like Sónar.

,

LongNowThe Size of Space

The “Big Here” doesn’t get much bigger than Neal Agarwal‘s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe. Starting with an astronaut, users can arrow through to different objects, celestial bodies and galaxies, ultimately zooming out to the observable universe.

,

ME4K Monitors

A couple of years ago a relative who uses a Linux workstation I support bought a 4K (4096*2160 resolution) monitor. That meant that I had to get 4K working, which was 2 years of pain for me and probably not enough benefit for them to justify it. Recently I had the opportunity to buy some 4K monitors at a price low enough that it didn’t make sense to refuse, so I got to experience it myself.

The Need for 4K

I’m getting older and my vision is decreasing as expected. I recently got new glasses, including a pair of reading glasses, as a reduced ability to change focus is common as you get older. Unfortunately I made a mistake when specifying the focus distance for the reading glasses: they work well for phones, tablets, and books but not for laptops and desktop computers. Now I have the option of either spending a moderate amount of money on a new pair of reading glasses or just accepting that laptop/desktop use isn’t going to be as good until the next time I need new glasses (sometime in 2021).

I like having lots of terminal windows on my desktop. For common tasks I might need a few terminals open at a time, and if I get interrupted in a task I like to leave its terminal windows open so I can easily go back to it. Having more 80*25 terminal windows on screen increases my productivity. My previous monitor was 2560*1440, which for years had allowed me to have a 4*4 array of non-overlapping terminal windows as well as another 8 or 9 overlapping ones if I needed more. 16 terminals allows me to ssh to lots of systems and edit lots of files in vi. Earlier this year I found it difficult to read the font size that had previously worked well for me, so I had to use a larger font, which meant that only a 3*3 array of terminals would fit on my screen. Going from 16 non-overlapping windows and an optional 8 overlapping to 9 non-overlapping and an optional 6 overlapping is a significant difference. I could get a second monitor, and I won’t rule out doing so at some future time, but it’s not ideal.

When I got a 4K monitor working properly I found that I could go back to a smaller font that allowed 16 non-overlapping windows. So I got a real benefit from a 4K monitor!
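
The window counts are just pixel arithmetic. A quick sketch, assuming an illustrative 8*14 pixel character cell and a 3840*2160 4K panel, and ignoring window decorations:

# How many non-overlapping 80*25 terminals fit on a screen:
awk 'BEGIN { cols = 80; rows = 25; cw = 8; ch = 14
             print "2560*1440 fits", int(2560 / (cols * cw)) "*" int(1440 / (rows * ch))
             print "3840*2160 fits", int(3840 / (cols * cw)) "*" int(2160 / (rows * ch)) }'

With that cell size the old monitor yields the 4*4 grid described above, while the 4K panel has room for a 6*6 grid, leaving headroom even after stepping the font size back up.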

Video Hardware

Version 1.0 of HDMI, released in 2002, only supports 1920*1080 (FullHD) resolution. Version 1.3, released in 2006, supports 2560*1440. Most of my collection of PCIe video cards have a maximum resolution of 1920*1080 over HDMI, so it seems that they only support HDMI 1.2 or earlier. When investigating this I wondered what version of PCIe they were using; the command “dmidecode |grep PCI” gives that information. It seems that at least one PCIe video card supports PCIe 2 (released in 2007) but not HDMI 1.3 (released in 2006).

Many video cards in my collection support 2560*1440 over DVI but only 1920*1080 over HDMI. As 4K monitors don’t accept DVI input, that meant that when I first connected a 4K monitor I was running it at 1920*1080, lower than the 2560*1440 of my old monitor.
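
To cross-check what a given card and connector will actually drive, a couple of standard commands help (a sketch; the output format varies by driver):

# Identify the video card:
lspci | grep -i -E 'vga|display|3d'
# List each connected output and the modes the driver offers:
xrandr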

I found that one of my old video cards supported 4K resolution: it has an NVidia GT630 chipset (here’s the page with specifications for that chipset [1]). It seems that because I have a video card with 2G of RAM I have the “Kepler” variant, which supports 4K resolution. I got the video card in question because it uses PCIe*8 and I had a workstation that only had PCIe*8 slots and I didn’t feel like cutting a card down to size (which is apparently possible but not recommended). It is also fanless (quiet), which is handy if you don’t need a lot of GPU power.

A couple of months ago I checked the cheap video cards at my favourite computer store (MSY) and none of the cheap ones supported 4K resolution. Now it seems that all the video cards they sell could support 4K; by “could” I mean that a Google search of the chipset says that it’s possible, but of course some surrounding chips could fail to support it.

The GT630 card is great for text, but the combination of it with an i5-2500 CPU (rating 6353 according to cpubenchmark.net [3]) doesn’t allow playing Netflix full-screen, and playing 1920*1080 videos scaled to full-screen sometimes produces mplayer messages about the CPU being too slow. I don’t know how much of this is due to the CPU and how much is due to the graphics hardware.

When trying the same system with an ATI Radeon R7 260X/360 graphics card (PCIe*16, and it draws enough power to need a separate connection to the PSU) the Netflix playback appears better but mplayer seems no better.

I guess I need a new PC to play 1920*1080 video scaled to full-screen on a 4K monitor. No idea what hardware will be needed to play actual 4K video. Comments offering suggestions in this regard will be appreciated.

Software Configuration

For GNOME apps (which you will probably run even if, like me, you use KDE for your desktop) you need to run commands like the following to scale menus etc:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "[{'Gdk/WindowScalingFactor', <2>}]"
gsettings set org.gnome.desktop.interface scaling-factor 2

For KDE run the System Settings app, go to Display and Monitor, then go to Displays and Scale Display to scale things.

The Arch Linux Wiki page on HiDPI [2] is good for information on how to make apps work with high DPI (or regular screens for people with poor vision).
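
For Qt applications run outside a full KDE session there is also an environment variable route; a minimal sketch, assuming Qt 5.6 or later (the factor of 2 is illustrative):

# Disable automatic per-screen scaling and force a fixed factor:
export QT_AUTO_SCREEN_SCALE_FACTOR=0
export QT_SCALE_FACTOR=2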

Conclusion

4K displays are still rather painful, both in hardware and software configuration. For serious computer use it’s worth the hassle, but it doesn’t seem to be good for general use yet. 2560*1440 is pretty good and works with much more hardware and requires hardly any software configuration.

,

LongNowLong Now Partners with GitHub on its Long-term Archive Program for Open Source Code

Long Now is pleased to announce that we have partnered with GitHub on its new archive program to preserve open source software for future generations. 

The archive represents a significant step in averting a potential future digital dark age, when much of the software that powers modern civilization could be lost to bit rot. Taking its lessons from past examples where crucial cultural knowledge was lost, such as the Great Library of Alexandria (which was burned multiple times between 48 BCE and 00640 CE) and the Roman recipe for concrete, the GitHub Archive is employing a LOCKSS (“Lots Of Copies Keep Stuff Safe”) approach to preserving open source code for the future. 

“We will protect this priceless knowledge by storing multiple copies, on an ongoing basis, across various data formats and locations,” GitHub says, “including a very-long-term archive designed to last at least 1,000 years.” That long-term archive is the GitHub Arctic Code Vault, in the Arctic World Archive in Svalbard, Norway—an archival facility 250 meters beneath the Arctic permafrost. The Arctic World Archive is adjacent to the Svalbard Global Seed Vault, and aims to preserve the world’s data in much the same way the Seed Vault preserves plant seeds. GitHub intends to store every public GitHub repository on film reels coated with iron oxide powder, which should remain readable for 1,000 years using either a computer or a magnifying glass. Those who wish to add their code to the vault have until February 2nd, 02020 to do so. At that point, GitHub will take a snapshot of every public repository, and add it to the storage vault. GitHub plans to update the library every 5+ years.

Microsoft Research’s Project Silica storage device.

Another archival method is Microsoft Research’s newly-announced Project Silica quartz glass. Similar to the Rosetta Disk, Project Silica is designed to be a durable, long-term storage device.

Femtosecond lasers “encode data in [the] glass by creating layers of three-dimensional nanoscale gratings and deformations at various depths and angles,” Microsoft Research said in a press release. “Machine learning algorithms read the data back by decoding images and patterns that are created as polarized light shines through the glass.” GitHub intends to archive all public repositories on Microsoft’s Project Silica, which it believes could last for over 10,000 years. Like the Arctic Code Vault, GitHub plans to update the library every 5+ years.

Stewart Brand’s Pace Layers.

The GitHub archive program has adopted Long Now co-founder Stewart Brand’s pace layers framework for their code-archiving strategy. “This approach,” says GitHub, “is designed to maximize both flexibility and durability by providing a range of storage solutions, from real-time to long-term storage.”

GitHub’s Pace Layers approach to code-archiving.

Brand’s fast and slow layers are reconceptualized as hot, warm and cold. The hot layers (GitHub, GitHub Torrent, and GitHub archive) update in near-real time. The warm layers (the Internet Archive and the Software Heritage Foundation) update monthly to yearly. The cold layers (Oxford University’s Bodleian Library, the Arctic World Archive in Svalbard, and Microsoft Research’s Project Silica storage) update every five plus years. 

To ensure the future can use the software in its archive, GitHub has convened an Archive Program advisory panel of experts in technology and the humanities, including Long Now Executive Director Alexander Rose. The archive will include technical guides and a Tech Tree — “a roadmap and Rosetta Stone for future curious minds inheriting the archive’s data.”

An overview of the archive and how to use it, the Tech Tree will serve as a quickstart manual on software development and computing, bundled with a user guide for the archive. It will describe how to work backwards from raw data to source code and extract projects, directories, files, and data formats.

Inspired by Long Now’s Manual For Civilization, the archive will also include information on how to rebuild technologies from scratch.  

“It’s our hope,” GitHub says, “that [the Archive] will, both now and in the future, further publicize the worldwide open source movement; contribute to greater adoption of open source and open data policies worldwide; and encourage long-term thinking.”

,

LongNowHow Salvaging Ancient Shipwrecks Might Lead us to Unveil the Mystery of Dark Matter

Shipwreck of the Mandalay. Source: NPS.

In a Long Now talk on dark matter and dark energy, theoretical astrophysicist Priyamvada Natarajan said that “we simultaneously know quite a lot, and not a lot” about the key ingredients of our universe. What we do know is that dark energy makes up about 68% of the universe, with dark matter comprising about 27%. Dark matter is extremely pervasive but equally elusive. To date, scientists have been unsuccessful in directly detecting dark matter; they only know of its existence through its gravitational effects on the matter we can see.

More information about dark matter could help answer fundamental cosmological questions: How do galaxies hold together without flying apart? Is the universe still expanding or not?

3D map of the large-scale distribution of dark matter. Source: NASA/ESA/Richard Massey (California Institute of Technology)

In order to conduct successful dark matter experiments, very specific environments and criteria need to be met — the most important being a lack of radiation. Dark matter detectors are far more sensitive to radiation than living beings. In order to minimize interference, external radiation, and in particular gamma rays, needs to be blocked. The most successful shield known is lead. But any lead that dates after 5:29 a.m. on July 16, 01945 — the world’s first ever nuclear device detonation — will be contaminated, and will not have low enough levels of radiation to act effectively as a block. What researchers need, then, is antique lead. The best place to find it is at the bottom of the ocean.

Many shipwrecks, some of which sank over 800 years ago, carried cargos of lead. Most often found in European waters, shipwrecked lead was originally forged into construction materials, weapons, or coins, and sometimes dates back all the way to Ancient Rome. Sunken, ancient lead is not only shielded under the water from cosmic rays, but after centuries of stagnation its unstable lead-210 isotope will have decayed into stable lead-206. This rare and hard-to-source metal is, according to Chamkaur Ghag, a physicist at University College London, “sort of like gold dust.”
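
The “gold dust” scarcity follows from simple exponential decay; a rough illustration, assuming a lead-210 half-life of about 22.3 years:

# Fraction of the original lead-210 remaining after 800 years under water:
awk 'BEGIN { printf "%.1e\n", 2 ^ (-800 / 22.3) }'    # prints ~1.6e-11

Essentially none of the unstable isotope survives, which is why centuries-old lead is so radiologically quiet.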

However, there have been many debates around the ethics of disturbing archeological sites in pursuit of scientific knowledge. Many of the shipwrecks are cultural sites of historical importance, preserved and protected under the sea. The question is whether it is worth sacrificing historical artifacts in order to do cutting-edge science that advances dark matter research.

This issue is further complicated by scarcity. Shipwrecked lead is finite, and much of the salvaged metal is auctioned off to both the scientific and the commercial sector. Low-background lead is essential for the production of microchips for computers and smartphones. Newly-produced low-background lead is available; however, sunken ancient lead is much cheaper, resulting in competition from the microelectronics industry.

While the commercial sector has a choice in the matter of using ancient versus newly-produced low background lead, scientists running dark matter experiments argue that they do not. A 02015 paper found that “freshly mined lead is naturally polluted by radioactive elements such as nuclei from the uranium, thorium and actinium decay chains,” and concluded that “a source of lead with low 210Pb content (low-alpha lead) is a sine qua non condition for the successful outcome of direct searches [for dark matter]. These low levels of emission cannot be achieved by modern lead manufacturing capabilities.”

In a piece for The Atlantic, Robin George Andrews interviewed scientists who emphasized that the scientific approach towards using shipwrecked lead was cautious, limited, and mindful of ethical considerations:

[Fernando] Gonzalez-Zalba explains that the Romans produced about 88,000 tons of lead each year, and many experiments require only a tiny fraction of this. Scientists, he says, are also increasingly aware of and sensitive to the ethical dilemmas surrounding the extraction of low-background materials.

Particle physicists should keep cultural heritage and the origins of their materials front of mind, [Alan] Duffy says. But he emphasizes that low-background material is “certainly treated” as a precious resource and not used without consideration.

“More than ever, our worldview and our understanding of the cosmos hinges on unseen quantities,” Natarajan said in her Long Now talk. With careful balance between ethics and application, shipwrecked lead might one day help scientists make the unseen quantity of dark matter more visible.


Learn More

  • Read “Why the Search for Dark Matter Depends on Ancient Shipwrecks” by Robin George Andrews in The Atlantic.
  • Watch Priyamvada Natarajan’s 02016 Long Now Seminar, “Solving Dark Matter and Dark Energy.”
  • Read “The role of underwater cultural heritage on dark matter searches: Ancient lead, a dual perspective” by E. Perez-Alvaro and M.F. Gonzalez-Zalba in Ocean & Coastal Management, Volume 103, January 02015, Pages 56–62 (subscription required). 

,

LongNowFormer Secretary of Defense Jim Mattis on Long-term Thinking

In a new piece in The Atlantic, former Secretary of Defense James Mattis invokes long-term thinking as a necessary but increasingly forgotten principle of American democracy:

Acting wisely means acting with a time horizon not of months or years but of generations. Short-term thinking tends toward the selfish: Better get mine while I can! Long-term thinking plays to higher ideals. Thomas Jefferson’s idea of “usufruct”—in his metaphor, the responsibility to preserve fertile topsoil from landowner to landowner—embodied an obligation of stewardship and intergenerational fairness. Our Founders thought in centuries. Such thinking discourages shortsighted temptations (such as passing an immense burden of national debt onto our descendants) and encourages the effective management of intractable problems. It conditions us to take heart from the slow accretion of small improvements—the slow accretion that gave us paved roads, public schools, and electrification. I remember being a boy in Washington State and the sense of wonder I felt as bridges replaced ferries on the Columbia River. I remember my grandfather pointing out new power lines extending into our rural part of the state. I think often of the long history of nuclear-arms control. Steady diplomatic engagement with Moscow over five decades—pursued until recently—ultimately gave us an approximately three-quarters reduction in nuclear arsenals, and greater security. Here’s the not-so-secret recipe, applicable to members of Congress and community activists alike: Set a strategic goal and keep at it. Former Secretary of State George Shultz, using his own Jeffersonian metaphor, likened the effort to gardening: a continual, never-ending process of tilling, planting, and weeding.

Jim Mattis, “The Enemy Within,” in The Atlantic.


,

LongNowAmerican Infrastructure’s “Technical Debt”

The wind-driven Kincade fire burns near the town of Healdsburg, California, U.S., October 27, 2019. REUTERS/Stephen Lam

With fires burning in California again, Alexis Madrigal has written a piece in The Atlantic on the technical debt embedded in America’s infrastructure:

A kind of toxic debt is embedded in much of the infrastructure that America built during the 20th century. For decades, corporate executives, as well as city, county, state, and federal officials, not to mention voters, have decided against doing the routine maintenance and deeper upgrades to ensure that electrical systems, roads, bridges, dams, and other infrastructure can function properly under a range of conditions. Kicking the can down the road like this is often seen as the profit-maximizing or politically expedient option. But it’s really borrowing against the future, without putting that debt on the books.

Alexis Madrigal, “The Toxic Bubble of Technical Debt Threatening America,” in The Atlantic.

,

MEKMail Crashing and LIBGL

One problem I’ve had recently on two systems with NVidia video cards is KMail crashing (SEGV) while reading mail. Sometimes it goes for months without problems, and then it gets into a state where reading a few messages (or sometimes reading one particular message) causes a crash. The crash happens somewhere in the Mesa library stack.

In an attempt to investigate this I tried running KMail via ssh (as that precludes a lot of the GL stuff), but that crashed in a different way (I filed an upstream bug report [1]).

I have discovered a workaround for this issue: if I set the environment variable LIBGL_ALWAYS_SOFTWARE=1 then things work. At this stage I can’t be sure exactly where the problems are. As it’s certain KMail operations that trigger it, I think that’s evidence of problems originating in KMail, but the end result when it happens often includes a kernel error log, so there’s probably a problem in the Nouveau driver too. I spent quite a lot of time investigating this, including recompiling most of the library stack in debugging mode, and didn’t get much of a positive result. Hopefully putting it out there will help the next person who has such issues.
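
One way to apply the workaround to KMail alone is a small wrapper script earlier in the PATH; a sketch (the file locations are illustrative):

#!/bin/sh
# Hypothetical ~/bin/kmail wrapper: force Mesa software rendering
# for KMail only, leaving other applications on the GPU.
LIBGL_ALWAYS_SOFTWARE=1 exec /usr/bin/kmail "$@"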

Here is a list of environment variables that can be set to debug LIBGL issues (strangely I couldn’t find documentation on this when Googling it). If you are stuck with a problem related to LIBGL you can try setting each of these to “1” in turn and see if it makes a difference; a loop for doing that follows the list. That can be either for debugging a problem or for creating a workaround that allows you to run the programs you need to run. I don’t know why GL is required to read email.

LIBGL_DIAGNOSTIC
LIBGL_ALWAYS_INDIRECT
LIBGL_ALWAYS_SOFTWARE
LIBGL_DRI3_DISABLE
LIBGL_NO_DRAWARRAYS
LIBGL_DEBUG
LIBGL_DRIVERS_PATH
LIBGL_DRIVERS_DIR
LIBGL_SHOW_FPS
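
A sketch of such a loop over the on/off style variables (LIBGL_DRIVERS_PATH and LIBGL_DRIVERS_DIR expect directory paths and LIBGL_DEBUG takes values like “verbose”, so those are better set by hand):

#!/bin/sh
# Try each variable in turn with the problem program and note which
# ones change the behaviour.
for var in LIBGL_DIAGNOSTIC LIBGL_ALWAYS_INDIRECT LIBGL_ALWAYS_SOFTWARE \
           LIBGL_DRI3_DISABLE LIBGL_NO_DRAWARRAYS LIBGL_SHOW_FPS; do
  echo "=== $var=1 ==="
  env "$var=1" kmail
done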

,

Rondam RamblingsThoughts on a power outage: what worked, what didn't

We lost our electrical power for 45 hours as a result of the wildfires and high winds in Northern California.  This is by far the longest power outage we have ever had to endure, and we learned a lot about how to deal with them.  I thought I'd share some of the lessons.

What Worked

This outage was a lot easier to deal with than it might have been because we had a lot of warning.  PG&E started